A Web-Server in Forth

Bernd Paysan

Abstract:

An HTTP-Server in Gforth is presented as an opportunity to show that you can do string-oriented things with Forth as well. The development time (a few hours) shows that Forth is an appropriate tool for this kind of work and delivers fast results. This is a translation of the paper presented at the Forth Tagung 2000 conference in Hamburg (proofreading and corrections by Chris Jakeman).

Since I have given bigFORTH/MINOS-related presentations in recent years, this time I'll do something with Gforth. Gforth, too, is a tool you can do neat things with, and in contrast to what you hear elsewhere, Forth is suitable for almost anything. Even a web server.

In this age of the ``new economy'', the Internet is important. Everybody is ``in there'' except Forth, which hides in its embedded-control niche. There is no serious reason for that. The following code was created in just a few hours of work and mostly operates on strings. The old prejudice that Forth is good at bit-twiddling but has trouble with strings is thus disproved.

What do you need a web-server for in Forth? Forth is used for measurement and control in remote locations such as the sea-bed or the crater of a volcano. Less remotely, Forth may be used in a refrigerator and, if that stops working, things soon get messy. So a communication thingy is built in.

How much better it would be if, instead of ``some communication thingy'', a standard protocol were built in. HTTP is accessible from the web cafe in Mallorca, or from mobile yuppie toys such as PDAs or cell phones. Perhaps one should build such a web server into every stove and every bath, so that people on holiday can use their cell phone to check repeatedly (every three minutes?) whether they really turned the stove off.

Anyway, the customer, the boss, or whoever buys the product wants to hear that there is some Internet-thingy built in, especially if one isn't in e-business already. And the cost must be zero, too.

But let's take this slowly, step by step.

First of all, one would have to study the RFC documents. The RFCs in question are RFC 1945 (HTTP/1.0) and RFC 2068 (HTTP/1.1), which both refer to further RFCs. Since these documents alone are much longer than the source code presented below (and reading them would take longer than writing the sources), we defer that for later. The web server thus won't be 100% RFC-conformant (i.e. implement all features); it conforms only as far as necessary for a typical client like Netscape. Additions, however, are easy to make.

A typical HTTP request looks like this:

  GET /index.html HTTP/1.1
  Host: www.paysan.nom
  Connection: close

The request is terminated by an empty line. The server's answer has the same shape: a status line and header fields, then an empty line, then the document itself:

  HTTP/1.1 200 OK
  Date: Tue, 11 Apr 2000 22:27:42 GMT
  Server: Apache/1.3.12 (Unix) (SuSE/Linux)
  Connection: close
  Content-Type: text/html

  <HTML>
  ...

We don't have to handle the network ourselves: the Internet daemon inetd accepts the connections and starts our server with the socket as standard input and output. The service is registered in /etc/inetd.conf:

  # Gforth web server
  gforth stream tcp nowait.10000 wwwrun /usr/users/bernd/bin/httpd

its port is assigned in /etc/services:

  gforth 4444/tcp # Gforth web server

and the daemon is told to reread its configuration with killall -HUP inetd.
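Under inetd the server never touches sockets: the daemon hands the finished connection over as standard input and output, so one request is just text in, text out. A minimal Python sketch of that contract (the function name and the canned body are made up for illustration; a real inetd service would read sys.stdin and write sys.stdout):

```python
def handle(request_text):
    """Answer one request in the style of the Forth server: the first
    line carries method, path and protocol; header lines follow up to
    an empty line.  Names and body here are illustrative only."""
    lines = request_text.splitlines()
    method, path, protocol = lines[0].split()
    body = "<HTML>Hello</HTML>\r\n"
    # the Forth server answers GET, POST and HEAD; anything else is a 405
    status = "200 OK" if method in ("GET", "POST", "HEAD") else "405 Method Not Allowed"
    header = ("HTTP/1.0 %s\r\n"
              "Content-Type: text/html\r\n"
              "Content-Length: %d\r\n\r\n" % (status, len(body)))
    # HEAD gets the header only, exactly like the data flag below
    return header + ("" if method == "HEAD" else body)
```

Writing the function over strings instead of streams keeps it testable offline; inetd merely supplies the plumbing.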

The server itself is a Gforth script. As with any Unix script, the first line starts with #! and names the interpreter:

  #! /usr/local/bin/gforth

  warnings off
  include string.fs

We switch off warnings, load the string library, and declare the variables that hold the parsed request:

Variable url \ stores the URL (string)

Variable posted \ stores arguments of POST (string)

Variable url-args \ stores arguments in the URL (string)

Variable protocol \ stores the protocol (string)

Variable data \ true, when data is returned

Variable active \ true for POST

Variable command? \ true in the request line

Since we can process a request only once the whole header has been parsed, we save all the information first. For that we define two small words: get stores the next word of the input in a string variable, and get-rest stores the rest of the line:

: get ( addr -- )  name rot $! ;
: get-rest ( addr -- )
  source >in @ /string  dup >in +!  rot $! ;

The request commands and the header keywords each get a wordlist of their own:

wordlist constant values
wordlist constant commands

\ HTTP URL rework



: rework-% ( addr -- ) { url } base @ >r hex
  0 url $@len 0 ?DO
    url $@ drop I + c@ dup '% = IF
      drop 0. url $@ I 1+ /string
      2 min dup >r >number r> swap - >r 2drop
    ELSE 0 >r THEN  over url $@ drop + c! 1+
  r> 1+ +LOOP  url $!len
  r> base ! ;
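rework-% decodes the %xx escapes in place: each '%' is followed by up to two hex digits, which >number converts (in base 16) to a single character. The same transformation in Python (function name invented; well-formed escapes are assumed):

```python
def rework_percent(url):
    """Decode %xx escapes the way rework-% does: scan the string and,
    at each '%', read up to two hex digits as one character."""
    out = []
    i = 0
    while i < len(url):
        c = url[i]
        if c == '%':
            hex_digits = url[i + 1:i + 3]      # at most two hex digits
            out.append(chr(int(hex_digits, 16)))
            i += 1 + len(hex_digits)
        else:
            out.append(c)
            i += 1
    return ''.join(out)
```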

: rework-? ( addr -- )
  dup >r $@ '? $split url-args $! nip r> $!len ;
: >values  values 1 set-order  command? off ;
: get-url ( -- )  url get  protocol get-rest
  url rework-?  url rework-%  >values ;

commands set-current

: GET   get-url data on  active off ;
: POST  get-url data on  active on ;
: HEAD  get-url data off active off ;

The other header keywords all behave alike: each stores the rest of its line in a string variable. That calls for a defining word, a classic CREATE ... DOES> construct. We also want an ordinary Variable of the same name, minus the trailing colon, to hold the value. Fortunately, Gforth provides nextname, an appropriate tool for this. We construct exactly the name we need and call Variable and Create afterwards.

: value: ( -- )  name
  definitions 2dup 1- nextname Variable
  values set-current  nextname here cell - Create ,
  definitions  DOES> @ get-rest ;

value: User-Agent:
value: Pragma:
value: Host:
value: Accept:
value: Accept-Encoding:
value: Accept-Language:
value: Accept-Charset:
value: Via:
value: X-Forwarded-For:
value: Cache-Control:
value: Connection:
value: Referer:
value: Content-Type:
value: Content-Length:

Now we must parse the request. This ought to be completely trivial: we could just let the Forth text interpreter chew on it, but for one little caveat. Each line ends with CR LF, while Gforth under Unix expects lines to end with LF alone, so the CR must be removed. Moreover, the header ends with an empty line, not with some executable Forth word. We therefore read line by line with refill, strip the CR from the end of each line, and check whether the line was empty.

Variable maxnum

: ?cr ( -- )
  #tib @ 1 >= IF  source 1- + c@ #cr = #tib +!  THEN ;
: refill-loop ( -- flag )
  BEGIN  refill ?cr  WHILE  interpret >in @ 0= UNTIL
  true  ELSE  maxnum off false  THEN ;

The word get-input reads and interprets the whole header, redirecting the input source just as INCLUDED does, with push-file and pop-file. The first line is interpreted with the commands wordlist in effect; the command words then switch to values for the remaining lines. After a POST command the browser sends the form contents; their size is given by Content-Length:, so exactly that many characters are read into posted:

: get-input ( -- flag ior )
  s" /nosuchfile" url $!  s" HTTP/1.0" protocol $!
  s" close" connection $!
  infile-id push-file loadfile !  loadline off  blk off
  commands 1 set-order  command? on  ['] refill-loop catch
  active @ IF  s" " posted $!  Content-Length $@ snumber? drop
    posted $!len  posted $@ infile-id read-file throw drop
  THEN  only forth also  pop-file ;

OK, we've handled a request; now we must respond. The path of the URL is unfortunately not yet the path we want. We would like to be somewhat Apache-compatible, i.e. there is a ``global document root'', and in addition each user has a directory in his home directory where he can put his personal home page. So we have to look at the URL once more, and finally check whether the requested file actually exists:

Variable htmldir

: rework-htmldir ( addr u -- addr' u' / ior )
  htmldir $!
  htmldir $@ 1 min s" ~" compare 0=
  IF    s" /.html-data" htmldir dup $@ 2dup '/ scan
        nip - nip $ins
  ELSE  s" /usr/local/httpd/htdocs/" htmldir 0 $ins  THEN
  htmldir $@ 1- 0 max + c@ '/ =  htmldir $@len 0= or
  IF  s" index.html" htmldir dup $@len $ins  THEN
  htmldir $@ file-status nip ?dup ?EXIT
  htmldir $@ ;
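In effect, rework-htmldir maps /~user/... into the user's .html-data directory, puts everything else under the document root /usr/local/httpd/htdocs/, and appends index.html to directory requests. A Python rendering of that mapping (function name invented; the file-status existence check is left out):

```python
DOCROOT = "/usr/local/httpd/htdocs/"   # document root from the Forth code

def rework_htmldir(path):
    """Map a URL path (leading '/' already stripped) to a file name
    the way rework-htmldir does."""
    if path.startswith("~"):
        # '~user/rest' -> '~user/.html-data/rest'
        if "/" in path:
            user, rest = path.split("/", 1)
            path = user + "/.html-data/" + rest
        else:
            path = path + "/.html-data"
    else:
        path = DOCROOT + path
    # a directory (or the empty path) gets index.html appended
    if path.endswith("/"):
        path += "index.html"
    return path
```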

: >mime ( addr u -- mime u' )  2dup tuck over + 1- ?DO
  I c@ '. = ?LEAVE  1- -1 +LOOP  /string ;
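>mime scans backwards for the last dot and returns what follows it, the extension, which later selects the handler word from the mime wordlist. In Python (name invented; the behaviour for names without a dot is an assumption of this sketch):

```python
def mime_key(name):
    # like >mime: everything after the last '.' selects the handler;
    # a dot-less name is returned whole (an assumption, see above)
    return name.rsplit(".", 1)[-1]
```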

: >file ( addr u -- size fd )
  r/o bin open-file throw >r
  r@ file-size throw drop
  ." Accept-Ranges: bytes" cr
  ." Content-Length: " dup 0 .r cr  r> ;

: transparent ( size fd -- ) { fd }
  $4000 allocate throw  swap dup 0 ?DO
    2dup over swap $4000 min fd read-file throw type
    $4000 - $4000 +LOOP  drop
  free fd close-file throw throw ;

The word transparent copies a file to the client unchanged, with TYPE. Whether the connection may stay open afterwards is decided by .connection: only if the client asked for Keep-Alive and the request counter maxnum is not yet used up do we keep it alive; otherwise we announce that the connection closes:

: .connection ( -- )
  ." Connection: "
  connection $@ s" Keep-Alive" compare 0= maxnum @ 0> and
  IF  connection $@ type cr
      ." Keep-Alive: timeout=15, max=" maxnum @ 0 .r cr
      -1 maxnum +!
  ELSE  ." close" cr  maxnum off  THEN ;

For every MIME type that is transferred verbatim, a word is defined with transparent:. Each such word prints the remaining header lines, including its stored Content-Type, and then, unless the request was HEAD, sends the file through transparent:

: transparent: ( addr u -- )  Create here over 1+ allot place
  DOES> >r >file
  .connection
  ." Content-Type: " r> count type cr cr
  data @ IF  transparent  ELSE  nip close-file throw  THEN ;

We don't have to type in all the file extensions ourselves: the system keeps a table of them in /etc/mime.types, and mime-read defines a transparent: word for every extension listed there:

: mime-read ( addr u -- )  r/o open-file throw
  push-file loadfile !  0 loadline !  blk off
  BEGIN  refill  WHILE  name
    BEGIN  >in @ >r name nip  WHILE
      r> >in !  2dup transparent:  REPEAT
    2drop rdrop
  REPEAT  loadfile @ close-file  pop-file throw ;

: lastrequest
  ." Connection: close" cr  maxnum off
  ." Content-Type: text/html" cr cr ;

Two cases are special. Files ending in shtml are server-parsed HTML: they are simply interpreted with included, which is how the active content below works. And files with an unknown extension are delivered as text/plain. The type words live in their own wordlist:

wordlist constant mime
mime set-current

: shtml ( addr u -- )  lastrequest
  data @ IF  included  ELSE  2drop  THEN ;

s" application/pgp-signature" transparent: sig
s" application/x-bzip2" transparent: bz2
s" application/x-gzip" transparent: gz
s" /etc/mime.types" mime-read

definitions

s" text/plain" transparent: txt

Sometimes a request goes wrong. We must be prepared for that and respond to the client with an appropriate error message. The client wants to know which protocol we speak, what happened (or whether everything is OK), who we are, and, in the error case, an error report in plain text (coded in HTML) would be nice:

: .server ( -- )  ." Server: Gforth httpd/0.1 ("
  s" os-class" environment? IF  type  THEN  ." )" cr ;
: .ok ( -- )  ." HTTP/1.1 200 OK" cr .server ;
: html-error ( n addr u -- )
  ." HTTP/1.1 " 2 pick . 2dup type cr .server
  2 pick &405 = IF  ." Allow: GET, HEAD, POST" cr  THEN
  lastrequest
  ." <HTML><HEAD><TITLE>" 2 pick . 2dup type
  ." </TITLE></HEAD>" cr
  ." <BODY><H1>" type drop ." </H1>" cr ;
: .trailer ( -- )
  ." <HR><ADDRESS>Gforth httpd 0.1</ADDRESS>" cr
  ." </BODY></HTML>" cr ;

: .nok ( -- )  command? @ IF  &405 s" Method Not Allowed"
  ELSE  &400 s" Bad Request"  THEN  html-error
  ." <P>Your browser sent a request that this server "
  ." could not understand.</P>" cr
  ." <P>Invalid request in: <CODE>"
  error-stack cell+ 2@ swap type
  ." </CODE></P>" cr  .trailer ;
: .nofile ( -- )  &404 s" Not Found" html-error
  ." <P>The requested URL <CODE>" url $@ type
  ." </CODE> was not found on this server</P>" cr  .trailer ;

We are almost done now. We simply glue all the pieces together to process a request in sequence: fetch the input, rework the URL, determine the MIME type, and deliver the file, with error exits for bad requests and missing files. The output must be flushed, so that the next request doesn't stall. And all of that is repeated until the last request has been served.

: http ( -- )  get-input IF  .nok  ELSE
  IF  url $@ 1 /string rework-htmldir
    dup 0< IF  drop .nofile
    ELSE  .ok 2dup >mime mime search-wordlist
      0= IF  ['] txt  THEN  catch IF  maxnum off  THEN
  THEN THEN THEN  outfile-id flush-file throw ;

: httpd ( n -- )  maxnum !
  BEGIN  ['] http catch  maxnum @ 0= or  UNTIL ;

script? [IF]  :noname &100 httpd bye ; is bootmessage  [THEN]

As a special bonus, we can process active content. That's really simple: we write our HTML file as usual and enclose the embedded Forth code between <$ and $>. The word $> types everything up to the next <$ and leaves what follows to the Forth interpreter; to get the whole thing started, the file begins with <HTML>, a Forth word that prints itself and then scans on:

: $> ( -- )
  BEGIN  source >in @ /string s" <$" search 0= WHILE
    type cr refill 0= UNTIL  EXIT  THEN
  nip source >in @ /string rot - dup 2 + >in +! type ;
: <HTML> ( -- )  ." <HTML>" $> ;
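The scanner thus alternates between two modes: literal text up to the next <$ is printed, and the stretch between <$ and $> is handed to the interpreter. A Python sketch of that alternation over a whole template string (names invented; well-formed <$ ... $> pairs assumed, whereas the Forth version works line by line with refill):

```python
def process_template(text, evaluate, emit):
    """Alternate between literal text and embedded code, as the
    <HTML> / $> pair does: text up to '<$' is emitted, code up to
    '$>' is evaluated."""
    pos = 0
    while True:
        start = text.find('<$', pos)
        if start < 0:                 # no more code: emit the tail
            emit(text[pos:])
            return
        emit(text[pos:start])
        end = text.find('$>', start)  # assumed present (well-formed input)
        evaluate(text[start + 2:end])
        pos = end + 2
```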

A small example page, which computes prime numbers on the server:

<HTML>
<HEAD>
<TITLE>GForth <$ version-string type $> presents</TITLE>
</HEAD>
<BODY>
<H1>Computing Primes</H1><$ 25 Constant #prim $>
<P>The first <$ #prim . $> primes are: <$
: prim?  0 over 2 max 2 ?DO  over I mod 0= or  LOOP  nip 0= ;
: prims ( n -- )  0 swap 2
  swap 0 DO  dup prim? IF  swap IF  ." , "  THEN  true swap
    dup 0 .r 1+ 1  ELSE  1+ 0  THEN
  +LOOP  drop ;
#prim prims $> .</P>
</BODY>
</HTML>

That was a few hundred lines of code, which is far too much. What I have delivered is an ``almost'' complete Apache clone. That won't be necessary on the sea-bed or in the refrigerator. Error handling is ballast, too. And if you restrict yourself to a single connection (performance isn't the goal), you can drop all the protocol variables. One MIME type (text/html) is sufficient; we keep the images on another server. So there is hope that a working HTTP protocol with server-side scripting can fit into a single screen of source code.

Certainly we need some string functions; it doesn't work without them. The following string library stores strings in ordinary variables, each of which contains a pointer to a counted string allocated on the heap. Instead of a count byte, there is a whole count cell, sufficient for all normal uses. The string library originates from bigFORTH, and I have ported it to Gforth (ANS Forth). Now let us look at the functions in detail. First, two words that bigFORTH already provides:

: delete ( addr u n -- )
  over min >r  r@ - ( left over )  dup 0>
  IF  2dup swap dup r@ + -rot swap move  THEN  + r> bl fill ;

delete removes the first n characters from a buffer of size u, shifts the remainder to the front, and fills the space freed at the end with blanks.

: insert ( string length buffer size -- )
  rot over min >r  r@ - ( left over )
  over dup r@ + rot move  r> move ;

insert inserts a string at the front of a buffer, first shifting the buffer's contents towards the end.

Now we can really start:

: $padding ( n -- n' )
  [ 6 cells ] Literal + [ -4 cells ] Literal and ;

$padding computes the allocation size for a string of length n: rounded up to a multiple of four cells, with room for the count cell.

: $! ( addr1 u addr2 -- )
  dup @ IF  dup @ free throw  THEN
  over $padding allocate throw over ! @
  over >r rot over cell+ r> move  2dup ! + cell+ bl swap c! ;

$! stores the string addr1 u in the string variable addr2; a previously stored string is freed first, and a blank is placed behind the copied string.

: $@ ( addr1 -- addr2 u )  @ dup cell+ swap @ ;

$@ returns the string stored in a string variable.

: $@len ( addr -- u )  @ @ ;

$@len returns only its length.

: $!len ( u addr -- )
  over $padding over @ swap resize throw  over ! @ ! ;

$!len changes the length of the stored string, resizing the allocated region accordingly.

: $del ( addr off u -- )  >r >r dup $@ r> /string r@ delete
  dup $@len r> - swap $!len ;

$del deletes u characters at offset off inside the string and shortens it by that amount.

: $ins ( addr1 u addr2 off -- )  >r
  2dup dup $@len rot + swap $!len  $@ 1+ r> /string insert ;

$ins inserts the string addr1 u at offset off into the string variable addr2.

: $+! ( addr1 u addr2 -- )  dup $@len $ins ;

$+! appends a string, i.e. inserts it at the very end.

: $off ( addr -- )  dup @ free throw  off ;

$off frees the stored string and clears the variable.

As a bonus, there are functions to split strings up.

: $split ( addr u char -- addr1 u1 addr2 u2 )
  >r 2dup r> scan  dup >r dup IF  1 /string  THEN
  2swap r> - 2swap ;

$split divides a string at the first occurrence of char; the delimiter itself belongs to neither part.

: $iter ( .. $addr char xt -- .. ) { char xt }
  $@ BEGIN  dup  WHILE  char $split >r >r xt execute r> r>
  REPEAT  2drop ;

$iter takes the stored string apart at every char and executes xt once for each piece.
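$split and $iter translate almost one-to-one into other languages: split at the first occurrence of a character, and apply a function to every delimited field. A Python rendering (function names invented; semantics taken from the Forth definitions):

```python
def dollar_split(s, ch):
    """Like $split: the part before the first ch and the part after
    it; if ch is absent, the second part is empty."""
    head, _, tail = s.partition(ch)
    return head, tail

def dollar_iter(s, ch, fn):
    """Like $iter: call fn on every ch-delimited field of s."""
    while s:
        field, s = dollar_split(s, ch)
        fn(field)
```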

Footnotes

RFC: Request For Comments -- Internet standards documents are all named like this.

Bernd Paysan

2000-07-22