Yesterday I wrote a small HTTP client library, starting from the bare node.js
net module to make a TCP connection and, in a series of steps, building up a
working client for a very basic dialect of HTTP/1.1. It’s built to share some
design decisions with node’s core
http module, just so there’s a real-world point of comparison.
I did this as a teaching exercise for a friend – he watched me work it up in a shared terminal window – but I think it’s interesting as a learning example.
For background, this relies on core concepts from node.js: streams, and
connecting to a TCP port with the
net module; I’m not doing anything
complicated with net, and almost entirely ignoring errors.
I start almost every project the same way:
cd !$, then
npm init, mostly accepting the
defaults. I usually set up the test script as a simple
tap test.js. After the
init, I run
npm install --save-dev tap. tap is my favorite test framework:
it has excellent diagnostics when a test fails, it runs each test
file in a separate process so tests can’t easily interfere with each other,
and you can run a test file as a plain node script and
get reasonable output. There’s no magic in it. (The TAP protocol itself is pretty simple, too.)
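For the record, the relevant part of the resulting package.json looks something like this (a sketch – just the test-script wiring, nothing else):

```json
{
  "scripts": {
    "test": "tap test.js"
  }
}
```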
Next, I wrote just enough code to send a request, and a test for it. The actual HTTP protocol is simple. Not as simple as it once was, but here’s the essence of it:
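A minimal request in this dialect looks something like the following (a sketch of the shape, not necessarily the exact bytes: Host is required by HTTP/1.1, each line ends with CRLF, and a blank line marks the end of the request):

```
GET /a-thing-i-want HTTP/1.1
Host: example.org
Accept: text/html

```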
That’s enough to fetch a page called
/a-thing-i-want from the server at
example.org. That’s the equivalent of typing that URL into
the browser. There’s a lot more that could be added to a request – browsers
add a user-agent string, what languages you prefer, and all kinds of
other information. I’ve added the
Accept: header to this demo, which is what we’d
send to suggest that the server send us HTML back.
The server will respond in kind:
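Something shaped like this – a status line, then headers, then a blank line, then the body (this particular response is invented for illustration):

```
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 15

<p>a thing!</p>
```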
That may not come in all at once – the Internet is full of weird computers that are low on memory, and networks that can only send a bit at a time. Since node.js has streams, we get things as they come in, and we have to assemble the pieces ourselves. They do come in order, though, so it’s not too hard. It complicates writing a protocol handler like this, but it also gives us the chance to make a very low-memory, efficient processor for HTTP messages.
So if we get that back all as a chunk as users of our HTTP library, that’s not that useful – nobody wants to see the raw HTTP headers splattered at the top of every web page. (Okay, I might. But only because I love seeing under the hood. It’d be like having a transparent lock so you can see the workings. Not so great for everyday use.)
What we have to do is read off the pieces a bit at a time and do the work needed to break the header up into lines.
There are several cases that might happen:
- We got a part of a header line
- We got a complete header line and part of another
- We got a complete header line and nothing else
- We got a complete header line, and the newline that ends the header section
- We got a complete response all at once
- We got a complete header, and part of the body
- Having already received a part of a header, we get another part of a header
- Having already received a part of a header, we get the remainder and more…
And so on. There are a lot of ways things can and will be broken up, depending on where the packet boundaries fall in the stuff we care about. We have to handle it all.
The best approach is to start at the beginning and see whether you have a complete thing. If not, store what you have and wait for more. If you do have a complete thing, process it, take that chunk off the front of what you’re processing, and loop back to see if there’s more. Repeat until you’re done or you hit an error. That’s what the first part of the header parser does.
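As a sketch of that loop (the names here are my own, not from the post’s code): keep a buffer, and each time a chunk arrives, peel complete lines off the front until only a partial line remains.

```javascript
// Buffer incoming chunks and peel complete CRLF-terminated lines off the
// front; whatever's left is a partial line, kept for the next chunk.
// This first cut just splits lines – it doesn't yet notice where the
// header section ends.
function makeLineSplitter(onLine) {
  let buffered = '';
  return function write(chunk) {
    buffered += chunk;
    let eol;
    while ((eol = buffered.indexOf('\r\n')) !== -1) {
      onLine(buffered.slice(0, eol));     // a complete thing: process it
      buffered = buffered.slice(eol + 2); // take it off the front and loop
    }
    // anything still buffered is a partial line; wait for more data
  };
}
```

Feeding it chunks split at awkward boundaries still yields whole lines, which is what takes care of the list of cases above.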
That first pass at the problem was a little naive, and doesn’t stop at the end of the header properly. So next we add a temporary hack to stash that missing chunk somewhere.
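That hack might look something like this (a sketch, with names of my own invention): once the blank line that ends the header section shows up, stop splitting lines and stash whatever is left over for later.

```javascript
// Split CRLF-terminated header lines off the front of a buffer, but stop
// at the blank line that ends the header section and stash the remaining
// bytes in `leftover` – the temporary home for that missing chunk.
function makeHeaderSplitter(onLine) {
  const state = { leftover: '', done: false };
  let buffered = '';
  state.write = function (chunk) {
    if (state.done) {
      state.leftover += chunk; // everything after the headers is body
      return;
    }
    buffered += chunk;
    let eol;
    while ((eol = buffered.indexOf('\r\n')) !== -1) {
      const line = buffered.slice(0, eol);
      buffered = buffered.slice(eol + 2);
      if (line === '') {           // blank line: end of the header section
        state.done = true;
        state.leftover = buffered; // stash the rest for later
        return;
      }
      onLine(line);
    }
  };
  return state;
}
```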
So we’ve got the headers stored as a list, which is great. An object would be better, though, so we can access them by name. Let’s postprocess the headers into an object.
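A minimal version of that postprocessing might look like this (a sketch; node’s core http module similarly lowercases header names, since they’re case-insensitive):

```javascript
// Turn header lines like 'Content-Type: text/html' into an object,
// splitting each line at the first colon and normalizing the name
// to lowercase so lookups don't depend on the server's capitalization.
function headersToObject(lines) {
  const headers = {};
  for (const line of lines) {
    const colon = line.indexOf(':');
    const name = line.slice(0, colon).toLowerCase();
    const value = line.slice(colon + 1).trim();
    headers[name] = value;
  }
  return headers;
}
```

A real client also has to decide what to do when the same header appears twice; this sketch just keeps the last value it sees.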