Port numbers and URLs

Today someone asked on the node.js mailing list why the URL that Express.js gave them to access their application had a port number in it, and if they could get rid of it (since other sites don’t have it).

My explanation is this:

There are some interesting details to this!

Each service on the Internet has a port assigned to it by a group called IANA. http is port 80, ssh is 22, https is 443, xmpp is 5222 (and a few others, because it’s complicated), pop3 is 110 and imap is 143. If the service is running on its normal port, things don’t usually need to know the port because it can just assume the usual one. In http URLs, this lets us leave the port number out — http://example.org/ and http://example.org:80/ in theory identify the same thing. Some systems treat them as ‘different’ when comparing, but they access the same resource.
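You can watch a URL parser make exactly this judgment. A quick sketch using Node’s global WHATWG URL class: a port that matches the scheme’s default is dropped when the URL is serialized.

```javascript
// Two spellings of the same resource: the parser elides the default port.
var withPort = new URL('http://example.org:80/');
var withoutPort = new URL('http://example.org/');

console.log(withPort.href); // 'http://example.org/' (the :80 is gone)
console.log(withPort.port); // '' (default ports serialize as empty)
console.log(withPort.href === withoutPort.href); // true
```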

Now if you’re not on the default port, you have to specify it — so Express apps in particular suggest you access http://localhost:8080/ (or 3000; there are a couple of common ports for “this is an app fresh off of a generator, customize from here”). This is actually just a hint — usually they listen on more than localhost, and the URL they report back is not very robust, but it works well enough to get people off the ground while they learn to write web services.

If you run your app on port 80, you won’t need that.

However!

Unix systems reserve ports under 1024 for the system — a simple enough restriction to keep a user from starting something up in place of a system service at startup time, in the era of shared systems. That means you have to run something as root to bind port 80, unless you use special tools. There are a few ways around this: authbind (found most commonly on Debian-derived Linuxes) lets an unprivileged program bind a privileged port; one can call process.setuid and process.setgid to relinquish root privilege after binding (a common tactic in classic unix systems), though there are some fiddly details there that could leave you exposed if someone manages to inject executable code into what you’re running; and finally, one can proxy from a ‘trusted’ system daemon to your app on some arbitrary port — nginx is a popular choice for this, as are haproxy, stunnel and others.

Now as to why it’s just a hint: the problem of an app figuring out its own URL(s) is actually very hard, often unsolvable even in simple cases, given the myriad things we do to networking — NAT and proxies in particular confuse this — and the fact that there’s no requirement that a hostname be findable for an IP address, even if the hostname can be looked up to get the IP address. None of this matters for localhost, though, which has a nice known name and a nice known IP, and most people do development on their own computers, so we can hand-wave all this complexity away until later, after someone has something up and running.

Temporal Coupling is bad

In reviewing the source to express.js I came across a reasonably compact example of temporal coupling.

This is badly factored, and I’ll lay out why:

Temporal coupling is the reliance on a certain sequence of calls or checks to function, rather than having them explicitly called in order in a function. “this, then this, then this have to be called before the state you look at here will be present” is how it works out.

The bits of application.js that call the view are the start of it — the view could be there! Or not! Make one maybe!

if (!view) {
  view = new (this.get('view'))(name, {
    defaultEngine: this.get('view engine'),
    root: this.get('views'),
    engines: engines
  });
}

That’s reasonably well guarded, because it checks that it’s not there, and sets one up if it’s not already there. But if it was cached previously, and so already set, we’re now dependent on that state, which could have been set in an entirely different way. The only thing that saves us is that the cache is pretty well private.

Then there is the bit that looks at an instance variable that happens to be set by the constructor in this version:

if (!view.path) {
  var dirs = Array.isArray(view.root) && view.root.length > 1
    ? 'directories "' + view.root.slice(0, -1).join('", "') + '" or "' + view.root[view.root.length - 1] + '"'
    : 'directory "' + view.root + '"'
  var err = new Error('Failed to lookup view "' + name + '" in views ' + dirs);
  err.view = view;
  return fn(err);
}

So now we’ve got temporal coupling between the view’s constructor setting an instance variable and our calling code. This error check is performed synchronously after the construction of the object, which is sad, because that coupling means that any asynchronous lookup of that path is not available to us without hackery. Asynchronous lookup is exactly what’s being introduced in Express 5, and so this calling code has to be decoupled.

This is a minor case of temporal coupling, but those pieces of Express know way too much about each other, in ways that make refactoring it more invasive.

There’s a style of programming where the inner components are written first, and the outer ones are then written assuming the inner ones are append-only — a sort of one-way coupling that I think leads to this.

Contrast these two places — in the View constructor:

this.path = this.lookup(name);

Where the lookup method (via some convoluted path) only returns a value when the path exists on disk:

path = join(dir, basename(file, ext), 'index' + ext);
stat = tryStat(path);
if (stat && stat.isFile()) {
  return path;
}

And in the render method:

View.prototype.render = function render(options, fn) {
  this.engine(this.path, options, fn);
};

So now the render method is only safe to call if this.path is set, and we’re temporally coupled to this sequence:

var view = new View(args);
if (view.path) {
  view.render(renderArgs)
}

Without that sequence — instantiate, check for errors, render if good or error if not — it’ll explode, having never validated that this.path is set.

It’s okay to temporally couple to instantiation in general — it’s not like you can call a method without an instance, not sensibly — but to that error check being required by the outside caller? That’s a terrible convention, and the whole thing would be much better enveloped in a method that spans the whole process — and in this case, an asynchronous one, so that the I/O done validating that the path exists doesn’t have to be synchronous.

So to fix this case, what I would do is to refactor the render method to include all the checks — move the error handling out of the caller, into render or something called by it. In this case, the lookup method is a prime candidate, since it’s what determines whether something exists, and the error concerns whether or not it exists.
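As a sketch of that refactor (with names of my own invention, not Express’s actual code), render can own the whole sequence, taking an injected asynchronous lookup, so the caller never inspects view.path:

```javascript
// A hypothetical View, not Express's: lookup is injected and asynchronous,
// and render performs the existence check itself.
function View(name, lookup, engine) {
  this.name = name;
  this.lookup = lookup; // function (name, cb) -> cb(err, path or null)
  this.engine = engine; // function (path, options, fn)
}

View.prototype.render = function render(options, fn) {
  var self = this;
  this.lookup(this.name, function (err, path) {
    if (err || !path) {
      // The error handling moved out of the caller, into render.
      return fn(err || new Error('Failed to lookup view "' + self.name + '"'));
    }
    self.engine(path, options, fn);
  });
};
```

The caller shrinks to `view.render(options, cb)`, and the lookup is free to do its I/O asynchronously.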

Handling Errors in node.js

There are roughly four kinds of errors you run into in node.

synchronous code, and throw is usually limited to application logic, synchronous decisions being made from information already on hand. They can also arise from programmer error — accessing properties or functions of undefined is among the most common errors I see.

If you are calling a callback in an asynchronous context provided by another module or user, it’s smart to guard these with try/catch blocks, and direct the error into your own error emission path.

The naive implementation can fail badly:

function doAThing(intermediateCallback, doneCallback) {
  setImmediate(function () {
    var result = intermediateCallback('someValue');
    doneCallback(null, result);
  });
}

The above will crash if intermediateCallback throws an exception. Instead, guard this:

function doAThing(intermediateCallback, doneCallback) {
  setImmediate(function () {
    try {
      var result = intermediateCallback('someValue');
      doneCallback(null, result);
    } catch (e) {
      doneCallback(e);
    }
  });
}

This is important since a synchronous throw in an asynchronously called function ends up becoming the next kind of error:

asynchronous calls and throw will crash your process. If you’re using domains, then it will fall back to the domain error handler, but in both cases, this is either uncatchable — a try/catch block will have already exited the block before the call is made — or you are completely without context when you catch it, so you won’t be able to usefully clean up resources allocated during the request that eventually failed. The only hope is to catch it in a process.on('uncaughtException') handler or domain handler, clean up what you can — close or delete temp files or undo whatever is being worked on — and crash a little more cleanly.

Anything meant to be called asynchronously should never throw. Instead, callbacks should be called with an error argument: callback(new Error("Error message here")); This makes the next kind of error,

asynchronous calls with an error parameter in the callback receive the error as a parameter — either as a separate callback for errors, or in node, much more commonly the “error first” style:

doThing(function (err, result) {
  // Handle err here if it's a thing, use result if not.
});

This forces the programmer to handle or propagate the error at each stage.

The reason the error argument is first is so that it’s hard to ignore. If your first parameter is err and you don’t use it, you are likely to crash if you get an error, since you’ll only look at the success path.

With the iferr module, you can get promise-like short-circuiting of errors:

var iferr = require('iferr');

function doThing(makeError, cb) {
  setImmediate(function () {
    if (makeError) {
      cb(new Error('gives an error'));
    } else {
      cb(null, "no error!");
    }
  });
}

doThing(true, iferr(console.warn, function (result) {
  console.log(result);
})); // This call warns with the error

doThing(false, iferr(console.warn, function (result) {
  console.log(result);
})); // This call logs the "no error!" message.

Using promises also gives this short-circuit error behavior, but you get the error out of the promise with the .catch method. In some implementations, if an error happens and you haven’t set up what happens to it, it will throw after a process tick. Similarly, event emitters with unhandled error events throw an exception. This leads to the fourth kind of error:

asynchronous event emitters or promises, and error handlers

An event emitter that can emit an error event should have a handler set up.

emitter.on('error', function (err) {
  // handle error here, or call out to other error handler
});

promise.catch(function (err) {
  // Same here: handle it.
});

If you don’t do this, your process will crash or the domain handler will fire, and you should crash there. (Unless your promises don’t handle this case, in which case your error is lost and you never know it happened. Also not good.)

demands for gaming

I don’t want perfect equality. I want broad representation. Big women, little
women, interesting men, not all chiseled badasses. Fewer big breasts and lots
less being designed for male gazes. Screw casual rape not-even-storylines. Give
the strippers some agency. If they’re playable characters, make it a fuckin’
choice. Give us armor that covers our midriff if the purpose is to be accurate.
If there’s a half dozen male playable characters, make a half dozen women, a
couple trans folks and maybe a couple people who aren’t exactly either too.

Make the women not simpering. Give them a point to their existence that doesn’t
revolve around a man or a romantic plot. Make the playable characters something
other than 20-something white-or-stereotype-of-their-ethnicity. Make women who
don’t all have dark-and-horrific pasts. We can do better.

There should be hundreds or thousands of women to draw examples from, not the
same ten spanning 20 years in every instance of this argument. (Yes, really,
Metroid gets mentioned every damn time. It’s like there’s not very many
women in video games to mention. Let me tell you, Samus ain’t that
special. If you mention Metroid it had better be for the music.)

Lose the damn damsel in distress trope, or if it exists, subvert the fuck out
of that, and not just in a Princess in Another Castle way, but
doesn’t-want-to-be-rescued-thank-you or you-just-fucked-up-bigtime
I-am-not-a-prize. Give us women who are us, not just
I-guess-I’ll-play-one-of-the-few-women-playable-characters.

Give me my half-shaved-head dyed hair smart-ass. Give my friends the fat muscle
dyke. Give us a black girl who has a fucking family. Mom and dad. Give us the
guy who’s got some cojones to cry once in a while, and whose back-story isn’t
just pain pain pain until he is a rock who kills people to get revenge. Give us
an asian character or ten who never picks up a sword and never wears clothing
with a dragon on it. How about someone who’s Bengali or Punjabi or Pakistani or
Malay or Filipina and not just make asian = Chinese or Japanese. Give me
characters I can fall in love with, and not ones defined by how young and hot
you have to be to cosplay recognizably. And don’t wedge in eye candy to do it.
Put in so goddamn many women that it stops being special to say “Oooh, she’s
great” in a game and have it just be a thing that is everywhere. Give us boring
women too. Default women.

A few counterexamples don’t make it. There are hundreds and hundreds and
hundreds of male characters. Being given one or two options in each game — if
that — fucking blows.

It’s not just AAA titles, either. I was playing a stupid dungeon-crawler sort
of game this week and y’know what? All the characters are men. And it’s not a
traditional dungeon-crawler setting, it was so totally silly made up that it
wouldn’t even have damaged expectations? But y’know what? Still all men. It’s
so fucking normal that it’s invisible until you look.

How to Read Source Code

This is based on a talk I gave at Oneshot Nodeconf Christchurch.

I almost didn’t write this post. It seems preposterous that there are any programmers who don’t read source code. Then I met a bunch of programmers who don’t, and I talked to some more who wouldn’t read anything but the examples and maybe check if there are tests. And most of all, I’ve met a lot of beginning programmers who have a hard time figuring out where to start.

What are we reading for? Comprehension. Reading to find bugs, to find interactions with other software in a system. We read source code for review. We read to see the interfaces, to understand, and to find the boundaries between the parts. We read to learn!

Reading isn’t linear. We think we can read source code like a book. Crack the introduction or README, then read through from chapter one to chapter two, on toward the conclusion. It’s not like that. We can’t even prove that a great many programs have conclusions. We skip back and forth from chapter to chapter, module to module. We can read the module straight through but we won’t have the definitions of things from other modules. We can read in execution order, but we won’t know where we’re going more than one call site down.

Do you start at the entry point of a package? In a node module, the index.js or the main script?

How about in a browser? Even finding the entry point, and working out which files get loaded and how, is a key task. Figuring out how the files relate to each other is a great place to start.

Other places to start are to find the biggest source code file and read that first, or try setting a breakpoint early and tracing down through functions in a debugger, or try setting a breakpoint deep in something meaty or hard to understand and then read each function in the call stack.

We’re used to categorizing source code by the language it’s written in, be it Javascript, C++, ES6, Befunge, Forth, or LISP. We might tackle a familiar language more easily, but not look at the parts written in a language we’re less familiar with.

There is another way to think of kinds of source code, which is to look at the broad purpose of each part. Of course, many times, something does more than one thing. Figuring out what it’s trying to be can be one of the first tasks while reading. There are a lot of ways to describe categories, but here are some:

Glue has no purpose other than to adjust interfaces between parts and bind them together. Not all the interfaces we want to use play nice together, where the output of one function can be passed directly to the input of another. Programmers make different decisions about the styles of interface, so we need adapters between systems: where there are no rich data types, for instance, fields from a web form all arrive represented as strings, and have to be connected to functions and objects that expect them to be represented more specifically. The way errors are handled often varies, too.

Connecting a function that returns a promise to something that takes a callback involves glue; inflating arguments into objects, or breaking objects apart into variables are all glue.

This is from Ben Drucker’s stream-to-promise:

internals.writable = function (stream) {
  return new Promise(function (resolve, reject) {
    stream.once('finish', resolve);
    stream.once('error', reject);
  });
};

In this, we’re looking for how two interfaces are shaped differently, and what’s common between them. The two interfaces involved are node streams and promises.

What they have in common is that both do work until they have a definite finish — streams with the finish event, and promises by calling the resolution function. One thing to notice while you read this is that promises can only be resolved once, but streams can emit the same event multiple times. They don’t usually, but as programmers we usually know the difference between can’t and shouldn’t.
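The “resolved once” half of that is easy to see in a sketch:

```javascript
// The mismatch in miniature: a promise settles exactly once, while an
// emitter will happily fire the same event twice.
var p = new Promise(function (resolve) {
  resolve('first');
  resolve('second'); // silently ignored; the promise is already settled
});

p.then(function (value) {
  console.log(value); // 'first'
});
```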

Here’s more glue, the sort you find when dealing with input from web forms.

var record = {
  name: (input.name || '').trim(),
  age: isNaN(Number(input.age)) ? null : Number(input.age),
  email: validateEmail(input.email.trim())
}

In cases like this, it’s good to read with how errors are handled in mind. Look for which things in this might throw an exception, and which handle errors by altering or deleting a value.

Are these appropriate choices for the place where this exists? Do some of these conversions lose information, or are they just cleaning up into a canonical form?

Interface-defining code is one of the most important kinds. It’s what makes the outside boundary of a module, the surface area that other programmers have to interact with.

From node’s events.js

exports.usingDomains = false;

function EventEmitter() { }
exports.EventEmitter = EventEmitter;

EventEmitter.prototype.setMaxListeners = function setMaxListeners(n) { };
EventEmitter.prototype.emit = function emit(type) { };
EventEmitter.prototype.addListener = function addListener(type, listener) { };
EventEmitter.prototype.on = EventEmitter.prototype.addListener;
EventEmitter.prototype.once = function once(type, listener) { };
EventEmitter.prototype.removeListener = function removeListener(type, listener) { };
EventEmitter.prototype.removeAllListeners = function removeAllListeners(type) {};
EventEmitter.prototype.listeners = function listeners(type) { };

EventEmitter.listenerCount = function(emitter, type) { };

We’re defining the interface for EventEmitter here.

Look for whether this is complete. Look for internal details being exposed — usingDomains in this case is a flag exposed to the outside world: node domains have a system-wide effect and debugging that is very difficult, so that detail is shown outside the module.

Look for what guarantees these functions make.

Look for how namespacing works. Will the user be adding their own functions, or does this stand on its own, and the user of this interface will keep their parts separate?

Like glue code, look for how errors are handled and exposed. Is that consistent? Does it distinguish errors due to internal bugs from errors because the user made a mistake?

If you have strong interface contracts or guards, this is where you should expect to find them.

Implementation, once it’s separated from the interface and the glue, is one of the more studied parts of source code, and where books on refactoring and source code style aim much of their advice.

From Ember.Router:

startRouting: function() {
  this.router = this.router || this.constructor.map(K);

  var router = this.router;
  var location = get(this, 'location');
  var container = this.container;
  var self = this;
  var initialURL = get(this, 'initialURL');
  var initialTransition;

  // Allow the Location class to cancel the router setup while it refreshes
  // the page
  if (get(location, 'cancelRouterSetup')) {
    return;
  }

  this._setupRouter(router, location);

  container.register('view:default', _MetamorphView);
  container.register('view:toplevel', EmberView.extend());

  location.onUpdateURL(function(url) {
    self.handleURL(url);
  });

  if (typeof initialURL === "undefined") {
    initialURL = location.getURL();
  }

  initialTransition = this.handleURL(initialURL);

  if (initialTransition && initialTransition.error) {
    throw initialTransition.error;
  }
},

This is the sort that always needs more documentation about why it is how it is, and not so much about what these parts do. Implementation source code is where the every-day decisions about how something is built live, the parts that make this module do what it does.

Look how this fits into its larger whole.

Look for what’s coming from the public interface to this module, look for what needs validation. Look for what other parts this touches — whether they share properties on an object or variables in a closure or call other functions.

Look at what would be likely to break if this gets changed, and look to the test suite to see that that is being tested.

Look for the lifetime of these variables. This particular case is an easy one: This looks really well designed and doesn’t store needless state with a long lifetime — though maybe we should look at _setupRouter next if we were reading this.

You can look to understand the process entailment of a method or function, the things that were required to set up the state, the process entailed in getting to executing this. Looking forward from potential call sites, we can ask “How much is required to use this thing correctly?”, and as we read the implementation, we can ask “If we’re here, what got us to this point? What was required to set this up so that it works right?”

Is that state explicit, passed in via parameters? Is it assumed to be there, as an instance variable or property? Is there a single path to get there, with an obvious place that state is set up, or is it diffuse?
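Here’s that contrast in miniature, with made-up names: the implicit version is temporally coupled to configure() having run first, while the explicit version carries its requirements in its parameter list.

```javascript
// Implicit state: handle() silently requires that configure() ran first.
function ImplicitRouter() {}
ImplicitRouter.prototype.configure = function (route) { this.route = route; };
ImplicitRouter.prototype.handle = function (url) { return this.route + url; };

// Explicit state: everything handle() needs arrives as a parameter,
// so there is no hidden setup sequence to get wrong.
function handle(route, url) { return route + url; }
```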

Algorithms are a kind of special case of implementation. It’s not so exposed to the outside world, but it’s a meaty part of a program. Quite often it’s business logic or the core processes of the software, but just as often, it’s something that has to be controlled precisely to do its job with adequate speed. There’s a lot of study of algorithmic source code out there because that’s what academia produces as source code.

Here’s an example:

function Grammar(rules) {
  // Processing The Grammar
  //
  // Here we begin defining a grammar given the raw rules, terminal
  // symbols, and symbolic references to rules
  //
  // The input is a list of rules.
  //
  // The input grammar is amended with a final rule, the 'accept' rule,
  // which if it spans the parse chart, means the entire grammar was
  // accepted. This is needed in the case of a nulling start symbol.
  rules.push(Rule('_accept', [Ref('start')]));
  rules.acceptRule = rules.length - 1;

  // Build a list of all the symbols used in the grammar so they can be numbered instead of referred to
  // by name, and therefore their presence can be represented by a single bit in a set.
  function censusSymbols() {
    var out = [];
    rules.forEach(function(r) {
      if (!~out.indexOf(r.name)) out.push(r.name);

      r.symbols.forEach(function(s, i) {
        var symNo = out.indexOf(s.name);
        if (!~out.indexOf(s.name)) {
          symNo = out.length;
          out.push(s.name);
        }
        r.symbols[i] = symNo;
      });

      r.sym = out.indexOf(r.name);
    });
    return out;
  }

  rules.symbols = censusSymbols();

This bit is from a parser engine I’ve been working on called lotsawa. Reads like a math paper, doesn’t it?

It’s been said a lot that good comments say why something is done or done that way, rather than what it’s doing. Algorithms usually need more explanation of what is going on, since if they were trivial, they’d probably be built into our standard library. Quite often, to get good performance out of something, the exact what-and-how matters a lot.

One of the things that you usually need to see in algorithms is the actual data structures. This one is building a list of symbols and making sure there’s no duplicates.

Look also for hints as to the running time of the algorithm. You can see in this part, I’ve got two loops. In Big-O notation, that’s O(n * m), then you can see that there’s an indexOf inside that. That’s another loop in Javascript, so that actually adds another factor to the running time. (twice — looks like I could make this more optimal by re-using one of the values here)
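A sketch of that re-use (assuming the same rule shape as the excerpt, with name properties on rules and their symbols, but not lotsawa’s actual code): keeping a name-to-number map beside the array turns each indexOf scan into a single hash lookup.

```javascript
// Hypothetical rework: number symbols via a name -> index map,
// so no indexOf scans are needed inside the loops.
function censusSymbols(rules) {
  var out = [];
  var numbers = {}; // symbol name -> its number (index in out)

  function numberOf(name) {
    if (!(name in numbers)) {
      numbers[name] = out.length;
      out.push(name);
    }
    return numbers[name];
  }

  rules.forEach(function (r) {
    r.sym = numberOf(r.name);
    r.symbols.forEach(function (s, i) {
      r.symbols[i] = numberOf(s.name);
    });
  });
  return out;
}
```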

Configuration is its own kind: the line between source code and configuration file is super thin. There’s a constant tension between making a configuration expressive and keeping it readable and direct.

Here’s an example using Javascript for configuration.

app.configure('production', 'staging', function() {
  app.enable('emails');
});

app.configure('test', function() {
  app.disable('emails');
});

What we can run into here is combinatorial explosion of options. How many environments do we configure? Then, how many things do we configure for a specific instance of that environment. It’s really easy to go overboard and end up with all the possible permutations, and to have bugs that only show up in one of them. Keeping an eye out for how many degrees of freedom the configuration allows is super useful.

Here is a bit of a kraken config file:

"express": {
    "env": "", // NOTE: `env` is managed by the framework. This value will be overwritten.
    "x-powered-by": false,
    "views": "path:./views",
    "mountpath": "/"
},
"middleware": {
    "compress": {
        "enabled": false,
        "priority": 10,
        "module": "compression"
    },
    "favicon": {
        "enabled": false,
        "priority": 30,
        "module": {
            "name": "serve-favicon",
            "arguments": [ "resolve:kraken-js/public/favicon.ico" ]
        }
    },

Kraken took a ‘low power language’ approach to configuration and chose JSON. A little more “configuration” and a little less “source code”. One of the goals was keeping that combinatorial explosion under control. There’s a reason a lot of tools use simple key-value pairs or ini-style files for configuration, even though they’re not terribly expressive. It’s possible to write config files for kraken that vary with a bunch of parameters, but it’s work and pretty obvious when you read it.

Configuration has some interesting and unique constraints that are worth looking for.

The lifetime of a configuration value is often determined by other groups of people. They usually vary somewhat independently of the rest of the source code — hence why they’re not built in as hard-coded values inline.

They often need machine writability, to support configuration-generation tools.

The people responsible are different from those for regular source code. Systems engineers, operations and other people can be involved in the creation.

Configuration values often have to fit in weird places like environment variables, where there are no types, just string values.

They also often store security-sensitive information, and so won’t be committed to version control because of this.

Batches are an interesting case as well. They need transactionality. Often, some piece of the system needs to happen exactly once, and not at all if there’s an error. A compiler that leaves bad build products around is a great source of bugs. Double charging customers is bad. Flooding someone’s inbox because of a retry cycle is terrible. Look for how transactions are started and finished — clean-up processes, commit to permanent storage processes, the error handling.

Batch processes often need resumability: a need to continue where they left off given the state of the system. Look for the places where perhaps unfinished state is picked up and continued from.

Batch processes are also often sequential. If they’re not strictly linear processes, there’s usually a very directed flow through the program. Loops tend to be big ones, around the whole process. Look for those.

Reading Messy Code

So how do you deal with this?

      DuplexCombination.prototype.on = function(ev, fn) {
    switch (ev) {
  case 'data':
  case 'end':
  case 'readable':
this.reader.on(ev, fn)
return this
  case 'drain':
  case 'finish':
this.writer.on(ev, fn)
return this
  default:
return Duplex.prototype.on.call(this, ev, fn)
    }
      };

You are seeing that right. That’s reverse indentation. Blame Isaac.

Put on your rose tinted glasses!

Try installing a tool like standard or jsfmt. Here’s what standard -F dc.js does to that reverse-indented Javascript:

DuplexCombination.prototype.on = function (ev, fn) {
  switch (ev) {
    case 'data':
    case 'end':
    case 'readable':
      this.reader.on(ev, fn)
      return this
    case 'drain':
    case 'finish':
      this.writer.on(ev, fn)
      return this
    default:
      return Duplex.prototype.on.call(this, ev, fn)
  }
}

It’s okay to use tools while reading! There’s no technique that’s “cheating”.

Here’s another case:

(function(t,e){if(typeof define==="function"&&define.amd){define(["underscore","
jquery","exports"],function(i,r,s){t.Backbone=e(t,s,i,r)})}else if(typeof export
s!=="undefined"){var i=require("underscore");e(t,exports,i)}else{t.Backbone=e(t,
{},t._,t.jQuery||t.Zepto||t.ender||t.$)}})(this,function(t,e,i,r){var s=t.Backbo
ne;var n=[];var a=n.push;var o=n.slice;var h=n.splice;e.VERSION="1.1.2";e.$=r;e.
noConflict=function(){t.Backbone=s;return this};e.emulateHTTP=false;e.emulateJSO
N=false;var u=e.Events={on:function(t,e,i){if(!c(this,"on",t,[e,i])||!e)return t
his;this._events||(this._events={});var r=this._events[t]||(this._events[t]=[]);
r.push({callback:e,context:i,ctx:i||this});return this},once:function(t,e,r){if(
!c(this,"once",t,[e,r])||!e)return this;var s=this;var n=i.once(function(){s.off
(t,n);e.apply(this,arguments)});n._callback=e;return this.on(t,n,r)},off:functio
n(t,e,r){var s,n,a,o,h,u,l,f;if(!this._events||!c(this,"off",t,[e,r]))return thi
s;if(!t&&!e&&!r){this._events=void 0;return this}o=t?[t]:i.keys(this._events);fo
r(h=0,u=o.length;h<u;h++){t=o[h];if(a=this._events[t]){this._events[t]=s=[];if(e
||r){for(l=0,f=a.length;l<f;l++){n=a[l];if(e&&e!==n.callback&&e!==n.callback._ca
llback||r&&r!==n.context){s.push(n)}}}if(!s.length)delete this._events[t]}}retur
n this},trigger:function(t){if(!this._events)return this;var e=o.call(arguments,
1);if(!c(this,"trigger",t,e))return this;var i=this._events[t];var r=this._event
s.all;if(i)f(i,e);if(r)f(r,arguments);return this},stopListening:function(t,e,r)
{var s=this._listeningTo;if(!s)return this;var n=!e&&!r;if(!r&&typeof e==="objec

Here’s the start of that after uglifyjs -b < backbone-min.js:

(function(t, e) {
    if (typeof define === "function" && define.amd) {
        define([ "underscore", "jquery", "exports" ], function(i, r, s) {
            t.Backbone = e(t, s, i, r);
        });
    } else if (typeof exports !== "undefined") {
        var i = require("underscore");
        e(t, exports, i);
    } else {
        t.Backbone = e(t, {}, t._, t.jQuery || t.Zepto || t.ender || t.$);
    }
})(this, function(t, e, i, r) {
    var s = t.Backbone;
    var n = [];
    var a = n.push;
    var o = n.slice;
    var h = n.splice;
    e.VERSION = "1.1.2";
    e.$ = r;
    e.noConflict = function() {

Human parts and guessing the intent of what you’re reading

There’s a lot of tricks for figuring out what the author of something meant.

Look for guards and coercions

if (typeof arg != 'number') throw new TypeError("arg must be a number");

Looks like the domain of whatever function we’re in is ‘numbers’.

arg = Number(arg)

This coerces its input to be numeric. Same domain as above, but it doesn’t reject errors via exceptions. There might be NaNs, though, so it’s probably smart to read on and check whether there are comparisons that will come out false against those.

NaN behavior in javascript mostly comes from the behavior in the IEEE floating-point number spec, as a way to propagate errors out to the end of a computation so you don’t get an arbitrary bogus result, and instead get a known-bad value. In some cases, that’s exactly the technique you want.
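A minimal sketch of that propagation; `scale` is a hypothetical name, not from any library:

```javascript
// Coerce the input; a bad value becomes NaN instead of throwing.
function scale(arg) {
  arg = Number(arg);
  return arg * 10;
}

scale("3");    // 30
scale("oops"); // NaN, and it propagates through the arithmetic

// NaN compares false to everything, including itself,
// so test with Number.isNaN rather than equality.
Number.isNaN(scale("oops")); // true
```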

Look for defaults

arg = arg || {}

Default to an empty object.

arg = (arg == null ? true : arg)

Default to true only if a value wasn’t explicitly passed. Comparison to null with the == operator in Javascript is only true when what’s being compared is null or undefined — the two things that mean “nothing to see here” — this particular check hints that the author meant that any value is acceptable, as long as it was intended to be a value. false and 0 are both things that would override the default.
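The difference between this and the || style above can be seen directly; both helper names here are mine:

```javascript
// The || style: any falsy value (false, 0, "", NaN) gets the default.
function viaOr(arg) {
  arg = arg || true;
  return arg;
}

// The == null style: only null and undefined get the default.
function viaNullCheck(arg) {
  arg = (arg == null ? true : arg);
  return arg;
}

viaOr(false);        // true, the caller's explicit false was swallowed
viaNullCheck(false); // false, an explicit false survives
viaNullCheck();      // true, nothing was passed so the default applies
```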

arg = (typeof arg == 'function' ? arg : function () {});

In this case, the guard uses a typeof check, and chooses to ignore its argument if it’s not the right type. A silent ignoring of what the caller specified.

Look for layers

As an example, req and res from Express are tied to the web; how deep do they go? Are they passed down into every layer, or is there some glue that picks out specific values and calls functions with an interface directly related to its purpose?

Look for tracing

Are there inspection points?

Debug logs?

Do those form a complete narrative? Or are they ad-hoc leftovers from the last few bugs?

Look for reflexivity

Are identifiers being dynamically generated? If so, that means you won’t find them by searching the source code — you’ll have to think at a different level to understand parts of what’s going on.

Is there eval? Metaprogramming? New function creation?

func.toString() is your friend! You can print out the source of a callback argument and see what it looks like, you can insert all kinds of debugging to see what things do.
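For instance, in node:

```javascript
// Any function's source is available as a string via toString().
function add(a, b) { return a + b; }

var source = add.toString();
// source contains the text of the function, something like
// "function add(a, b) { return a + b; }" (exact formatting is engine-dependent)
```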

Look at lifetimes

The lifetime of variables is particularly good for figuring out how something is built (and how well it’s built). Look for who or what initializes a variable or property of an object. Look for when it changes, and how that relates to the flow or scope of the process that does it. Look for who changes it, and see how related they are to the part you’re reading.

Look to see if that information is also somewhere else in the system at the same time. If it is, look to see if it can ever be inconsistent, where two parts of the system disagree on what that value is if you were to compare them.

Somewhere, someone typed the value you see into a keyboard, generated it from a random number generator, or computed it and saved it.

Somewhere else, some time else, that value will affect some human or humans. Who are these people?

What or who chooses who they are? Is that value ever going to change? Who changes it?

Maybe it’s a ‘name’ field typed into a form, then saved in a database, then displayed to the user. Stored for a long time, and it’s a value that can be inconsistent with other state — the user can change their name, or use a different one in a new context.

Look for hidden state machines

Sometimes boolean variables get used together as a decomposed state machine.

Maybe there’s a process with variables like this:

var isReadied = false;
var isFinished = false;

The variables isReadied and isFinished might show a state machine like so:

START -> READY -> FINISHED

If you were to lay out how those variables relate to the state of the process, you might find this:

isReadied | isFinished | state
----------|------------|----------
false     | false      | START
false     | true       | invalid
true      | false      | READY
true      | true       | FINISHED

Note that they can also express the state !isReadied && isFinished — which might be an interesting source of bugs, if something can end up at the finished state without first being ready.
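One way to surface the hidden machine is a single explicit state variable; this is a sketch, and the names are mine:

```javascript
// One explicit state variable instead of two booleans. The invalid
// "finished but never readied" combination has no representation here.
var state = 'START';

function ready() {
  if (state !== 'START') throw new Error('can only ready from START');
  state = 'READY';
}

function finish() {
  if (state !== 'READY') throw new Error('must be READY before FINISHED');
  state = 'FINISHED';
}
```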

Look for composition and inheritance

Is this made of parts I can recognize? Do those parts have names?

Look for common operations

map, transforming a list of values into a different list of values.

reduce, taking a list of values and giving a single value. Even joining an array of strings with commas to make a string is a ‘reduce’ operation.

cross-join, where two lists are compared, possibly pairwise, or some variation on that.
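The first two are easy to spot in JavaScript, where they exist directly as array methods:

```javascript
var prices = [1, 2, 3];

// map: a list of values into a different list of values
var doubled = prices.map(function (n) { return n * 2; }); // [2, 4, 6]

// reduce: a list of values down to a single value
var total = prices.reduce(function (sum, n) { return sum + n; }, 0); // 6

// joining with commas is a reduce in disguise
var joined = ['a', 'b', 'c'].join(','); // "a,b,c"
```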

It’s time to go read some programs and libraries.

Enjoy!

Off to New Zealand

As I write this, I’m on the airplane from Boston to San Francisco, for the first of a three-leg trip to Christchurch, New Zealand for a Oneshot Nodeconf. I’m giving a talk on How To Read Source Code, which I’ve been meaning to make into a blog post for a long time, and now I’ve got my slides as a source of material to edit. I’ll probably start doing that on the trip home, after my tweaks and changes settle down.

I really like the Oneshot Nodeconf format: there is only one talk going on at a time, so there’s no competition for whose talk to go to. They’re usually a bit more curated than grab bag events, though they usually have a pretty diverse set of talks. I think knowing that everyone will be listening to their talk makes speakers put a little extra effort into being engaging.

Out the window are the Mondrian patterns of the California Central Valley, all square fields and section roads. Fifteen minutes to go!

Why the quality of teaching programming is so bad

A friend asked why her R statistics programming course on a MOOC was so terrible. She said 90% of the information on the quizzes was in the lecture, but the other 10%? Left for you to discover on your own.

Welcome to the problems I am struggling with. I am now a programming teacher, in most ways that matter. My newest job is about half research and half teaching. What you’re finding is completely the norm, and in fact I’d say 90% is pretty good. Sad facts.

The terrifying status quo is that we have a sixty-year-old field, one that started with self-teaching in its early years: some very smart mathematicians and electrical engineers figured out how it could and should work, but the early perception was that designing programs, conceiving of the math to represent them, was the hard part, and that actually programming the math into the computer was a technician’s job. (Notably, programmers were usually women. Program designers and system architects were usually men. This turns out to be relevant.)

As the field started to grow, programming started to be recognized as requiring the bulk of the problem solving skills, since efficiently encoding mathematics, where a symbol might mean “with all values from zero to infinity” into a computer with only thousands of words of memory took clever reworking of problems. The early work was largely uncredited, mere “entering the program into the computer”.

In the late 70s and into the 80s there was a land grab for the prestige of being a programmer. A new labor category of “software engineer” was created, a professional engineering job rather than the mere technician’s work of being a programmer. Women were excluded from programming, sometimes deliberately by male programmers, sometimes as a matter of practice by engineering schools.

With this shift, on-the-job training, which assumed no familiarity to begin with, and the few established training programs were replaced by engineering school, which assumed the discipline was a branch of either mathematics or electrical engineering; programming courses became upper-division electives for engineers working largely in theory. All of this runs counter to the people (particularly Margaret Hamilton) who started trying to make software engineering a discipline, but the gestalt of the industry has definitely moved away from valuing teaching.

The net effect of that shift is that the pedagogy of teaching programming was interrupted.

A few training programs remained, but usually tied to industry, and particular companies. The industry balkanized significantly in this period, so IBM would teach IBM programming, and Oracle would teach Oracle programming. The abstract skills of programming are highly portable between languages and fields, but at the raw syntax of a given programming language, the details matter.

Now, another relevant thing is that computers have sustained a tremendous pace of development for those sixty years. With the computation of a chip roughly doubling every 18 months, there have been significant periods where practices would be introduced, thrown away and replaced much faster than the cycle of getting a student through college and into an adjunct professor’s seat. What they were taught as entry-level students is no longer used, or is wrong in some way, by the time they’re in a position to assist a teacher or to teach themselves.

Both of these have caused most programming teaching to avoid specifics and to only teach the most abstract portions, the parts that will have a longer shelf-life than the details, and to avoid being entrenched in only one part of the industry.

Some schools are finally climbing their way out of this: MIT now teaches Python, an industrial rather than academic language, instead of the prior Scheme, and some European software shops are starting to use Haskell, which began as an academic language. The crossover is finally happening, but it’s a slow process.

It’s all screwed up. Specifics of systems are needed to actually learn and build things, but the academic process deals largely in abstract terms, and bridging that gap is difficult. On top of that, there’s the notion that some people are inherently good at programming, probably derived from similar thoughts about math, so there’s a certain impatience with explaining, and an arrogant derision for people who don’t know the details.

So what’s someone to do?

At this moment, programming specifics are usually peer-taught, so working with people who’ve worked with the specific system and can advise about the syntax and specifics is important. Even in industry, this is recognized, if informally by the practice of ‘pair programming’. Seek classes that get the details out, not just the theory. It will be a mixed bag, but there are good classes out there — just know that ‘good teaching’ of programming is not something systematically understood, and not universally valued.

Creating just online social spaces

The last two months have seen two Slack chats start up to support marginalized groups in the technology field, LGBTQ* Technology and Women in Technology, and we’ve had a lot of discussions about how to run the spaces effectively: not just being a place for the group it says on the tin, but supporting, encouraging and not being terrible to people who are marginalized in ways other than the one the particular group is trying to represent.

This is a sort of how-to guide for creating a social Slack that is inclusive and just, and a lot of this will apply to other styles and mediums of interaction.

The problem begins thus: How do you keep a Slack started by a white gay cisgender man from reflecting only that as a core group? How do you keep a women in technology chat from being run entirely by white women of (relative) affluence afforded by tech industry positions, leaving women of color, trans women, people with disabilities out in the cold?

Making just social spaces is not a one-time structural setup, though things like a good Code of Conduct are an important starting place, and there are difficult balances to strike.

Make sure there is sufficient representation. Social spaces grow from their seed members, and as has been studied, people’s social networks tend to be racially and gender-wise insular: white members beget more white members; men bring more men, especially in technology, as we’ve found. If a space is insufficiently representative of the diversity of experiences that should be there, people will leave, having seen yet another space that isn’t “for” them. So, too, power structures reflect the initial or core body of a social group, and a social group will tend to reflect the demographics of those in positions of power, creating a feedback cycle that will be hard to break without a lot of effort. Seed your network as broadly as you can, and put people without homogenous backgrounds in power.

Empower a broad group. A few admins can’t guide and create the shape of the space alone, so empower users to make positive change themselves.

Plan for timezones. If your chat starts off with US users, you will find that they will dominate the space during US waking hours. You may find an off-peak group in Europe, with an almost entirely separate culture. Bridging the gap with admins in other timezones to help consistently guide the shape of the group can be helpful.

Plan for reactions to posted media. In particular, seizure disorders can be triggered by flashing animated GIFs. Building an awareness into your social space early can help make sure these are not posted, or are restricted to certain channels. Likewise, explicit imagery and upsetting news and articles can be marked or restricted, even without banning them entirely.

Plan for how to resolve conflicts. While outright malicious violation of a Code of Conduct can be solved by ejecting members, most cases of conflict are more nebulous, or not so extreme or malicious that a first offense should involve removal from the space. Slack in particular has let the LGBTQ* Tech group practice a group form of conflict resolution. We created a #couldhavegonebetter channel. When a conversation strays off the rails (turning vindictive, becoming oppressive on the part of a member of a relatively privileged group, or evangelizing views that make others uncomfortable), a strategy that has worked well is to end the conversation with “That #couldhavegonebetter”, force-invite the users involved into the channel, and start with a careful breakdown of how the discussion turned problematic. This gives a place to discuss that isn’t occupying the main space; those who care about conflict resolution can join the channel. It’s not super private, but it’s the equivalent of taking someone aside in the hallway at a conference rather than calling them out in front of an auditorium full of their peers. De-escalation works wonderfully.

Keep meta-discussion from dominating all spaces. It’s a human tendency to navel-gaze, doubly so in a social space, where the intent of the members shapes the future of the space. That said, it can dominate discussion quickly, and so letting meta-discussion happen in channels separate from the thing it’s discussing can keep the original purpose of channels intact.

Allow the creation of exclusive spaces. Much of the time, especially socially, marginalized people need a place that isn’t dominated or doesn’t have the group who talks over them most: people of color need to escape white people, trans people need to escape cisgender people, people outside the US need space to be away from American-centric culture and assumptions, and not-men need to be able to have space that is not dominated by men. It has ended up being the least problematic to allow the creation of spaces that are exclusive of the dominant group, just to give breathing room. It feels weird, but like a slack focused on a marginalized group as a whole, sometimes even breaking things down further lets those at the intersection of multiple systems of oppression lighten the load a bit.

A chat system with a systemwide identity has different moderation needs than one that does not. A problem found on IRC is that channels are themselves the unit of social space allocation. There is no related space that is more or less intimate than the main group, and so conversations can’t be taken elsewhere, and channelization balkanizes the user group. With Slack, this is not true. Channels are cheap to create, and conversations can flow between channels thanks to hyperlinks.

Allow people to opt out generally, and opt in to uncomfortable or demanding situations. A great number of problems can be avoided by making it possible to opt out without major repercussions. Avoid lots of conversation in the must-be-present #general channel, however it’s been renamed (#announcements in one place, #meta in another). Default channels, auto-joined by new users, should be kept accessible. Work-topical channels should be kept non-explicit, non-violent spaces, so they are broadly accessible. Leave explicit imagery in its own channels, and let talk about the ills of the world be avoidable. And keep the volume low in places people can’t leave if they’ll be in the Slack during their workday.

Good luck, and happy Slacking!

Why MVC doesn't fit the web

A common set of questions that come up on IRC around node web services revolve around how to do MVC “right” using tools like express.

The short answer: Don’t.

A little history. “MVC” is an abbreviation for “Model, View, Controller”. It’s a particular way to break up the responsibilities of parts of a graphical user interface application. One of the prototypical examples is a CAD application: models are the objects being drawn, in the abstract: models of mechanical parts, architectural elevations, whatever the subject of the particular application and use is. The “Views” are windows, rendering a particular view of that object. There might be several views of a three-dimensional part from different angles while the user is working. What’s left is the controller, which is a central place to collect actions the user is performing: key input, the mouse clicks, commands entered.

The responsibility goes something like “controller updates model, model signals that it’s been updated, view re-renders”.

This leaves the model relatively unencumbered by the design of whatever system it’s being displayed on, and lets the part of the software revolving around the concepts the model involves stay relatively pure in that domain. Measurements of parts in millimeters, not pixels; cylinders and cogs, rather than lines and z-buffers for display.

The View stays unidirectional: it gets the signal to update, it reads the state from the model and displays the updated view.

The controller even is pretty disciplined and takes input and makes it into definite commands and updates to the models.

Now if you’re wondering how this fits into a web server, you’re probably wondering the same thing I wondered for a long time. The pattern doesn’t fit.

On the web, we end up with a pipeline something like “Browser sends request to server, server picks a handler, handler reads request and does actions, result of those actions is presented to a template or presentation layer, which transforms it into something that can be sent, which goes out as a response to the browser.”

request -> handler -> presentation -> response

It still often makes sense to separate out the meat of the application from the specifics of how it’s being displayed and interfaced to the world, especially if the application manipulates objects that are distinctly separate from the web. An example might be an accounts ledger, where it makes no sense to bind the web portions particularly tightly to the data model. That same ledger might be used to generate emails, to generate print-outs, and later to generate reports in a completely different system. The concept of a “model” or a “business domain logic” layer to an application makes sense:

request -> handler -> presentation -> response
               ^
               |
               v
        business logic

But some time in the mid-2000s, someone thought to try to shoehorn the MVC concept into this pipeline, and did so by renaming these components:

request -> controller -> model -> view -> response

And this is why we end up with relatively well-defined models, since that makes sense, while ‘views’ is a less-descriptive name for templating and presentation logic. What’s left ends up being called a ‘controller’, and we start a lot of arguments about whether a given bit of logic belongs there or in the model.

So in express, let’s refer to models and domain logic, to handlers and to templates. We’ll have an easier time of it.

Handlers accept web-shaped data: query strings and post data, and shape them into something the business logic can deal with. When the business logic emits something we should display, that same handler can pass it off to templates, or, in the case of data being rendered in the browser by a client there, serialize it directly as json and send it off as the response. Let the business logic know little about the web, unless its concern is the web, as in a content management system. Let our handlers adapt the HTTP interface to the business logic, and the responses out to our presentation layer, even if that’s as simple as filling in values in a template.
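A sketch of that separation; the `ledger` module, its data, and the route shape are all hypothetical:

```javascript
// Hypothetical domain module: knows nothing about HTTP.
var ledger = {
  balanceFor: function (accountId) {
    return { account: accountId, balance: 42 };
  }
};

// The handler adapts web-shaped input to the domain,
// and the domain's result back out as a response.
function balanceHandler(req, res) {
  var accountId = req.params.id;             // web-shaped input
  var result = ledger.balanceFor(accountId); // domain call
  res.json(result);                          // serialize directly as json
}

// Wired up in express, this might look like:
// app.get('/accounts/:id/balance', balanceHandler);
```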

We’ll all be a lot happier if MVC keeps its meaning as a paradigm for breaking up responsibility within a GUI.

Design Ethos

I just realized that my entire software design ethos is ‘power to the people’.

I started to argue over whether an interface (one that modifies some mutable object, however unfortunate it is) should no-op, throw an exception, or warn when it’s already been done once and runs again.

To no-op is to say “we know better than you and will do what we consider the Right Thing”.

To throw an exception is to say “we know better than you and will make you do what we consider the Right Thing”.

To warn the developer using the module is to say “we have more experience here, and say what we think … but your call. Go for it!”

A social software toolbox

Rate Limiting can be implemented as a way to deter high-cost actions, whether the cost of technical details like API calls, or socially expensive like posting comments, where one or two is easy to keep up with, but many can be a burden on the receiver. Well chosen, they can be invisible to users who are not actively being malicious; poorly chosen or bound to technical rather than social concerns, they can be arbitrary and frustrating limits.
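One common shape for this is a fixed-window counter per user; this is a minimal sketch, and the limit and window values are arbitrary examples:

```javascript
// A minimal fixed-window rate limiter, keyed by user id.
function makeLimiter(limit, windowMs) {
  var counts = {};
  var windowStart = Date.now();
  return function allow(userId) {
    var now = Date.now();
    if (now - windowStart >= windowMs) { // new window: start over
      counts = {};
      windowStart = now;
    }
    counts[userId] = (counts[userId] || 0) + 1;
    return counts[userId] <= limit;
  };
}

var allow = makeLimiter(2, 60000); // two actions per minute
allow('alice'); // true
allow('alice'); // true
allow('alice'); // false: over the limit, so deter or delay
```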

Tarpitting is adding rate limits that are just not satisfiable to a malicious user, frustrating them into giving up.

Delay can be a mild form of rate limiting that makes users who are overwhelming the system or other people experience the system as slower and less pleasant to use.

Blocking most often makes users invisible to each other. In the case of public postings, it usually means that one user can’t share the other’s postings or otherwise interact with them, though they can see posts.

Muting simply ignores an undesirable user’s posts.

It’s interesting to note that more marginalized people prefer to block, and less marginalized prefer muting. There are a lot of subtle dynamics in these interactions. Given a private backchannel that doesn’t respect blocking, blocking a user will cause a harasser to escalate privately.

Penalty box is a timed block, shadowban or teergrube that expires, giving users time to cool down. When under a user’s control, can help separate bad actor blocking from merely not wanting to deal with someone at the current time.

Private backchannel can allow someone who wishes to connect a way to do so without being public, but can also allow a harasser to privately act poorly while maintaining public good standing. Direct messages are Twitter’s backchannel; replies to author only are a mailing list’s backchannel.

Privacy groups are the permission model of Livejournal: posts can be restricted to a single privacy group (a list of users) and only viewed or shared within that group.

Friending is initiating a symmetrical relationship, complete only when confirmed by the other party.

Open follow is initiating a one-way relationship, usually expressing interest by the follower in the followee.

Approved follow is initiating a one-way relationship, as in open follow, but requiring the followee to approve the action, as in friending.

Private account is disabling public visibility of the posts in an account, usually making them vet followers as in approved follow.

Upvote/Downvote are a popular way to weed out chaff from a conversation, where offtopic, rude or poorly written comments are downvoted by a community, and popular, funny, or insightful comments are upvoted. It can be problematic when the culture of a community itself reinforces poor choices, and it’s subject to gaming via social campaigns.

Reflection is the act of restating a comment when replying to it. Requiring a commenter to first restate and reflect what the original poster said before posting their reply is an interesting way to try to suppress flame wars of misunderstanding, and also increase the expense of malicious comments. I know of no system that has ever implemented this, but it was proposed by @RebeccaDotOrg and I think it’s a fantastic idea for debate where actual exploration or consensus on a hot issue is interesting.

Shadowbanning is redirecting a malicious user to a dummy version of the site to interact with where their actions will never be seen by real human beings. Often combined with tarpitting or ratelimiting.

Sentiment analysis is a way to automatically try to ascertain whether a comment is positive or negative, or whether it’s inflammatory, and whether to trigger some of the other countermeasures.

Subtweet is commenting in a chronologically related but not directly connected conversation. A side commentary, usually among a sub- or in-group.

Trackback is automated notification to an original post or hosting service when a reply or mention is generated on another site.

Flat commenting is the form typically used by forum software, where posts are chronological or reverse chronological below a topic post.

Threaded commenting is used in some environments like Reddit, Metafilter, Live Journal and some email clients where each message is shown attached to the one it replies to, giving subtrees that often form entirely different topics.

Weakly threaded commenting is threading shown only for conversation entries from followers. Often implemented client-side, given an incomplete reply graph.

Real identity can cause some commenters to behave, particularly in contexts associated with their job.

Pseudonymous identity can give stability to conversations over time, showing that the same actors are present in conversations. If easy to create more identities, can yield sockpuppeting.

Anonymous identity can create a culture of open debate where identity politics are less prominent, but can let some people play their own devil’s advocate and can launch completely unaccountable attacks.

Cryptographic identities are interesting in that there is no central authority, and they often cannot be revoked (there’s no way to ban an identity systemically without cooperation). Cryptographic names are often not human-memorable, thanks to the constraints of Zooko’s Triangle. It’s possible to work around this, but the systems for doing so are cumbersome in their own right.

Invites are often used to make sure that the social group grows from a known seed; because social networks are often strictly divided by race and gender, this often serves to make the group homogenous over certain traits, despite not having selected for those traits specifically. Invites can also rate-limit the growth of any one group and, given enough seeding of minority or otherwise oppressed groups, let a more diverse pattern form, if the seeding is chosen carefully.

Invite trees are a pattern where each user can invite some other users, but is in some way ‘responsible’ for their behavior, which limits the possibility that invites are sold openly, and can in some cases keep out certain surveiling users.

I’m sure there are a great number of patterns I’ve missed, but cataloguing these and calling out the differences may help make us more aware of the tools we have at our disposal in creating social networks.

Why is it so hard to evolve a programming language?

Parsers.

We use weak parsing algorithms: often hand-written, left-leaning recursive descent parsers, sometimes PEGs. There’s usually a lexing layer that treats keywords specially, annotating them as a particular part of speech as a function not of the grammar but of the words themselves.

This makes writing a parser easy, particularly for those hand-written parsers. Keywords are also a major reason we can’t evolve languages: adding new words breaks old programs that were already using them.

The alternative is to push identification of keywords into the grammar, and out of the lexer. This means that the part of speech for a word can be determined by where it’s used. This allows some weird language, but it keeps things working well. Imagine javascript letting you have var var =. It’s not ambiguous, since positionally, a keyword can’t appear as a variable name. Whether the first var is a keyword or a variable name can’t be known without some lookahead, though: var = would make it a variable name and var foo would make it a keyword.

This usually means using better parsers. Hand written parsers could maintain a couple tokens buffered state, allowing an unshift or two to put tokens back when a phrase doesn’t match; generated parsers can do better and use GLR, and a fully dynamic parser working off of the grammar as a data structure can use Earley’s algorithm.
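A sketch of that buffered state, using the var var = phrase from above; the names are mine:

```javascript
// A token stream with a small pushback buffer: the kind of state
// a hand-written parser keeps for limited lookahead.
function makeStream(tokens) {
  var buffer = [];
  return {
    next: function () {
      return buffer.length ? buffer.pop() : tokens.shift();
    },
    unshift: function (token) { // put a token back to re-read later
      buffer.push(token);
    }
  };
}

var s = makeStream(['var', 'var', '=']);
var first = s.next();  // 'var': keyword or variable name?
var second = s.next(); // 'var' again; we need to see what follows
s.unshift(second);     // not sure yet, so put it back and try another phrase
```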

These are problematic for PEGs though. They won’t backtrack and figure out which interpretation is correct. Once a PEG has chosen a part of speech for a word, it sticks. That’s the rationale behind its ordered choice operator: one must have clear precedence. It’s in essence an implicit way to mark which part of speech something is in a grammar.

Backward-incompatible changes

It’s always tempting to get a ‘clean break’ on a language; misfeatures build up as we evolve it. This is the biggest disservice we can do our users: a clean break breaks every program they have ever written. It’s a new language, and you’re starting fresh.

Ways forward

Pragmas. "use strict" being the one Javascript has. They’re ugly and they don’t scale that well, so they have to be kept to a minimum. Version selection from mutually exclusive pragmas. This is what Netscape and Mozilla did to opt in to new features: <script language='javascript1.8'>. The downside here is that versioning is coarse, and doesn’t let you mix and match features. Scoping "use strict" to the function in ES5 was smart, in that it allows us to use the lexical scope as a place where the language changes too.

The complexity with "use strict" is that it changes things more than lexically: Functions declared in strict mode behave differently, and if you’re clever, you can observe this from the outside, as a caller, and that’s a problem for backward compatibility.
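One such observable difference is the value of this in a bare call:

```javascript
// A non-strict function called bare gets the global object as `this`;
// a strict-mode function gets undefined. A caller can tell them apart.
function loose() { return this; }
function strictly() { "use strict"; return this; }

strictly() === undefined; // true
// loose() === undefined is false when this file is non-strict (sloppy) code
```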

Support multiple sub-languages. In a parser that can support combining grammars (Earley’s algorithm and combinator parsers for pure LL languages in particular are good at this, though PEGs are not), someone can elect a different language within a region of the program. Language features can be left as orthogonal layers. How one would express that intent is unexplored, though. Too few people use the tools that would allow this.

Versions may really be the best path forward. Modular software can be composed out of multiple files, and with javascript in the browser in particular, we’ll have to devise other methods; transport of unparsed script is already complex.

We should separate the parser from the semantics of a language: let there be one, two, even ten versions of the syntax available, each pushing down to a semantic layer that is more easily versioned, or not versioned at all. This is where Python fell down without needing to: the old cruft could have been maintained and re-expressed in terms of the new concepts from Python 3.
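As a sketch of that separation, here are two hypothetical surface syntaxes lowering to one shared semantic core. Every name is illustrative, not any real system:

```javascript
// One semantic core: a tiny evaluator over a single AST shape.
// Syntax versions only have to produce that AST; the semantics never
// need to know which surface syntax a program was written in.
const evaluate = (node) =>
  node.op === 'add'
    ? evaluate(node.left) + evaluate(node.right)
    : node.value;

// "Version 1" syntax: infix, e.g. "1 + 2"
const parseV1 = (src) => {
  const [l, r] = src.split('+').map((s) => Number(s.trim()));
  return { op: 'add', left: { op: 'num', value: l }, right: { op: 'num', value: r } };
};

// "Version 2" syntax: prefix, e.g. "(add 1 2)"
const parseV2 = (src) => {
  const [, l, r] = src.match(/\(add (\d+) (\d+)\)/);
  return { op: 'add', left: { op: 'num', value: Number(l) }, right: { op: 'num', value: Number(r) } };
};

console.log(evaluate(parseV1('1 + 2')));     // 3
console.log(evaluate(parseV2('(add 1 2)'))); // 3
```

Adding an eleventh syntax means writing one more parser; the evaluator, and everything downstream of it, never changes.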

Confusion in Budapest

Today, in ‘European politics are inscrutable unless you’re there’, I discovered that while Hungary is a European Union member, they don’t use the Euro, but are still on the Forint, with a variable exchange rate. There’s resistance to fully adopting the currency here.

I changed money to Euros in Newark since I had time to kill, but that left me, when I arrived in Budapest, without money that small vendors will accept. Doubly so the bus, which requires exact fare.

I ended up being waved onto the bus, however much a mistake that was, because it was so full. Disaffected drivers are a particular frustration on bus routes for me.

I ended up at the exchange point to get on the Metro (K-P P+R), but without a valid ticket to exchange. I ended up having to wander around to figure out how to get away from the Metro station; it turns out it’s attached to a shopping mall. I withdrew 15.000 Ft., considered trying to get some food at Tesco, and decided against it and went to the train.

Tickets are confusing … I misvalidated my first one, destroying it. After some I-don’t-speak-Hungarian-you-don’t-speak-English with the ticket attendant, she showed me how to validate a ticket and I finally got on the Metro.

Got off downtown, realized I was at the wrong Hilton; of course there are two. I’m at the less convenient but much more beautiful one on the hill across the Danube. At least taxis are affordable here.

I didn’t notice when I booked this hotel that it’s at the top of a small mountain. It’s not a long walk to the conference, but it’s a steep one.

On my way to Budapest

0100 AT

Airports are already whitewashed by cost, but international travel even more so: probably 90% white people in Newark’s Terminal B that I saw, passengers and workers alike. It’s disturbing to see such a strong filter, and I wonder what pressures are selecting this way. Is it hiring for dual languages in a wing where most flights are to Europe, and so selecting for European languages? Or is there some more insidious bias?

0300 UTC

I’m flying over the Atlantic right now, three hours from Vienna. It’s night, by any measure — early in Vienna, rather late in Boston. I spent all day in airports, mostly waiting in Newark since my midmorning flight in didn’t really come that near my early evening flight out.

The trend toward denying passengers checked luggage did the usual damage on this flight: slow boarding, people cramming bags that did not fit into overhead compartments. I wonder if European flights have different carry-on size limits; mine was one half inch too big to fit in most of the bins, and so had to be put in sideways, taking more space than necessary. This feels like a classic case of engineers rounding measurements to a convenient number in their native unit, saying ‘close enough’. 33 cm totally equals 12 inches, right? Close enough.

I’m sitting with two delightful women, on their way to Iran; at least one’s a journalist, and I’ve not asked for more detail from the other. They’re kind and fun, and shared chocolate with me. I hope they make their connection — they’ve ten minutes between, and a whole day’s wait if they miss it. We bonded over the difficulty of loading bags in the bins; theirs misfit the same way mine did, and even my height and ability to use brute force to close things didn’t do the job.

I can’t sleep, since the flight’s a little bumpy, and the staff keep nudging my elbow in the narrow aisles. A 777-200 is a lot nicer than most planes I’ve been on recently, but it’s still arranged more for sheer numbers than for comfort.

I watched two films.

Since Bailey couldn’t come with me, I got to see ‘Guardians of the Galaxy’, which was fun, but I’m starting to get frustrated with the arbitrary plots of so many movies. I feel like in the past, directors at least tried to satisfy those of us whose sense of disbelief, while suspended, still works. Lately they do not. Every device serves the plot, not internal consistency, and every failure is arbitrary to the same purpose. It’s the kind of lazy storytelling that leads to killing characters for emotional impact, rather than driving situations where tough choices have to be made. The first bad guy was black, and the green woman had echoes of Star Trek’s Orion slave girls. The criticisms of her character development are true: she’s almost all prop to the men in the film, even if she does kick ass initially. There’s even the clichéd rescue scene toward the end.

Second came ‘Lucy’. It’s as if ‘Kill Bill’ merged with ‘2001: A Space Odyssey’ and ‘I, Robot’. Bad science, but at least somewhat internally consistent. Bizarrely philosophical even while there’s wanton killing on screen. I can add it to my long, long list of movies that make me say ‘huh’ at the end. I do wish there were a convention for ‘humans have potential’ other than ‘humans only use 10% of their brain’. It’s so trite at this point that it makes me angry.

0500 UTC

Now we’re over the English Channel, heading across France. My brain is trying to comprehend the path we’re taking, given the Mercator projection of the map they display it on and the 11,000 m altitude. One part of me wants to round that down to ‘well, just barely off the surface’, and the rest thinks it’s unfathomably high, backed up by the -50 °C outside the aircraft.

At that altitude, pressure should be low enough that sustaining large-mammal life is almost impossible without hibernation-level metabolic change. The temperature, too, would kill within minutes. I wonder what the air pressure is inside the cabin. I’ve never found a good physical indicator using my body, but my ears have popped continuously for the last six hours.

0600 CET

I can barely see the sunrise, I think over the Rhein plain, from my seat since I’ve an aisle. It’s pretty, a dull orange and deep blue, separated by an even deeper blue layer of clouds, slowly lightening.

I’m not sure whether jetlag will hit me or not — I’ve been up for 20 hours or so, but feel like it’s morning. I hope that this bodes well. We’ll see if I make it to tonight. I think there’s a speaker’s dinner, or some other gathering leading up to the conference tomorrow. I should check, but there’s no internet connection in-flight, and we’ll see what happens on that front when I get to Budapest. Maybe I can nap on the last flight and arrive truly refreshed. We’ll see if I get a window to lean against. Chances are aisle or middle though. If a Dash 8 has a middle.

0730 CET

I’m most worried now about whether I can get a SIM card and enough data service to be useful while I’m at the conference. I suspect it’ll be fine, but likely a little annoying.

Nodevember 2014 - Sunday

@bejonbee talking about React.

He works for an interesting group of people — not the usual consultancy, but a wealth-management and self-sufficiency group, doing education. Interesting premise.

Mostly a 101 to react, but nice to see someone more familiar tie some things together.

The implications of the DOM-diffing behavior are really interesting, in that modifications made outside of React are preserved, not overwritten — React really does play nice with others.

JSX is interestingly implemented; solid work by people who really understand parsers, but they’re somewhat simplistic about the lexing, so that class is a reserved word, meaning HTML class= had to be renamed className=.

@funkatron is giving a talk on “Open Sourcing Mental Illness”.

His talk’s been recorded 14 times(!) and he has copies up at funkatron.com.

Comparing eye disease — needing corrective lenses — to mental illness. Awesome! “How many people here have just been told you need to ‘squint harder’?” … no hands.

“How many of you would feel comfortable talking to a coworker you knew pretty well about having cancer?” Most hands.

“How many would feel comfortable with talking about your mental health?” Maybe 1 in 5.

Moderate depression has a similar level of disability to MS or severe asthma.

Severe depression has a similar level of disability to quadriplegia.

“You are so brave and quiet; I forgot that you were suffering”
–Ernest Hemingway

Watching @derickbailey‘s talk on development environments and getting out of IDEs, looking for advice to give to developers at PayPal.

I just realized that Grunt looks a lot more amazing if you’re coming from a heavy IDE with lots of features but no flexibility. It’s amazing what perspective looks like!

And now to @hunterloftis “We are all game developers”

He built a game for a major music artist in the UK in three weeks, using software rendering. Great art and integrating the music.

Now: 1.7 billion WebGL-capable devices were shipped last year. It’s available on iOS 8!

“We avoided a lot of work by avoiding state” — since most rendering is built with pure functions from a tiny, immutable state, lots of things like animation speed came out naturally. Then add websockets and the state from one user’s view controls a renderer on the server. Super clever.

requestAnimationFrame can drop frames, so elapsed time has to be calculated, and perhaps multiple simulation ticks run (to assign position and state), to keep simulation time constant and not dependent on the computer’s speed. He points out that this affects testability, and that rendering and simulation have to be decoupled.

Simulate faster than rendering: otherwise, tearing and sync problems.
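The fixed-timestep pattern he’s describing can be sketched with an accumulator loop. This is a generic version of the technique, not his code; the names are illustrative:

```javascript
// A minimal fixed-timestep loop, written outside the browser so it can
// run anywhere: the simulation ticks at a fixed dt, independent of how
// often (or how irregularly) the renderer asks for frames.
const DT = 1 / 60; // simulation step, in seconds

function createWorld() {
  return { x: 0, velocity: 10, accumulator: 0 };
}

// Advance the simulation by exactly one tick: a deterministic step.
function tick(world) {
  world.x += world.velocity * DT;
}

// Called with the real elapsed time since the last frame; runs zero,
// one, or several ticks to catch up, so dropped frames don't change
// the physics, only how many ticks run per frame.
function advance(world, elapsedSeconds) {
  world.accumulator += elapsedSeconds;
  while (world.accumulator >= DT) {
    tick(world);
    world.accumulator -= DT;
  }
}

const world = createWorld();
advance(world, 0.5); // e.g. one long, janky half-second frame
console.log(world.x); // identical on every run with the same inputs
```

Because `tick` advances by a fixed dt, replaying the same inputs always produces the same positions, which is what makes a bug like the flapping box reproducible and its fix verifiable.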

Made the audience laugh with an awesome bug: trying to simulate rigid body physics on a simple box which, while he tried to make it behave right, flapped around the screen like a bird, squishing and flopping before it popped into its final shape. The take-away, though, is that if physics is written deterministically and not dependent on the render loop, the bug is repeatable — and it’s possible to know the bug was fixed, since the simulation is deterministic.

And techniques for controlling state and using immutable objects apply greatly to DOM rendering and apps too. React uses determinism to great effect.

talks I missed

I’m bummed that I’m missing @thlorenz‘ talk on heap dumps for mere mortals, but I’m making a note to have good conversations with him after the fact. (He’s already got his slides up!)

I heard that @cdlorenz’ “Runtime Funtimes”

@nicholaswyoung‘s (Original Machine) talk on microservices.

“We learn more from things going wrong than things going right”

Divorced the CMS domain from the podcast feed domain, and separated out the web player software.

“When we release a new show, we get 10,000 requests to download the show in the first few seconds. Ruby wasn’t the utopia we thought it was.”

“I build a monolith because that’s what Rails conditions you to do. The admin dashboard, the analytics, the feed generation all in one app. You put your controllers in your controllers directory and your views in the views directory, and you end up with a monolith.”

“Our media services would hold requests and it would take users 4 seconds to load our application”.

“I didn’t initially know how to make node work. First thing I did was write a monolith. I thought the runtime would save me. I’m smart.”

Seeing the core broken up into such isolated modules, you get the idea that breaking things up is a good idea.

“It’s an application. I guess you could call that a service.”

Nodevember 2014 - Saturday

Nodevember 2014 kicked off this morning in Nashville after a super fun Nodebots meetup last night.

@elizabrock‘s keynote was a fantastic review of where we came from and who we are as an industry: Not all computer scientists, but doing computer science; not mathematicians, but often doing mathematics. Not just engineers but doing engineering. Not all artists, but doing art.

We’re training the fourth generation of programmers now: The women who programmed the early computers during World War II could be our children’s great grandparents.

I missed @jeffbski’s talk on hardening Node.JS for the Enterprise, but the slides look great and I heard great stuff. Also fighter jets, befitting a talk from someone who was USAF!

Good talk from @ifandelse on ES6. The future is now, for sure. Coming up fast.

@mrspeaker gave a great, fun, funny, reference-filled talk on Gonzo Game Development. Lots of great quotes, and talk about the line between engineering and art.

@katiek2 gave a great intro to Nodebots, and what’s needed to run a good meetup. It’s tempting to do one in Boston / Somerville. Totally @rwaldron‘s turf, and would probably be awesome.

My own talk went well, though my voice gave out part way through and I ran fast. I wasn’t planning on questions, but I guess if you engage your audience, they’ll ask ‘em anyway.

A good intro to couchdb from @commadelimited

And now, “Make art, not apps” by @thisisjohnbrown — simple algorithms! Relating touch to display. Looking up file formats so you can aim glitchiness at interesting places when you corrupt data. Simple trials of “let’s see where the code takes us” become best-of-show art pieces.

He made a “plinko” board (like pachinko), and wired it up to a projector and board, and used it to trigger particle animations. The demo gets “Aaaaahs” from the crowd. Super simple effect but totally wow.

He showed off Iannis Xenakis’ music from the 1950s generated from gas molecule interactions, Frieder Nake’s art created with markov chains. People have been doing wonderful art with code and algorithms for a long time!

And ending with a demo of the Neurosky device and the Neuronal-Synchrony library together, reading brainwaves and generating output, both audio and visual — imagine that being combined into a multi-person dance party!

Homework: Do your own art. Lots of options!

  • Uncontext: Structured data source without the context or rules for how they’re generated, but a source of data to do art with.
  • p5.js

Walter Benjamin’s 1936 essay on mechanical reproduction and what it has done to art: change ‘mechanical’ to ‘digital’ and you have a manifesto for creative coding.

Time for a party!

Not a moment too soon

It wasn’t a moment too soon — and in fact a few moments too late — that I moved my site from Wordpress to Hexo. The other two dozen Wordpress sites on the server — not just my friends’ blogs — running versions from 3.8 to 4.1, were broken into, and scripts were planted that would send mail. Some interesting features of the hack though!

  • They installed PHP with innocuous-sounding files like gallery.php inside of plugins and themes for Wordpress.
  • They installed a .so file, loaded it into the /usr/bin/host program with a dynamic loader trick, then deleted the .so so it’d be hard to find. This created a daemon used to send junk mail, and quite efficiently too.
  • Having PHP record what URL was posted to when sending mail is the best thing ever for tracking this down.
  • lsof is great for verifying that things are shut down.
  • They wrote to every directory that they had privilege to that was web accessible. Very adept hack.

Ugh.

Blog Migration

I just moved my blog over from a Wordpress installation to Hexo, in a fit of frustration after five friends’ blogs were broken into and used to spam, apparently via a hole in Wordpress’ Jetpack. Leaving complex software unpatched for more than a day is becoming impossibly dangerous, and I don’t use most of the features of Wordpress anyway: I’ve increasingly had an allergy to comments and most other dynamic features, and I author in Markdown. Being able to do that tidily in vim makes me happier than editing in a web browser anyway.

I chose Hexo because it had a working migrator to import a dump from Wordpress; no other reason, really, but its design works well enough (even if it is slow to generate the static files given my nearly 1500 posts). URLs were preserved with little hackery, too, so I didn’t break the web in the process.

I still want something better: I’d be happy without pagination to avoid rebuilding a 1500-entry latest-first archive every time I add a post; style files don’t seem to get updated properly (that is probably a more trivial bug that I could fix), and something that’s more directly in tune with the dependencies between the source files and the generated pages would be delightful. Maybe I need to make something with Broccoli or even just make(1) or tup.

Telcopunk

So we’ve had steampunk and dieselpunk, cyberpunk and seapunk.

Me, I’m going to call my aesthetic ‘Telcopunk’.

I favor practicality.

I believe in universal service and universal access.

Utilitarianism rules.

Research is important.

Unions are good.

Work locally. Think globally.

Distance is expensive.

Connecting people is important.

Information is and should be a primary concern of industry.

Designs should be made for durability.

An important job is building and maintaining infrastructure.

Privacy — but not security — is a core value, and standards of conduct reflect this.

Jeans. Work boots. Gloves.

Conceive things, then make them.

'How do I get good at programming?', I'm asked.

Read. Write. Publish. Repeat.

And in general, people’s opinions are meaningless without data to back them up. So ignore the haters.

Ignore the people saying you’re doing it wrong unless your job depends on it or they have good reasons.

People will tell you “javascript will die” or “ruby is just a fad”

Ignore the haters.

But also ignore the haters who say “java is stupid.”

And ignore the haters who say “OO is wrong”

And ignore the ones who say “OO is the only way” Or “OO is the best way” too.

But listen to the people who say “have you considered a different approach?”. Those are the good ones.

Strong suggestions for structurally combatting online harassment

Craig Newmark asked for suggestions and here’s some things I came up with:

  • Create block functions that actually work and completely block all interaction with a user.
  • Create a mute function that doesn’t get tangled in block.
  • Respond to abuse reports, generating at minimum an inter-user block; when they actually involve any kind of escalation by the abuser, block that user from the service (or take other highly quarantining action).
  • Encourage use of pseudonyms rather than complete anonymity, if only to encourage a stable handle to block by.
  • Spam-fighting-like statistical models to detect outlier behavior — repeated first contacts by someone who’s been reported as harassing is one particularly significant sign. Being proactive and confirming with the harassed user might even make sense. “Is @username bothering you?”
  • Allow communities to segment when possible, rather than encouraging all users to share one single graph.
  • At least three-level privacy controls per account: Public, initial contacts restricted to friends, and all contact restricted to friends.
  • Create transparent policies and processes, so we can know how effective the service will be in supporting us if harassed, rather than shouting into the void, wondering if anyone actually reads these reports. If the policies or processes change, say something!
  • Do use decoy selections in report abuse forms, but keep it simple: “This is annoying” vs “this is dangerous” can be differentiated, and the decisions about how to handle those should be different.
  • Don’t patronize the people you’re trying to protect. Leave choices in the hands of those needing protection when it’s possible. For tools for protection that have downsides (social cost, monetary cost, opportunity cost), let those needing protection opt in or opt out. If the tools are independent of each other, let them be chosen à la carte.

And a rule of thumb:

If you spend less time fighting harassment than you do fighting spam, your priorities are wrong. If you take spam seriously and don’t take harassment seriously, you’re making it worse.

An unofficial mission statement for the #node.js IRC channel

This is the mission statement I seek to uphold when I provide support on the Freenode #node.js channel.

To support users of node.js and their related projects to create a collaborative,
creative, diverse, interested, and inter-generational sustainable culture of
programmers of all skill levels, with room and encouragement to grow.

One of the great criticisms of IRC channels for software support is that they’re often the blind leading the blind. Experts have little direct incentive to engage with half-formed questions, and it takes some real skill to elicit good questions that can be answered accurately. There’s some incentive for some members to use questions to promote their favorite tools, and to show off clever answers not necessarily in the best interests of the person asking.

The other problem is time of day — American nights and weekends have a lull, and questions asked then are often left to the void. Hard-to-answer questions — vague and incomplete ones especially — are the easiest to ignore. Let’s do the hard work to encourage good discussion, even among the less carefully asked, hurried questions.

We can do this and be unusual among technical channels. We’ve the critical mass to do it, and we’ve a great culture to start with. Let’s keep it up!

A Tale of Two Webs

originally posted on Medium

There’s a sharp divide between the people who make the Web, all of us, everywhere, and Silicon Valley Tech.

It’s a cultural divide I’ve seen come up again and again and again in discussions of tech culture.

On one side, we have the entitled, white frat-boy mentality of a lot of Silicon Valley start-up companies, with a culture going back years in a cycle of venture capital, equity, buy-out or IPO, repeat; a culture often isolated from failure by the fact that even the less amazing exits are still a solid paycheck. I suggest that this grew out of American industrial culture, the magnates of the nineteenth century turned inward into a mill of people all jockeying to be the next break-out success.

On the other side, we’ve the people who make the Web outside those silos. The lone designer at a traditional media publisher, doing the hard work to adapt paper print styles to the rapid publishing and infinite yet strangely shaped spaces of browser windows. The type designers who’ve now made their way out of lead blocks and work in Bézier curves. The scientist at CERN who realized that if every document had an address, scientific information would form a web of information. They don’t labor in a Tech Industry; they labor in their industries — all of them — connected by a common web.

In media, it appears as one giant “Tech industry”, and perhaps this is bolstered by the fact that a great number of people don’t know what a lot of us do — a software developer and a designer are so much the same job to someone who’s not paying attention to the details.

And yet, on Wednesday, a great many people turned their Twitter avatars purple in support of a family who’s lost a child to cancer. Looking over who they were, something dawned on me: They were some of the best and brightest on the Web. Authors, developers, designers. The people who know and know of @meyerweb are the people who make the Web. This is the Web I care about, have always cared about. It’s the web of caring and sharing, of writing and collaborating. We take care of our own.

In skimming over the people who’ve gone purple, I notice one thing: The bios that list locations say things like “Cleveland, OH”, “Chicago, IL”, and “Cambridge, MA”. “Bethesda, MD”, “Phoenix, AZ”, “Nashville, TN”. “1.3, 103.4”. Their titles are “type designer”, “map maker”, “standards editor”, “librarian”, “archivist”.

And far, far down the list, a few “San Francisco, CA” and “San Jose, CA”, “Software Developer” and “Full-stack engineer”.

2112

You do occasionally visit Boston Public Library, yes?
If not, get on it! You were raised in and on libraries. They are in your blood!

You called me out rightly on that one! I’ve never actually been inside the BPL
— it’s on the Green line, the cantankerous part of the subway — and I just
haven’t been out there. Somerville’s is pretty limited — not nearly as big as
Englewood’s library, and it’s got a selection that’s definitely not aimed at
me.

I just saw the Arlington library night before last, actually, and it’s this big
huge modern building, it reminds me of the Koelbel library we used to go to.
It’s the first one I’ve been so excited to try to go to in a while.

It’s funny that you bring this up right now. I’ve been reading article after
article for the last year, but especially in the last weeks by librarians and
book publishers and authors talking about what the role of libraries are in a
world where it’s relatively easy to get ahold of the actual text anywhere and
anywhen.

There’s a whole argument that libraries are obsolete; a lot of this came out of
the crazy world of the California tech scene, where there’s this huge
Libertarian ‘government is evil, technology will solve all our woes’ thinking,
but that tends to assume that everyone is on average white, male, and upper
middle class. They’ve got a point, though, that for pure access to thought and
information, the Internet has done something unprecedented.

But libraries serve a few other purposes that e-books and the Internet can’t solve.
So many of my queer friends pointed out that libraries were their refuge as
kids and teenagers, from a world that was pretty intent on being horrible to them.
Often they come from families that were more than borderline abusive, and the
library was their safe place. There’s a whole generation of us for whom that rings
true, and kids coming of age now less often say that — but there’s never been
anything to replace that need for them.

Libraries are one of the few first-class public services, one of the few that
historically has ignored what economic class you’re from and has just provided
a service to everyone. That’s starting to change in some ways — inner city
libraries are starting to think of themselves as intervention points for kids
who won’t have access to reading before school, for poor families who can’t
cross that ‘digital divide’ and get on the Internet, they’re buying computers
and setting up more and more space for non-book-oriented services. They’re
focusing on the poor around them and abandoning the universal service model.

(I read a great quote today — “In Europe, public services are for everyone. In
the US, public services are for the poor who can’t afford a private
alternative” — and libraries are one of the few services where that’s not been
true.)

I’ve never been too keen on the model of librarians-as-authorities to appeal to
for information, but even so, having someone who knows the information out
there and can guide you is super important — it’s the role teachers really
should play, but don’t.

There’s a lot of thoughts on this rattling around in my brain trying to escape
coherently, but nothing’s made it out beyond this yet, and certainly not me
figuring out how I fit into it yet. Libraries are in my blood, but I’m not sure
if the thing I’m after is there, or if it’s something more abstract that I’m
chasing.

Anyway just wish we could be sharing another book together.

I’d like that, a lot. I think one thing that’s been lost in the mostly
fast-paced tech world is sharing thoughts about a big piece of writing. I
comment on blogs and articles, and discuss on Twitter a lot, but books don’t
have the convenient handles where you can just link to them and highlight
something and say “THIS is what’s right about this”. I want to share some of
those things and it’s not happening as much as it used to. I miss sharing them
with you!

Aria

Recipe: Storm in the Garden

Ingredients

  • 10 ml lavender vodka
  • 10 ml orange vodka
  • 10 ml hibiscus vodka
  • 200 ml ginger ale
  • ice

Instructions

  1. Drop the ice in a pint glass, pour in the ginger ale. Add the vodkas layered gently on top, ending with the bright red hibiscus.

Preparation time: 2 minute(s)

Number of servings (yield): 1

My rating 5 stars:  ★★★★★

Having vs. Owning | ps pirro

Sometimes people get confused about the difference between having something and owning it.

“I have an ipod” signals ownership. “I have a dog,” or a child, or a spouse, implies a relationship, a mutuality between sovereigns. Things get messed up for us, and for those with whom we are in relationship, when we confuse the one for the other.

Ownership denotes control. Relationship is wrapped up in reciprocity.

Ownership is unilateral. In relationship, something is always owed to the other. Always.

As a general rule, if a thing is alive — and for the animists among us, this includes pretty much everything — what you have is a relationship. Even if the law says otherwise.

Having vs. Owning | ps pirro.