A social software toolbox

Rate limiting can be implemented as a way to deter high-cost actions, whether the cost is technical, like API calls, or social, like posting comments, where one or two are easy to keep up with but many become a burden on the receiver. Well chosen, limits can be invisible to users who are not actively being malicious; poorly chosen, or bound to technical rather than social concerns, they can be arbitrary and frustrating.
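
A common way to implement this is a token bucket per user, per action; here is a minimal sketch (the class name, numbers, and API are all illustrative, not any particular service's):

```javascript
// Minimal token-bucket rate limiter sketch. Capacity allows bursts;
// refillPerSecond sets the sustained rate. All names are illustrative.
class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;
    this.tokens = capacity; // start full: normal users never notice
    this.refillPerSecond = refillPerSecond;
    this.last = Date.now();
  }
  tryTake(now = Date.now()) {
    const elapsed = Math.max(0, (now - this.last) / 1000);
    this.last = now;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSecond);
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // action allowed
    }
    return false; // over the limit; deny, delay, or tarpit
  }
}

// A socially chosen limit: a burst of five comments, then one per minute.
const commentLimit = new TokenBucket(5, 1 / 60);
```

The social part is choosing the capacity and refill rate so that an engaged human never sees the limit while a flooder hits it immediately.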

Tarpitting is adding rate limits that a malicious user simply cannot satisfy, frustrating them into giving up.

Delay can be a mild form of rate limiting that makes users who are overwhelming the system or other people experience the system as slower and less pleasant to use.
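
One sketch of delay, with hypothetical function names and scaling constants, is latency that grows with a user's recent activity:

```javascript
// Delay sketch: respond slower to users who are hammering the system.
// `recentActions` would come from a counter such as a rate limiter;
// the threshold and scaling constants are illustrative.
function delayFor(recentActions, threshold = 10, perAction = 250, maxMs = 5000) {
  if (recentActions <= threshold) return 0; // normal users: no delay
  return Math.min(maxMs, (recentActions - threshold) * perAction);
}

// Wrap a handler so heavy users simply experience a slower site.
function withDelay(handler, getRecentActions) {
  return async (user, ...args) => {
    const ms = delayFor(getRecentActions(user));
    if (ms > 0) await new Promise((resolve) => setTimeout(resolve, ms));
    return handler(user, ...args);
  };
}
```

Because the delay ramps gradually, the punishment is invisible at normal usage and only becomes obvious under flooding.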

Blocking most often makes users invisible to each other. In the case of public postings, it usually means that one user can’t share the other’s postings or otherwise interact with them, though the posts themselves remain visible.

Muting simply ignores an undesirable user’s posts.

It’s interesting to note that more marginalized people tend to prefer blocking, while less marginalized people prefer muting. There are a lot of subtle dynamics in these interactions: given a private backchannel that doesn’t respect blocking, blocking a user can cause a harasser to escalate privately.

Private backchannel can allow someone who wishes to connect a way to do so without being public, but can also allow a harasser to privately act poorly while maintaining public good standing. Direct messages are Twitter’s backchannel; replies to author only are a mailing list’s backchannel.

Privacy groups are the permission model of LiveJournal: posts can be restricted to a single privacy group (a list of users) and only viewed or shared within that group.

Friending is initiating a symmetrical relationship, complete only when confirmed by the other party.

Open follow is initiating a one-way relationship, usually expressing interest by the follower in the followee.

Approved follow is initiating a one-way relationship, as in open follow, but requiring the followee to approve the action, as in friending.

Private account is disabling public visibility of the posts in an account, usually requiring the owner to vet followers as in approved follow.

Upvote/Downvote are a popular way to weed out chaff from a conversation, where offtopic, rude or poorly written comments are downvoted by a community, and popular, funny, or insightful comments are upvoted. It can be problematic when the culture of a community itself reinforces poor choices, and it’s subject to gaming via social campaigns.

Reflection is the act of restating a comment when replying to it. Requiring a commenter to first restate what the original poster said before posting their reply is an interesting way to try to suppress flame wars of misunderstanding, and it also increases the cost of malicious comments. I know of no system that has ever implemented this, but it was proposed by @RebeccaDotOrg, and I think it’s a fantastic idea for debates where actual exploration of, or consensus on, a hot issue is the goal.

Shadowbanning is redirecting a malicious user to a dummy version of the site to interact with where their actions will never be seen by real human beings. Often combined with tarpitting or ratelimiting.

Sentiment analysis is a way to automatically try to ascertain whether a comment is positive or negative, or whether it’s inflammatory, and whether to trigger some of the other countermeasures.
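
A toy version of the idea, with illustrative word lists standing in for a real trained classifier, shows the shape: map text to a score, then gate other countermeasures on it.

```javascript
// Toy sentiment scorer. Real systems use trained classifiers, but the
// interface is the same. The word lists here are illustrative stand-ins.
const POSITIVE = new Set(['thanks', 'great', 'love', 'helpful']);
const NEGATIVE = new Set(['idiot', 'stupid', 'hate', 'awful']);

function sentimentScore(text) {
  let score = 0;
  for (const word of text.toLowerCase().match(/[a-z']+/g) || []) {
    if (POSITIVE.has(word)) score += 1;
    if (NEGATIVE.has(word)) score -= 1;
  }
  return score;
}

// A negative enough score might trigger delay, rate limits, or human review.
function looksInflammatory(text, threshold = -2) {
  return sentimentScore(text) <= threshold;
}
```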

Subtweet is commenting in a chronologically related but not directly connected conversation. A side commentary, usually among a sub- or in-group.

Trackback is automated notification to an original post or hosting service when a reply or mention is generated on another site.

Flat commenting is the form typically used by forum software, where posts are chronological or reverse chronological below a topic post.

Threaded commenting is used in some environments like Reddit, Metafilter, LiveJournal and some email clients where each message is shown attached to the one it replies to, giving subtrees that often form entirely different topics.

Weakly threaded commenting shows threading only for conversation entries from accounts the viewer follows. Often implemented client-side, given an incomplete reply graph.
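
Client-side thread assembly from a flat, possibly incomplete reply list can be sketched like this (the message shape is hypothetical):

```javascript
// Build a threaded view from a flat list of { id, parentId, text }.
// With an incomplete reply graph, as a client often has, replies whose
// parent we never saw fall back to the top level: weak threading.
function thread(messages) {
  const byId = new Map(messages.map((m) => [m.id, { ...m, replies: [] }]));
  const roots = [];
  for (const node of byId.values()) {
    const parent = node.parentId != null ? byId.get(node.parentId) : undefined;
    if (parent) parent.replies.push(node);
    else roots.push(node); // no parent, or parent not fetched
  }
  return roots;
}
```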

Real identity can cause some commenters to behave, particularly in contexts associated with their job.

Pseudonymous identity can give stability to conversations over time, showing that the same actors are present in conversations. If easy to create more identities, can yield sockpuppeting.

Anonymous identity can create a culture of open debate where identity politics are less prominent, but can let some people play their own devil’s advocate and can launch completely unaccountable attacks.

Cryptographic identities are interesting in that there is no central authority, and they often cannot be revoked (there’s no way to ban an identity systemically without cooperation). Cryptographic names are often not human-memorable, thanks to the constraints of Zooko’s Triangle. It’s possible to work around this, but the systems for doing so are cumbersome in their own right.

I’m sure there are a great number of patterns I’ve missed, but cataloguing these and calling out the differences may help make us more aware of the tools we have at our disposal in creating social networks.

Why is it so hard to evolve a programming language?

Parsers.

We use weak parsing algorithms — often hand-written recursive descent parsers, sometimes PEGs — usually with a lexing layer that treats keywords specially, annotating them as a particular part of speech as a function not of the grammar but of the words themselves.

This makes writing a parser easy, particularly for those hand-written parsers. Keywords are also a major reason we can’t evolve languages: adding new words breaks old programs that were already using them.

The alternative is to push identification of keywords out of the lexer and into the grammar. This means that the part of speech for a word can be determined by where it’s used. It allows some weird-looking programs, but it keeps things working. Imagine javascript letting you write var var =. It’s not ambiguous, since a keyword can’t appear in a variable-name position. Whether the first var is a keyword or a variable name can’t be known without some lookahead, though: var = would make it a variable name, and var foo would make it a keyword.

This usually means using better parsers. Hand-written parsers can maintain a couple of tokens of buffered state, allowing an unshift or two to put tokens back when a phrase doesn’t match; generated parsers can do better and use GLR; and a fully dynamic parser, working off the grammar as a data structure, can use Earley’s algorithm.
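
A sketch of that buffered-token trick, using the hypothetical var var = case above (this micro-grammar is invented purely for illustration):

```javascript
// Sketch of the buffered-lexer trick: the lexer emits bare words, and the
// parser decides part of speech positionally, unshifting tokens back when
// a phrase doesn't match. The grammar and names here are hypothetical.
function lex(src) {
  return src.match(/[A-Za-z_]\w*|=|;|\S/g) || [];
}

function parseStatement(tokens) {
  const first = tokens.shift();
  if (first === 'var') {
    const next = tokens.shift();
    if (next === '=') {
      // `var =` : "var" was a variable name after all; put "=" back.
      tokens.unshift(next);
      return { type: 'assignment', target: 'var' };
    }
    // `var foo` : "var" really was the keyword.
    return { type: 'declaration', name: next };
  }
  return { type: 'assignment', target: first };
}
```

Here the word var is only a keyword by position; the lexer never had to decide.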

This approach is problematic for PEGs, though: they won’t backtrack and figure out which interpretation is correct. Once a PEG has chosen a part of speech for a word, it sticks. That’s the rationale behind its ordered choice operator: one alternative must have clear precedence. It’s in essence an implicit way to mark, in the grammar, which part of speech something is.

Backward-incompatible changes

It’s always tempting to get a ‘clean break’ on a language; misfeatures build up as we evolve it. This is the biggest disservice we can do our users: a clean break breaks every program they have ever written. It’s a new language, and you’re starting fresh.

Ways forward

Pragmas: "use strict" being the one Javascript has. They’re ugly and they don’t scale well, so they have to be kept to a minimum. Version selection via mutually exclusive pragmas is what Netscape and Mozilla did to opt in to new features: <script language='javascript1.8'>. The downside here is that versioning is coarse, and doesn’t let you mix and match features. Scoping "use strict" to the function in ES5 was smart, in that it lets the lexical scope be a place where the language changes too.

The complication with "use strict" is that it changes things more than lexically: functions declared in strict mode behave differently, and if you’re clever, you can observe this from the outside, as a caller, and that’s a problem for backward compatibility.
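
A small demonstration of both points, the scoped pragma and its observability from outside (isStrict is an illustrative helper, not a standard API):

```javascript
// "use strict" scoped to one function changes behavior observably.
// Function-constructed functions are non-strict unless the body opts in,
// regardless of the surrounding code's mode.
const sloppy = new Function('return this;');
const strict = new Function("'use strict'; return this;");

// Inside: `this` coerces to the global object only in sloppy mode, so
// sloppy.call(undefined) !== undefined, while strict.call(undefined) === undefined.

// Outside: a caller can detect strictness without any cooperation, because
// accessing .caller on a strict-mode function throws a TypeError.
function isStrict(fn) {
  try {
    void fn.caller;
    return false;
  } catch (e) {
    return true;
  }
}
```

That detectability is exactly why "just add a pragma" isn’t a free backward-compatibility story.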

Support multiple sub-languages. In a parser that can combine grammars (Earley’s algorithm and combinator parsers for pure LL languages are particularly good at this, though PEGs are not), someone can elect a different language within a region of the program, and language features can be left as orthogonal layers. How one would express that intent is unexplored, though; too few people use the tools that would allow this.

Versions may really be the best path forward. Modular software can be composed out of multiple files, but with javascript in the browser in particular, we’ll have to devise other methods; transport of unparsed script is already complex.

We should separate the parser from the semantics of a language: let there be one, two, even ten versions of the syntax available, and push down to a more easily versioned (or unversioned) semantic layer. This is where Python fell down without needing to: the old cruft could have been maintained and reformed in terms of the new concepts from Python 3.

Confusion in Budapest

Today, in ‘European politics are inscrutable unless you’re there’, I discovered that while Hungary is a European Union member, they don’t use the Euro; they’re still on the Forint, with a floating exchange rate. There’s resistance here to fully adopting the currency.

I changed money to Euros in Newark since I had time to kill, but that left me arriving in Budapest without money that small vendors will accept. Doubly so the bus, which requires exact fare.

I ended up being waved onto the bus, however much of a mistake that was, because it was so full. Disaffected drivers are a particular frustration on bus routes for me.

I got to the exchange point for the Metro (K-P P+R), but without a valid ticket to exchange. I had to wander around to figure out how to get away from the Metro station; it turns out it’s attached to a shopping mall. I withdrew 15.000 Ft., considered trying to get some food at Tesco, decided against it, and went to the train.

Tickets are confusing … I misvalidated my first one, destroying it. After some I-don’t-speak-Hungarian-you-don’t-speak-English with the ticket attendant, she showed me how to validate a ticket and I finally got on the Metro.

Got off downtown, realized I was at the wrong Hilton; of course there are two. I’m at the less convenient but much more beautiful one on the hill across the Danube. At least taxis are affordable here.

I didn’t notice when I booked this hotel that it’s at the top of a small mountain. It’s not a long walk to the conference, but it’s a steep one.

On my way to Budapest

0100 AT

Airports are already whitewashed by cost, but international travel even more so: probably 90% of the people I saw in Newark’s Terminal B were white, passengers and workers alike. It’s disturbing to see such a strong filter, and I wonder what pressures are selecting this way. Is it hiring for dual languages, in a wing where most flights are to Europe, and so selecting for European languages? Or is there some more insidious bias?

0300 UTC

I’m flying over the Atlantic right now, three hours from Vienna. It’s night, by any stretch of the imagination — early in Vienna, rather late in Boston. I spent all day in airports, mostly waiting in Newark, since my midmorning flight in didn’t really come that near my early evening flight out.

The trend toward denying passengers checked luggage did the usual damage on this flight: slow boarding, people cramming bags that did not fit into overhead compartments. I wonder if European flights have different carry-on size limits; mine was one half inch too big to fit in most of the bins, and so had to be put in sideways, taking more space than necessary. This feels like a classic case of engineers rounding measurements to a convenient number in their native unit and saying ‘close enough’. 33 cm totally equals 12 inches, right? Close enough.

I’m sitting with two delightful women, on their way to Iran; at least one’s a journalist, and I’ve not asked for more detail from the other. They’re kind and fun, and shared chocolate with me. I hope they make their connection — they’ve ten minutes between, and a whole day’s wait if they miss it. We bonded over the difficulty of loading bags in the bins; theirs misfit the same way mine did, and even my height and ability to use brute force to close things didn’t do the job.

I can’t sleep, since the flight’s a little bumpy and the staff keep nudging my elbow in the narrow aisles. A 777-200 is a lot nicer than most planes I’ve been on recently, but it’s still arranged more for sheer numbers than for comfort.

I watched two films.

Since Bailey couldn’t come with me, I got to see ‘Guardians of the Galaxy’, which was fun, but I’m starting to get frustrated with the arbitrary plots of so many movies. I feel like in the past, directors at least tried to satisfy those of us whose sense of disbelief, while suspended, still works. Lately they do not. Every device serves the plot, not internal consistency, and every failure is arbitrary to the same purpose. It’s the kind of lazy storytelling that leads to killing characters for emotional impact, rather than driving situations where tough choices have to be made. The first bad guy was black, and the green woman had echoes of Star Trek’s Orion slave girls. The criticisms of her character development are true: she’s almost all prop to the men in the film, even if she does kick ass initially. There’s even the clichéd rescue scene toward the end.

Second came ‘Lucy’. It’s as if ‘Kill Bill’ merged with ‘2001: A Space Odyssey’ and ‘I, Robot’. Bad science, but at least somewhat internally consistent. Bizarrely philosophical even while there’s wanton killing on screen. I can add it to my long, long list of movies that make me say ‘huh’ at the end. I do wish there were a convention for ‘humans have potential’ other than ‘humans only use 10% of their brain’. It’s so trite at this point that it makes me angry.

0500 UTC

Now we’re over the English Channel, heading across France. My brain is trying to comprehend the path we’re taking, given the Mercator projection of the map they display it on and the 11,000 m altitude. One part of me wants to round that down to ‘well, just barely off the surface’, and the rest thinks it’s unfathomably high, backed up by the -50 °C outside the aircraft.

At that altitude, pressure should be low enough that sustaining large-mammal life is almost impossible without hibernation-level metabolic change. The temperature, too, would kill within minutes. I wonder what the air pressure is inside the cabin. I’ve never found a good physical indicator using my body, but my ears have popped continuously for the last six hours.

0600 CET

I can barely see the sunrise, I think over the Rhein plain, from my seat since I’ve an aisle. It’s pretty, a dull orange and deep blue, separated by an even deeper blue layer of clouds, slowly lightening.

I’m not sure whether jetlag will hit me or not — I’ve been up for 20 hours or so, but feel like it’s morning. I hope that this bodes well. We’ll see if I make it to tonight. I think there’s a speaker’s dinner, or some other gathering leading up to the conference tomorrow. I should check, but there’s no internet connection in-flight, and we’ll see what happens on that front when I get to Budapest. Maybe I can nap on the last flight and arrive truly refreshed. We’ll see if I get a window to lean against. Chances are aisle or middle though. If a Dash 8 has a middle.

0730 CET

I’m most worried now about whether I can get a SIM card and enough data service to be useful while I’m at the conference. I suspect it’ll be fine, but likely a little annoying.

Nodevember 2014 - Sunday

@bejonbee talking about React.

He works for an interesting group of people — not the usual consultancy, but a wealth-management and self-sufficiency group, doing education. Interesting premise.

Mostly a 101 to react, but nice to see someone more familiar tie some things together.

The implications of the DOM-diffing behavior are really interesting, in that modifications made outside of React are preserved, not overwritten — React really does play nice with others.

JSX is interestingly implemented; solid work by people who really understand parsers, but they’re somewhat simplistic about the lexing, so class is a reserved word, meaning the HTML class= attribute had to be renamed className=.

@funkatron is giving a talk on “Open Sourcing Mental Illness”.

His talk’s been recorded 14 times(!) and he has copies up at funkatron.com.

Comparing eye disease — needing corrective lenses — to mental illness. Awesome! “How many people here have just been told you need to ‘squint harder’?” No hands.

“How many of you would feel comfortable talking to a coworker you knew pretty well about having cancer?” Most hands.

“How many would feel comfortable with talking about your mental health?” Maybe 1 in 5.

Moderate depression has a similar level of disability to MS or severe asthma.

Severe depression has a similar level of disability to quadriplegia.

“You are so brave and quiet; I forgot that you were suffering”
–Ernest Hemingway

Watching @derickbailey‘s talk on development environments and getting out of IDEs, looking for advice to give to developers at PayPal.

I just realized that Grunt looks a lot more amazing if you’re coming from a heavy IDE with lots of features but no flexibility. It’s amazing what perspective looks like!

And now to @hunterloftis “We are all game developers”

He built a game for a major music artist in the UK in three weeks, using software rendering. Great art, and it integrates the music.

Now: 1.7 billion WebGL-capable devices were shipped last year. It’s available on iOS 8!

“We avoided a lot of work by avoiding state” — since most rendering is built with pure functions from a tiny, immutable state, lots of things like animation speed came out naturally. Then add websockets and the state from one user’s view controls a renderer on the server. Super clever.

requestAnimationFrame can drop frames, so time has to be calculated and perhaps multiple simulation ticks (to assign position and state) to keep time constant and not dependent on the computer’s speed. He points out that this affects testability, and rendering and simulation have to be decoupled.
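
The decoupled, fixed-timestep loop he describes can be sketched like this (the names and the dt value are illustrative):

```javascript
// Fixed-timestep loop sketch: simulation advances in constant dt ticks,
// however irregularly frames arrive, so behavior is deterministic and
// independent of machine speed.
const DT = 1 / 60; // seconds per simulation tick

function makeLoop(simulate, render) {
  let accumulator = 0;
  let state = { x: 0, vx: 1 }; // toy state: position and velocity
  return function frame(elapsedSeconds) {
    accumulator += elapsedSeconds; // wall-clock time since last frame (varies)
    while (accumulator >= DT) {
      state = simulate(state, DT); // possibly several ticks per slow frame
      accumulator -= DT;
    }
    render(state); // drawing never advances the state
    return state;
  };
}
```

In the browser, frame would be driven by requestAnimationFrame with the measured elapsed time; since simulate is a pure function of (state, dt), the same input sequence always reproduces the same bug.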

Simulate faster than rendering: otherwise, tearing and sync problems.

Made the audience laugh with an awesome bug: trying to simulate rigid-body physics on a simple box which, in trying to make it behave right, flaps around the screen like a bird, squishing and flopping before it pops into final shape. The take-away, though, is that if physics is written deterministically and doesn’t depend on the render loop, the bug is repeatable — and it’s possible to know the bug was fixed, since the simulation is deterministic.

And techniques for controlling state and using immutable objects apply greatly to DOM rendering and apps too. React uses determinism to great effect.

talks I missed

I’m bummed that I’m missing @thlorenz‘ talk on heap dumps for mere mortals, but I’m making a note to have good conversations with him after the fact. (He’s already got his slides up!)

I heard that @cdlorenz’ “Runtime Funtimes”

@nicholaswyoung‘s (Original Machine) talk on microservices.

“We learn more from things going wrong than things going right”

Divorced the CMS domain from the podcast feed domain, and separated out the web player software.

“When we release a new show, we get 10,000 requests to download the show in the first few seconds. Ruby wasn’t the utopia we thought it was.”

“I build a monolith because that’s what Rails conditions you to do. The admin dashboard, the analytics, the feed generation all in one app. You put your controllers in your controllers directory and your views in the views directory, and you end up with a monolith.”

“Our media services would hold requests and it would take users 4 seconds to load our application”.

“I didn’t initially know how to make node work. First thing I did was write a monolith. I thought the runtime would save me. I’m smart.”

Seeing the core broken up into such isolated modules, you get the idea that breaking things up is a good idea.

“It’s an application. I guess you could call that a service.”

Nodevember 2014 - Saturday

Nodevember 2014 kicked off this morning in Nashville after a super fun Nodebots meetup last night.

@elizabrock‘s keynote was a fantastic review of where we came from and who we are as an industry: Not all computer scientists, but doing computer science; not mathematicians, but often doing mathematics. Not just engineers, but doing engineering. Not all artists, but doing art.

We’re training the fourth generation of programmers now: The women who programmed the early computers during World War II could be our children’s great-grandparents.

I missed @jeffbski’s talk on hardening Node.JS for the Enterprise, but the slides look great and I heard great stuff. Also fighter jets, befitting a talk from someone who was USAF!

Good talk from @ifandelse on ES6. The future is now, for sure. Coming up fast.

@mrspeaker gave a great, fun, funny, reference-filled talk on Gonzo Game Development. Lots of great quotes, and talk about the line between engineering and art.

@katiek2 gave a great intro to Nodebots, and what’s needed to run a good meetup. It’s tempting to do one in Boston / Somerville. Totally @rwaldron‘s turf, and would probably be awesome.

My own talk went well, though my voice gave out part way through and I ran fast. I wasn’t planning on questions, but I guess if you engage your audience, they’ll ask ‘em anyway.

A good intro to CouchDB from @commadelimited.

And now, “Make art, not apps” by @thisisjohnbrown — simple algorithms! Relating touch to display. Looking up file formats so you can aim glitchiness at interesting places when you corrupt data. Simple trials of “let’s see where the code takes us” become best-of-show art pieces.

He made a “plinko” board (like pachinko), and wired it up to a projector and board, and used it to trigger particle animations. The demo gets “Aaaaahs” from the crowd. Super simple effect but totally wow.

He showed off Iannis Xenakis’ music from the 1950s generated from gas molecule interactions, and Frieder Nake’s art created with Markov chains. People have been doing wonderful art with code and algorithms for a long time!

And ending with a demo of the NeuroSky device and the Neuronal-Synchrony library together, reading brainwaves and generating output, both audio and visual — imagine that combined into a multi-person dance party!

Homework: Do your own art. Lots of options!

  • Uncontext: Structured data source without the context or rules for how they’re generated, but a source of data to do art with.
  • p5.js

Walter Benjamin’s 1936 essay on mechanical reproduction considers what it has done to art; change ‘mechanical’ to ‘digital’ and you have a manifesto for creative coding.

Time for a party!

Not a moment too soon

It wasn’t a moment too soon — and in fact a few too late — that I moved my site from Wordpress to Hexo. The other two dozen Wordpress sites on the server — not just my friends’ blogs — running versions from 3.8 to 4.1, were broken into, and scripts were installed that would send mail. Some interesting features of the hack, though:

  • They installed PHP scripts with innocuous-sounding names like gallery.php inside Wordpress plugins and themes.
  • They installed a .so file, loaded it into the /usr/bin/host program with a dynamic loader trick, then deleted the .so so it’d be hard to find. This created a daemon used to send junk mail, and quite efficiently too.
  • Having PHP record what URL was posted to when sending mail is the best thing ever for tracking this down.
  • lsof is great for verifying that things are shut down.
  • They wrote to every directory that they had privilege to that was web accessible. Very adept hack.

Ugh.

Blog Migration

I just moved my blog over from a Wordpress installation to Hexo, in a fit of frustration after five friends’ blogs were broken into and used to spam, apparently via a hole in Wordpress’ Jetpack. The danger of leaving complex software unpatched for more than a day has become impossible to ignore, and I don’t use most of the features of Wordpress anyway: I’ve increasingly had an allergy to comments and most other dynamic features, and I author in Markdown regardless. Being able to do that tidily in vim makes me happier than editing in a web browser anyway.

I chose Hexo because it had a working migrator to import a dump from Wordpress; no other reason, really, but its design works well enough (even if it is slow to generate the static files given my nearly 1500 posts). URLs were preserved with little hackery, too, so I didn’t break the web in the process.

I still want something better: I’d be happy without pagination, to avoid rebuilding a 1500-entry latest-first archive every time I add a post; style files don’t seem to get updated properly (probably a more trivial bug that I could fix); and something more directly in tune with the dependencies between the source files and the generated pages would be delightful. Maybe I need to make something with Broccoli, or even just make(1) or tup.

Telcopunk

So we’ve had steampunk and dieselpunk, cyberpunk and seapunk.

Me, I’m going to call my aesthetic ‘Telcopunk’.

I favor practicality.

I believe in universal service and universal access.

Utilitarianism rules.

Research is important.

Unions are good.

Work locally. Think globally.

Distance is expensive.

Connecting people is important.

Information is and should be a primary concern of industry.

Designs should be made for durability.

An important job is building and maintaining infrastructure.

Privacy — but not security — is a core value, and standards of conduct reflect this.

Jeans. Work boots. Gloves.

Conceive things, then make them.

'How do I get good at programming?', I'm asked.

Read. Write. Publish. Repeat.

And in general, people’s opinions are meaningless without data to back them up. So ignore the haters.

Ignore the people saying you’re doing it wrong unless your job depends on it or they have good reasons.

People will tell you “javascript will die” or “ruby is just a fad”.

Ignore the haters.

But also ignore the haters who say “java is stupid.”

And ignore the haters who say “OO is wrong”

And ignore the ones who say “OO is the only way” or “OO is the best way”, too.

But listen to the people who say “have you considered a different approach?” Those are the good ones.

Strong suggestions for structurally combatting online harassment

Craig Newmark asked for suggestions and here’s some things I came up with:

  • Create block functions that actually work and completely block all interaction with a user.
  • Create a mute function that doesn’t get tangled in block.
  • Respond to abuse reports, generating at minimum an inter-user block and, when reports involve any kind of escalation by the abuser, a block of that user from the service (or another highly quarantining action).
  • Encourage use of pseudonyms rather than complete anonymity, if only to encourage a stable handle to block by.
  • Spam-fighting-like statistical models to detect outlier behavior — repeated first contacts by someone who’s been reported as harassing is one particularly significant sign. Being proactive and confirming with the harassed user might even make sense. “Is @username bothering you?”
  • Allow communities to segment when possible, rather than encouraging all users to share one single graph.
  • At least three-level privacy controls per account: Public, initial contacts restricted to friends, and all contact restricted to friends.
  • Create transparent policies and processes, so we can know how effective the service will be in supporting us if harassed, rather than shouting into the void, wondering if anyone actually reads these reports. If the policies or processes change, say something!
  • Do use decoy selections in report abuse forms, but keep it simple: “This is annoying” vs “this is dangerous” can be differentiated, and the decisions about how to handle those should be different.
  • Don’t patronize the people you’re trying to protect. Leave choices in the hands of those needing protection when it’s possible. For tools for protection that have downsides (social cost, monetary cost, opportunity cost), let those needing protection opt in or opt out. If the tools are independent of each other, let them be chosen à la carte.
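
The “repeated first contacts” signal from the list above could be sketched like this; the thresholds and data shapes are entirely illustrative:

```javascript
// Sketch of the "repeated first contacts" signal: a user whose messages
// are mostly first contacts, and who already has harassment reports, is
// an outlier worth flagging. Thresholds here are illustrative.
function firstContactScore(events) {
  // events: [{ from, to }] in chronological order
  const seenPairs = new Set();
  const stats = new Map(); // from -> { total, firstContacts }
  for (const { from, to } of events) {
    const pair = `${from}\u0000${to}`;
    const s = stats.get(from) || { total: 0, firstContacts: 0 };
    s.total += 1;
    if (!seenPairs.has(pair)) {
      seenPairs.add(pair);
      s.firstContacts += 1;
    }
    stats.set(from, s);
  }
  return stats;
}

function flagOutliers(stats, reports, minMessages = 5, minRatio = 0.8) {
  const flagged = [];
  for (const [user, s] of stats) {
    const ratio = s.firstContacts / s.total;
    if (s.total >= minMessages && ratio >= minRatio && (reports.get(user) || 0) > 0) {
      flagged.push(user);
    }
  }
  return flagged;
}
```

A flagged user might then trigger the proactive check-in above: “Is @username bothering you?”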

And a rule of thumb:

If you spend less time fighting harassment than you do fighting spam, your priorities are wrong. If you take spam seriously and don’t take harassment seriously, you’re making it worse.

An unofficial mission statement for the #node.js IRC channel

This is the mission statement I seek to uphold when I provide support on the Freenode #node.js channel.

To support users of node.js and their related projects to create a collaborative,
creative, diverse, interested, and inter-generational sustainable culture of
programmers of all skill levels, with room and encouragement to grow.

One of the great criticisms of IRC channels for software support is that they’re often the blind leading the blind. Experts have little direct incentive to jump in on half-formed questions, and it takes some real skill to elicit good questions that can be answered accurately. There’s also some incentive for some members to use questions to promote their favorite tools, and to show off clever answers not necessarily in the best interests of the person asking.

The other problem is time of day — American nights and weekends have a lull, and questions asked then are often left to the void. Hard-to-answer questions — vague and incomplete ones especially — are the easiest to ignore. Let’s do the hard work to encourage good discussion, even among the less carefully asked, hurried questions.

We can do this and be unusual among technical channels. We’ve the critical mass to do it, and we’ve a great culture to start with. Let’s keep it up!

A Tale of Two Webs

originally posted on Medium

There’s a sharp divide between the people who make the Web, all of us, everywhere, and Silicon Valley Tech.

It’s a cultural divide I’ve seen come up again and again and again in discussions of tech culture.

On one side, we have the entitled, white frat-boy mentality of a lot of Silicon Valley start-up companies, with a culture going back years in a cycle of venture capital, equity, buy-out or IPO, repeat; a culture often isolated from failure by the fact that even the less amazing exits are still a solid paycheck. I suggest that this grew out of American industrial culture, the magnates of the nineteenth century turned inward into a mill of people all jockeying to be the next break-out success.

On the other side, we’ve the people who make the Web outside those silos. The lone designer at a traditional media publisher, doing the hard work to adapt paper print styles to the rapid publishing and infinite yet strangely shaped spaces of browser windows. The type designers who’ve now made their way out of lead blocks and work in Bézier curves. The scientist at CERN who realized that if every document had an address, scientific information would form a web of information. They don’t labor in a Tech Industry, they labor in their industries — all of them — connected by a common web.

In media, it appears as one giant “Tech industry”, and perhaps this is bolstered by the fact that a great number of people don’t know what a lot of us do — a software developer and a designer are so much the same job to someone who’s not paying attention to the details.

And yet, on Wednesday, a great many people turned their Twitter avatars purple in support of a family who’s lost a child to cancer. Looking over who they were, something dawned on me: They were some of the best and brightest on the Web. Authors, developers, designers. The people who know and know of @meyerweb are the people who make the Web. This is the Web I care about, have always cared about. It’s the web of caring and sharing, of writing and collaborating. We take care of our own.

In skimming over the people who’ve gone purple, I notice one thing: The bios that list locations say things like “Cleveland, OH”, “Chicago, IL”, and “Cambridge, MA”. “Bethesda, MD”, “Phoenix, AZ”, “Nashville, TN”. “1.3, 103.4”. Their titles are “type designer”, “map maker”, “standards editor”, “librarian”, “archivist”.

And far, far down the list, a few “San Francisco, CA” and “San Jose, CA”, “Software Developer” and “Full-stack engineer”.

2112

You do occasionally visit Boston Public Library, yes?
If not, get on it! You were raised in and on libraries. They are in your blood!

You called me out rightly on that one! I’ve never actually been inside the BPL
— it’s on the Green line, the cantankerous part of the subway — and I just
haven’t been out there. Somerville’s is pretty limited — not nearly as big as
Englewood’s library, and it’s got a selection that’s definitely not aimed at
me.

I just saw the Arlington library night before last, actually, and it’s this big
huge modern building, it reminds me of the Koelbel library we used to go to.
It’s the first one I’ve been so excited to try to go to in a while.

It’s funny that you bring this up right now. I’ve been reading article after
article for the last year, but especially in the last weeks by librarians and
book publishers and authors talking about what the role of libraries is in a
world where it’s relatively easy to get ahold of the actual text anywhere and
anywhen.

There’s a whole argument that libraries are obsolete; a lot of this came out of
the crazy world of the California tech scene, where there’s this huge
Libertarian ‘government is evil, technology will solve all our woes’ thinking,
but that tends to assume that everyone is on average white, male, and upper
middle class. They’ve got a point, though, that for pure access to thought and
information, the Internet has done something unprecedented.

But libraries serve a few other purposes that e-books and the Internet can’t solve.
So many of my queer friends pointed out that libraries were their refuge as
kids and teenagers, from a world that was pretty intent on being horrible to them.
Often they come from families that were more than borderline abusive, and the
library was their safe place. There’s a whole generation of us for whom that rings
true, and kids coming of age now less often say that — but there’s never been
anything to replace that need for them.

Libraries are one of the few first-class public services, one of the few that
historically has ignored what economic class you’re from and has just provided
a service to everyone. That’s starting to change in some ways — inner city
libraries are starting to think of themselves as intervention points for kids
who won’t have access to reading before school, for poor families who can’t
cross that ‘digital divide’ and get on the Internet, they’re buying computers
and setting up more and more space for non-book-oriented services. They’re
focusing on the poor around them and abandoning the universal service model.

(I read a great quote today — “In Europe, public services are for everyone. In
the US, public services are for the poor who can’t afford a private
alternative” — and libraries are one of the few services where that’s not been
true.)

I’ve never been too keen on the model of librarians-as-authorities to appeal to
for information, but even so, having someone who knows the information out
there and can guide you is super important — it’s the role teachers really
should play, but don’t.

There’s a lot of thoughts on this rattling around in my brain trying to escape
coherently, but nothing’s made it out beyond this yet, and certainly not me
figuring out how I fit into it yet. Libraries are in my blood, but I’m not sure
if the thing I’m after is there, or if it’s something more abstract that I’m
chasing.

Anyway just wish we could be sharing another book together.

I’d like that, a lot. I think one thing that’s been lost in the mostly
fast-paced tech world is sharing thoughts about a big piece of writing. I
comment on blogs and articles, and discuss on Twitter a lot, but books don’t
have the convenient handles where you can just link to them and highlight
something and say “THIS is what’s right about this”. I want to share some of
those things and it’s not happening as much as it used to. I miss sharing them
with you!

Aria

Recipe: Storm in the Garden

Ingredients

  • 10 ml lavender vodka
  • 10 ml orange vodka
  • 10 ml hibiscus vodka
  • 200 ml ginger ale
  • ice

Instructions

  1. Drop the ice in a pint glass, pour in the ginger ale. Add the vodkas layered gently on top, ending with the bright red hibiscus.

Preparation time: 2 minute(s)

Number of servings (yield): 1

My rating 5 stars:  ★★★★★

Having vs. Owning | ps pirro

Sometimes people get confused about the difference between having something and owning it.

“I have an iPod” signals ownership. “I have a dog,” or a child, or a spouse, implies a relationship, a mutuality between sovereigns. Things get messed up for us, and for those with whom we are in relationship, when we confuse the one for the other.

Ownership denotes control. Relationship is wrapped up in reciprocity.

Ownership is unilateral. In relationship, something is always owed to the other. Always.

As a general rule, if a thing is alive — and for the animists among us, this includes pretty much everything — what you have is a relationship. Even if the law says otherwise.

Having vs. Owning | ps pirro.

Some thoughts on configuring web servers

If there’s one thing that has always annoyed me about running a web hosting and services business, it’s the low-level details of configuring virtual hosts in Apache and every other web server on the planet.

It’s all scriptable, but it’s error prone and completely graceless.

Users want to be able to define their own rules.

Apache configuration syntax, when included from user-supplied files, can break the entire configuration. It’s not dynamic. Reloads in a hot web server can be expensive.

Nginx and Lighttpd are marginally more consistent, but still stink at delegating.

Configurations are sometimes order-dependent, sometimes evaluated root to leaf node, sometimes leaf node to root, and sometimes require recursing into the request handler to make decisions based on “what if” scenarios.

I’d willingly trade a lot of power in configuring a web server for something simple and able to be delegated to users.

There are some basic requirements:

  • Ability to configure redirects (and custom responses) for specific URLs and for entire subtrees of URL space. (I’m of the opinion that this should often not be handled at the application layer, since it’s most often needed to deal with changes to URL structure during application upgrades and transitions.)
  • Ability to map URLs to handlers located within the document root, without exposing the filenames of those handlers. (Thank you, PHP, for moving us backward 5 years in URL structure in an effort to teach us how simple deployment should be.)
  • The ability to direct entire subtrees to a handler.
  • The ability to direct entire subtrees to a handler if the request is not satisfiable with a url-to-path mapping.
  • The ability to direct requests to a handler if url-to-path mapping yields a file with a particular suffix (or perhaps indirectly via MIME type)
  • The ability to tweak url-to-path mapping if url-to-path mapping yields a directory.
  • The ability to add a variable passed on to a handler at any point in the subtrees of URL space, including setting it to a value from any part of the request headers, including a fragment of the URL.

And operationally, I want to be able to delegate the configuration of entire virtual hosts and preferably also subtrees of URL space to users, and have them only able to break the parts delegated to them.
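As a sketch of what a simple, delegable configuration could look like — every name here is hypothetical, not the syntax of any real server — a longest-prefix routing table over URL space keeps each rule scoped to its own subtree, so a user’s mistakes can only affect the subtree delegated to them:

```python
# Hypothetical declarative routing table: each subtree of URL space
# maps to one rule, and the longest matching prefix wins.
RULES = {
    "/old-blog/": {"redirect": "/blog/"},       # redirect an entire subtree
    "/blog/":     {"handler": "blog_app"},      # direct a subtree to a handler
    "/":          {"static_root": "/var/www/html"},
}

def resolve(path):
    """Return (prefix, rule) for the longest URL prefix matching `path`."""
    best = max((p for p in RULES if path.startswith(p)), key=len)
    return best, RULES[best]
```

Delegation then falls out naturally: handing a user control of `/blog/` means letting them edit only the rules whose prefixes live under `/blog/`, and nothing they write there can shadow another subtree.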

1944 Red Velvet (cup) Cake

This is a red velvet cake made in a WW2 era way, using beets for moisture and color. The trick to getting good color rather than mud is to keep the batter acidic: lemon and buttermilk and a complete lack of alkaline leavening are what make this recipe unusual.

Boil two medium beets and puree. (You need one cup.)

Cream two sticks of butter with a cup of sugar. Beat in two eggs as completely as you can.

Mix two tablespoons of lemon juice with 3/4 cup buttermilk. Add a cup of the beet puree.

In a bowl, mix a cup of flour and a quarter cup of natural (non-Dutch process) cocoa powder. (I used Hershey’s).

Beat the three mixtures together, adding some of the butter, egg and sugar mixture alternating with some of the beet and buttermilk mixture.

Pour into greased cupcake pans and bake at 350°F until a toothpick or straw comes out clean.

This will be a soft, moist cake, almost custard. It released from the pan easily for me, though my cupcake pans are cast iron and a little unusual.

I used most of my batter as a layer under a cheesecake, but that’s a story for another time.

The world is a complicated place

Part 1 of an ∞ part series

:; cal 9 1752
   September 1752
Su Mo Tu We Th Fr Sa
       1  2 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30

Cephalopod Surprise Chowder

Chop four slices of bacon and start cooking them over medium heat.

Chop one small onion, three small carrots, two sticks of celery.

Add them to the cooking bacon, with the fat.

Let them cook until the onions start to go transparent.

Add a cup or two of beer.

Add 3 or 4 fingerling potatoes, cut into small bite sized pieces.

Add water to cover and let this simmer until the potatoes are soft.

Chop four or five small squid into half-inch square pieces. Tentacles can be left in larger pieces.

Put these in a pan with a few tablespoons of melted butter. Cook briefly until the squid firms. (Ten or fifteen seconds, thirty at most.)

Add the squid to the simmering potato mixture.

Add a cup or two of small scallops, and a cup of small shrimp.

Cook a roux of equal parts butter and flour until the flour is golden-brown.

Add it to the simmering mixture, whisk to combine, and remove from the heat.

Add 3/4 cup of heavy cream, or 1 1/2 cups of half and half.

Let stand for a bit.

Season with salt and pepper, and add a quarter cup of chopped fresh dill.

Simmer until warmed through again. Don’t let the scallops overcook.

Let cool slightly, and serve.

A simple primer on cryptographic primitives

A field guide

Or “don’t trust anything that screws these up even slightly.”

Key

A private, hard-to-guess piece of information, meaningless on its own, but used to secure other pieces of information.

Public Key / Private Key

Specifically, these are keys with certain properties: they come as a pair, usually derived from a couple of large prime numbers. (Their product is easy to compute but mathematically hard to factor, which is where their security comes from.)

Things encrypted with one key can be decrypted with the other and vice versa.
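That encrypt-with-one, decrypt-with-the-other property can be shown with textbook RSA and deliberately tiny primes — this is a toy for illustration only; real keys use primes hundreds of digits long:

```python
# Toy RSA with tiny primes. Never use numbers this small for real security.
p, q = 61, 53
n = p * q                   # 3233, the public modulus
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent (modular inverse; Python 3.8+)

m = 65                      # a "message" encoded as a number < n
c = pow(m, e, n)            # encrypt with the public key
assert pow(c, d, n) == m    # decrypt with the private key

s = pow(m, d, n)            # "sign" with the private key
assert pow(s, e, n) == m    # verify with the public key
```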

Hash

A cryptographic hash function (which is often based on an encryption function, but not always) takes a piece of information of any size and turns it into a fixed-length token that represents it, in a hard-to-fake way. Even small changes to the input will make a cryptographically strong hash function change its output entirely.

Some example hash functions: MD5, SHA1, SHA256, SHA512. (MD5 and SHA1 are no longer considered safe against deliberately engineered collisions.)
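Python’s standard library makes the fixed-length, small-change-big-difference behavior easy to see:

```python
import hashlib

# Two inputs differing by one character produce unrelated digests,
# and both digests are the same fixed length regardless of input size.
d1 = hashlib.sha256(b"hello world").hexdigest()
d2 = hashlib.sha256(b"hello world!").hexdigest()

assert len(d1) == len(d2) == 64   # 256 bits rendered as 64 hex characters
assert d1 != d2                   # no resemblance between the two digests
```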

Signature

The result of using a key and a hash function together on a piece of information to give some proof that the information wasn’t forged. If the signing key is one half of a public/private pair, then the public key can verify that the information was signed by the private key.

Certificate

A signature on a public key, and usually some ID information. If the certificate was signed by a trusted party (trust is a complicated thing, though), then there’s usually some assurance that information signed by the private key that matches the certificate is from a known source. Of course, can you spot a forged ID?

HMAC

A way of hashing information with a key securely to form a signature that can’t be altered. It turns out that if you just prepend the key to the data and hash that, an attacker can keep appending data and keep running the hash function from where it left off, and the signature will still look valid (a length-extension attack). HMAC mixes the key with the information being signed in a way that prevents this.
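Python’s `hmac` module implements the construction directly, so you never have to hand-roll the naive (and extendable) `hash(key + message)` form:

```python
import hashlib
import hmac

key = b"secret key"
msg = b"amount=10"

# Naive MAC: hash(key || message) -- vulnerable to length extension.
naive = hashlib.sha256(key + msg).hexdigest()

# HMAC mixes the key into the hash twice (inner and outer passes),
# which blocks the extension attack.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Verification recomputes the tag and compares in constant time,
# which also avoids timing side channels.
expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)
```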

Salt

When you’re using a hash to make information hard to brute-force, you add randomness to the thing you’re hashing, so that an attacker can’t just build a list of all the likely inputs and check whether you have them. Since this changes the hash value, you have to store the salt somewhere the thing comparing the hash can use it the same way, so a salted hash often looks like $salt$HASH, where HASH = hash(data + salt). Usually this is combined with a very slow, hard-to-compute hash function, so you can’t just whip through all the possibilities on a fast computer in a day or two. Computers keep getting faster, though…
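A minimal sketch of salted, slow password hashing with the standard library’s PBKDF2 (the `$`-separated storage format here mirrors the convention above, simplified for illustration):

```python
import hashlib
import os

password = b"correct horse battery staple"

# A random salt defeats precomputed lookup tables; the iteration
# count makes each individual guess deliberately slow.
salt = os.urandom(16)
digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
stored = salt.hex() + "$" + digest.hex()   # salt travels with the hash

# Verifying repeats the exact same computation using the stored salt:
s, h = stored.split("$")
check = hashlib.pbkdf2_hmac("sha256", password, bytes.fromhex(s), 100_000)
assert check.hex() == h
```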

TL;DR

Key = random and unguessable; key + hash = signature; signature + public key = certificate; hash + salt = hard-to-crack hash.

Quote: Design Philosophies of Developer Tools

One of the nice things about Git is how its internals are both exposed for the world to see and thoroughly documented. We can easily write scripts to automate common tasks or create different workflows. With a bit more effort, we could even write new tools that integrate with the Git suite. These tools can do things that Git’s authors never intended, as long as they follow the documented repository structure. Git isn’t so much a version control system as the means to construct one.

And

All of the Ruby development tools have independent release cycles, and they don’t seem to plan or coordinate with one another in advance of each release. Integration testing is left up to the users.

Design Philosophies of Developer Tools (via Digital Digressions by Stuart Sierra)

Some very good thoughts.