Reading Week Ending 2/4

Yes, reading includes watching videos sometimes.

[…]suggests something of what it has truly meant, over the centuries, for people to read. This is all about paying attention, listening to what others (and not only human others) have to tell, and being advised by it. In Old English, the word ‘read’ originally meant to observe, to take counsel, and to deliberate (Howe 1992). One who has done so is consequently ‘ready’ for the tasks ahead.

On Being Tasked with the Problem of Inhabiting the Page, via Ruth Malan, referencing Nicholas Howe, “The Cultural Construction of Reading in Anglo-Saxon England” in The Ethnography of Reading (1992).

Reading January 2024

I’m off my regular routine due to travel, so I’m sparse, but here’s some interesting bits.

Reading Week Ending 12/30

Things I’ve been reading this week:

  • “Facts, frames, and (mis)interpretations: Understanding rumors as collective sensemaking” by Kate Starbird at the Center for an Informed Public @ UW. An excellent article on how we make sense of evidence and rumors and how disinformation works collectively.
  • “Value Capture” by C Thi Nguyen. An excellent philosophy paper sussing out a concept I think people would do well to think about explicitly.
  • “Presentation, Diagnosis, and Management of Mast Cell Activation Syndrome” My husband has MCAS, and I suspect a really significant number of people do too. The literature paints it either as very rare … or about 1/5 of the population, depending on how you set thresholds and understand the disease. It’s easy to over-fit to, so it’s worth reading skeptically, but man do I ever know a bunch of vaguely but seriously chronically ill people who fit this mold. It seems to affect autistic people and people with ADHD more than most? It might be part of the bendy-spoony-transy syndrome.
  • “The C4 model for visualising software architecture”
  • “Untangling Threads” by Erin Kissane. Thoughtful work as always, and well worth a read in thinking about how federated social media should work when Meta the company starts getting involved.
  • “The Dark Lord’s Daughter” by Patricia C Wrede. A cute fantasy book so far. Aimed at kids, but she was a favorite author of mine as a kid and is now a favorite writer of writing advice as an adult. I like to keep tabs on what she’s up to.
  • “Hella” by David Gerrold, for a book club. Fun so far. Space colonization on a planet with dinosaurs, with an autistic protagonist.

On queerness and representation

A writer friend of mine wrote a pretty good essay during his July blog-post-a-day ambitions about queer protagonists. It’s a quick read, and he does a pretty good job of answering “why so many now?”: representation matters. After so much exclusion, with men, usually white, at the helm of every industry — and this includes the commercial arts — there has been a moment in the sun for queer writers, and so many of us have been honing our craft on fan fiction, much of it exceptional, and are now bursting out, refulgent, into an industry which, while it still centers those who are white, straight, and men, has given us enough space to at least be visible and successful for the time being. The wheels of justice, righteousness, and recompense have aligned, too little too late, but still: we’re here.

They write:

Genre fiction has always been where societal boundaries are stress tested first. Genre fiction is where progressive voices get to practice. When the stories are exploring what could have been or what might be, sometimes the narrative dives straight into what should be.

Presently, there should be more queer protagonists. There should be more queer writers, writing queer protagonists, celebrated by audiences, queer or otherwise.

It’s not lost on me that we get lumped into ‘progressive voices’ — and we are — but we’ve been here for a very, very long time. We’re dissenting voices, hidden voices, erased voices, progressive voices, voices of people stuck in a conflict that has moved on without us, voices of the long-marginalized. All of these are long-standing social processes, not a new phenomenon or a new frontier being carved out suddenly. We’ve always been here. Fan fiction itself, the refuge of writers creating what the mainstream will not give them, gets much of its current structure from the idea of ‘slash fiction’ (gay pairings) that came about specifically in Star Trek fan fiction, mostly from women and queer writers.

There should be more queer protagonists: when I was growing up, it was said that 2-4% of us are queer; I heard some people say 10%, and at the time that sounded overstated, but now I’m convinced it’s deeply underestimated. Truth be told, as I come to understand the processes of queerness, sexual attraction, identity formation, oppression, and marginalization, it’s now my habit to see the proportions not as some fixed number, but as the result of processes of how we, collectively, conceive of ourselves. If more than two thirds of us can figure this stuff out by the messy process of living it, a writer can figure it out by listening. These are dynamic systems, and with the increased visibility, whole new groups of people come to understand themselves in new ways. And this is good. It’s an alternative to the ugly truths about how we have conceived of ourselves before: whiteness was created to justify slavery. Straightness was created to reinforce ideas of family that support systems like capitalism and corporate dominance. These aren’t neutral defaults, but evolved systems that benefit particular people.

But more than that: if genre fiction is the place of imagining a new future or alternate past, queerness is itself a subject for genre fiction. It is the place we imagine new ways of being. It does a disservice to the idea of genre fiction to rope off some pieces as a do not go zone. We are, in fact, the sort of people who figure this stuff out, repeatedly, for character after character. We must open our future and look at it honestly.

There are lived experiences that I cannot claim, experiences that many queer readers would expect from a story that is meant to speak to and represent them. It would be wrong of me to try and write a queer story. There are other writers that can write that, and we should make sure there is room for them to do so.

They’re not wrong about the last part — we have been denied too long, and the room to do so is much needed — but I want to challenge this: like anything else in a market, it’s often not as simple as competition for a place. Instead, good stories in conversation with each other, and new entries and new aspects of these things, create markets, expanding both access and success for everyone within them.

It’s certainly true that straight people, almost entirely white and men, have dominated the industry, and stand a better chance of being published than their peers who are not. But at the same time, it’s also a matter of lifting each other up. It’s not writers who are in the way, it’s publishers and the power structure that filters so terribly. That’s the place to fight: with success and publication, we get the opportunity to recommend and include others. We lift each other up. There’s a tendency to gate-keep, especially when we feel like we are spending our reputation to uplift others. That follows from the nature of the industry, but we can upend it. Instead of looking to the power brokers and the decision makers for what’s good, we can listen to each other, and to the marginalized among us, for the stories that aren’t being told, aren’t being published, and we can both write them and bring the authors already doing so into the light.

I can include queer characters in my stories, though. My main character can be queer, as long as I don’t make that the focus of the story. Some folks are gay. Some folks have dark hair. Some folks have gluten allergies. These are descriptors, and not necessarily character defining traits.

It can be a little confusing when a story is appropriation, and when it is representation. When in doubt, there are readers that can provide feedback and help the writer keep from doing harm with their stories. Misrepresentation and stereotyping can be extremely painful and continue a cycle that oppresses or mischaracterizes people that are already not well represented. So, hire a sensitivity reader, and listen to them if they tell you that you’re doing harm.

It’s not wrong advice in the slightest: if you’re not of the group, you’ll rely on relationships with people who are. Hire a sensitivity reader, pay them well, and listen to what they have to say. But a sensitivity reader can’t represent a whole community with its diversity of opinions, either. We have to go deeper. We have to cultivate a plurality of relationships. Listen, but also listen to the theory behind what they’re saying.

But here’s my challenge. Write the story with the queer main character where that deeply defines their life. I don’t mean necessarily a coming-out story, or a story wallowing in the oppression, but it’s okay — and I’d argue necessary — to do the work to really understand what makes us who we are to write good genre fiction.

Some of us are gluten-sensitive, and it’s just a trait that adds a bit of complexity. Sometimes it’s a thing that took decades of our lives, led to chronic illness, and defined our relationship with our families, the medical establishment, the very idea of work. So too with queerness: it’s not always flavor text, a bit thrown on top to add diversity to otherwise straight characters. In many ways, the approach of not letting queerness be a character-defining trait is itself a kind of tokenization: you can have a queer character if they’re not too queer. You can’t be progressive if it doesn’t upset the status quo.

Stories upset the status quo, out of necessity. Genre stories often upset the whole status quo, the very ideas that our world is built on. That’s what makes them great.

Make no mistake here: I’m not saying that if you’re straight you shouldn’t write queer characters, or that if you’re white you shouldn’t write racialized characters. But it does mean we need to learn, to listen, to understand and be clever. We need both to extrapolate from the information we do have and to listen to those unlike us for the information we don’t. You don’t just have to listen to your sensitivity readers (though you’d do well to do so!), you have to listen to the world around you, for the things that challenge the very ideas of how you think things are. As a writer you’ll grow from this. We can grow beyond the fear of doing harm and into a well-forged alliance of authors supporting each other, uplifting the more marginalized among us, sharing and understanding their stories not just when they’re written for us but when they arrive in their full complexity in a world that may not be ready for them. We need to cite our sources for some of our ideas. Two takes on the same thing uplift each other, and if we find ours takes space from the other, we should uplift the other, not shrink to the shadows, hiding the much-needed idea from the world.

Writing about queerness feels like an expanse of shifting terms, pitfalls under mundane seeming appearances, but that too is an experience of queerness. In my own lifetime, the word you’d use to refer to someone like me has changed not once, twice, but three times. That hesitation and discomfort, that desire to get it right and play it safe is one of the forces acting on queer people too.

Write the queer main character, but be prepared for the learning that will happen, both in the criticism and even more deeply in the introspection. Queerness is on one hand a mere fact of life for some people, but on another, a foundational relationship to the world — not always friendly, though sometimes it is, and in either case it can affect us to our core. Being non-white too is an experience of marginalization, but it is also a natural joy to exist in a skin and family and community that is very much who one is, inescapable. As white writers, we will find both the marginalization and the joy uncomfortable. As straight writers, likely the same for queerness. The discomfort will reveal stories you thought you could never tell, and if you nail a story, really seeing an aspect of the experience, that enriches us all, proving that we really can understand each other.

Less screaming into less void

How to stop holding social media wrong and start holding it right, or: an awkward introvert’s guide to not feeling quite so unheard.

Today someone said to me

absolutely nobody ever reads what I write.

And that’s a sentiment I’ve heard a lot from people who call themselves introverts (I have a whole batch of opinions about how we’ve decided to construct “introvert” and “extravert” as a society. I think the concepts are more harmful than helpful), but I think it’s a common enough thing to happen given some combination of social anxiety, a tendency to think about things (even over-think) before speaking, and being sensitive to the perceived status of others.

But in 30-plus years of talking to people on the Internet, including both being one of and dealing with people who are struggling in exactly this way, I have some strategies, as well as some critical reframing, to enable healthier social media use. If “nobody hears what I have to say” sounds like you, this is for you, but it’s not going to be super easy, because some of this is about changing goals.

This won’t make you an influencer, it’s not a guide to getting a thousand followers (though it might do that), and it certainly won’t get you a million. I’m just not interested in social media where I’m performing for an audience.

Social media, like most human communication, doesn’t work well with goals approached head on: human attention is a limited resource, and where advertisers and people who want status are around (and that’s everywhere), naively seeking attention doesn’t work. This is not to say that wanting attention is in any way bad: it is a core human need, and the care and attention of others is a key part of maintaining our psychological health, but going straight for it without developing the relationships to support it is counterproductive.

You’re not alone: I’d estimate about half of everyone on the Internet feels this way much of the time: unheard, wanting to express themselves, but mostly feeling like if they said anything, they’d be screaming (or whispering) into the void.

My advice is built for social media roughly the shape of Twitter: Mastodon, Twitter & Meta Threads, the sort of place where you can cultivate relationships with other people existing in public, where things are relatively open, and there’s no strict idea of membership. Some of it may work on other platforms, but the public nature, equal footing, and conversational style are all aspects that I think help. They also have some particular dangers: you are talking in public, so if things veer toward what you wouldn’t want the public to know about you, it may break down, and there’s the ever-present specter of harassment, though I think relationship building is one facet of making the Internet a safer place to be in public.

Goals

The first order of business is to look at goals and reframe them: most people in this position want first to be heard, and second to express themselves; this is actually a complicated thing. What that usually means is that we’re seeking human connection: we want response, and relationships forming around things we find important. It’s not just about being heard, but about being listened to and included. Being heard and expressing ourselves, while they’re the things we’re lacking, happen best as side effects of relationship building, so the goal is not to be heard but to have good conversation over a timescale of weeks.

We’re used to conversations having a very functional purpose: to convey information, to make a request, to answer a question. This is actually an unhelpful thing, because good relationship building is open-ended: answered questions, requests accepted or denied, or information conveyed are end states, they are conversation enders. Instead, building relationships is open questions, persistent interests, and ongoing history. Not to say that those functional components don’t have their place, but they’re not the point.

We’re also used to conversations as they’re portrayed in media, and as modeled by conversation spoken aloud. While short-form social media conversations have a lot of similarities to these, they’re distinctly different: they’re asynchronous (replies can come hours or years later), slow (even though often quite timely), and open-ended: unless we lock down our accounts or disable replies or whatever the platform allows, the conversation can usually continue in some form down the road, whether as direct replies or just picking up the topic again with similar groups of people.

Who to approach

One thing that all of these platforms have in common is that anyone can follow anyone (more or less; some accounts are locked). Brands want to be followed to disseminate ads and garner attention; influencers are also seeking an audience. Some people just feel compelled to have an account to announce what they’re working on. All of these are very asymmetrical relationships with their followers, and are not particularly likely to be people you form good relationships with. At best they will be parasocial, where the connection is one way and mostly imagined, and at worst they will make you feel like you’re shouting into the void.

Next, it’s very easy to discount someone who’s an expert or (in your mind) highly thought of on the topic you’re interested in. Don’t! There are times when they won’t really be open to communicating with you, but by and large, experts in things—especially the sciences or anything niche—love to talk about their topic. The question is: can you relate to them on a useful level? If you don’t mind listening to an expert be mostly intelligible but sometimes end up talking about nuances or specifics you don’t know, go for it! If they only talk about things that are completely over your head, or they’re mean about it, they’re unlikely to be a good match.

One other thing to watch out for: don’t ask for people’s services for free. Don’t ask for specific health advice from doctors, legal advice from lawyers, art from artists. That’s what being a paying client is for. Now if you want to ask them their opinion about the world instead of about your situation or interest or desire, go for it.

But what you really want is to find a group of people who are interested in things you want to talk about in a similar way: do you want to hear other learners talk about the basics of a thing? Experts in the field? People passionate about the thing? People who want to talk about the ways their work connects to the world? People who created the thing and what they think about what they made? Or people with a similarly fannish interest?

If you want to talk fan theories about a work, follow and talk to other fans. If you want to hear about the creation of a work, follow its author, editor, publisher or people interviewing them about that. In some fields it will be a close and mixed circle. In some it will be very different groups, maybe not overlapping at all. (Authors often hate to hear what fans—and haters—are saying about their work!)

What to talk about

What do you want to connect with other people over? What things do you care about? What sorts of things do you wish you could express yourself about, but nobody around you does?

It’s also best if you can find something that people don’t all agree on. The goal of all of this isn’t to be correct, to have the right opinions and gain status that way, but to usefully explore the variations of the topic with other people. For me that’s social justice, how social media should work, and communities (which leads into politics and labor organizing pretty naturally); but it works for tech things too. I prefer author spaces to fan spaces. I’d rather hear how an author thinks about stories than what plot holes fans can identify. I want to hear how fan fiction writers are thinking about their stories, too, and not just people talking about last night’s episode of whatever it is. Your mileage may vary, but do think about what kinds of conversation you want to have. And again: the goal is not to be heard, so don’t evaluate that, but to connect: are these people you want to listen to as well?

Find the people talking about that stuff, and get up in their replies. Ask probing questions sometimes. Nothing super invasive or annoyingly fast-paced, but the kind of thing where over the course of a few weeks, you’d have a few repeated interactions with the same people. Follow those people, and the best of the people replying to them. Don’t just hound a single target; look for and join existing conversations: find the things people say, and let them know you heard them. It feels strange that feeling heard involves making others feel heard, but that’s relationship development for you. It’s very reciprocal, even while it’s not transactional.

Then when you find an opinion you have a unique take on, say it! Ideally they’ll be following you by then, or at least some of them. A boost from them gets you into the group you’re hoping to participate in, but if not, don’t worry. This also becomes a history when people look at your profile when you reply to a conversation. They may see something related or relevant to them, and follow you for it. Having an opinion rather than waiting until it’s safe makes that so, so much more likely. In any case, you’re first starting as a nice and kind ‘reply guy’ and eventually maturing into ‘community member’ of that little subgraph.

You don’t need to prove your knowledge, and trying to do so will be harmful. Instead, ask good questions, fave posts that are insightful. Boost things that you think are particularly neat. If you have a related thing in an adjacent discipline or fandom or whatever, link them up! That’s the gold that makes other people feel seen. And if you do it, it will start happening to you.

And a thread of four or five thoughts is a good length. You don’t need to do epic mega-threads of everything you know, but if you start thinking “this could be a blog post” but not a long one, you’re probably on the right track. Especially if you are intrigued by what you’re writing, not doing it for the attention. Your own care about your own communication shows when others read it.

So then…

My rules are basically “spread out the load, across people and time”, “never fight” and “disagree freely when it matters”, but in the end it’s relationship building, not “expressing yourself”; ideally, those relationships are places where expressing yourself is natural and happy. Relationships don’t need to be strictly equal, but ideally you’ll become a peer of others. You may not have credentials, but you can grow a reputation for caring about something and some people. That’s all that matters.

This should get you started building relationships online and in public. It won’t get you a hive mind of people to give you advice, and it’s not how you build a circle of besties (though it might give you some tools and confidence to do so, and the advice to give what you want to get definitely works there too.)

And for what it’s worth, feel free to @ me on Mastodon, especially about this stuff.

Streaming Facebook Live to a Roku (the hard way, but there is no easy way)

This recipe is horrible but it works for me.

You’ll need the streamlink tool, and to know the IP address of your Roku.

In your browser, open the developer console and start the facebook live stream. Look for the .mpd request, which is the MPEG-DASH manifest for the stream, listing all the different quality settings available. Copy the URL.

Use streamlink to tell you what qualities are available:

streamlink '<the url>'

It will tell you something like:

[cli][info] Available streams: 144p+a66k (worst), 144p+a98k, 144p+a132k, 240p+a66k, 240p+a98k, 240p+a132k, 360p+a66k, 360p+a98k, 360p+a132k, 480p+a66k, 480p+a98k, 480p+a132k (best)

Pick your poison and start streamlink proper in external player mode. I’ve chosen the 480p+a98k stream since the a132k bitrate seemed to only sometimes work for me.

streamlink '<the url>' '480p+a98k' --player-external-http --player-external-http-port 31337

The output will include URLs like this:

[cli][info] Starting server, access with one of:
[cli][info] http://10.243.163.137:31337/
[cli][info] http://10.42.42.66:31337/
[cli][info] http://127.0.0.1:31337/

Choose the one on the same network as your Roku. Now to get the Roku to play it, you will have to urlencode the URL for the stream — replace : with %3A and / with %2F. The middle URL above is included in the command below.

curl -v -X POST "http://<your roku IP>:8060/input/15985?t=v&u=http%3A%2F%2F10.42.42.66%3A31337%2F&videoName=FBLive&k=(null)&videoFormat=mkv"

You should see the Roku flash a starting screen, then retrieve enough of the stream to begin, then play the stream.
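If you do this often, a small wrapper script can take care of the URL-encoding and the POST for you. This is just a sketch under the same assumptions as above (streamlink already serving on port 31337, the Roku’s External Control API listening on port 8060, and python3 available to do the percent-encoding); the script name and arguments are hypothetical.

#!/bin/sh
# cast-to-roku.sh — rough sketch: hand a running streamlink HTTP stream to a Roku.
# Usage: ./cast-to-roku.sh <roku-ip> <stream-url>
# <stream-url> is the http://host:31337/ address streamlink printed.
roku_ip="$1"
stream_url="$2"

# Percent-encode the stream URL (the : → %3A and / → %2F replacements done by hand above).
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$stream_url")

# Same POST as the curl command above (app id 15985, videoFormat mkv).
curl -v -X POST "http://${roku_ip}:8060/input/15985?t=v&u=${encoded}&videoName=FBLive&k=(null)&videoFormat=mkv"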

how to survive the apocalypse^W^Wa general quarantine

We joke that it’s an apocalypse but it is one. Apocalypse means revelation, and how our world works is being revealed starkly in its inequity and opportunism. This means we’re in a time of change, a time of understanding, and a time of danger. We will have to keep our heads together to make it through this well and healthy. It means getting in touch with our basic human parts and helping each other out, even if that mostly means staying apart.

Read a book. Read several.

Watch a philosophically optimistic television show then watch another one even if you have to use a watching guide to see the best parts.

Learn from the people in your life who’ve grown up on the internet and know a thing or three about connecting with others in ways that don’t involve physical contact often or at all. People get a rush of oxytocin out of skin contact with people they care about, but they also get it just connecting with others. Use those real connections to get your dose.

Put on pants. No, really. Every day. Get up on time, too.

Cook. Cook things that normally require a bit of time, attention, or both. Learn to make a meal you can produce a bunch of variations on. Eat leftovers. Really.

Multitask less, not more.

Video chat with your friends and family. Heck, meet some silly strangers and chat with animated selfies. Start a dance party in your house and share it with a stranger.

Offer to help your neighbors. If you have to go to the store, offer to shop for them too.

Join an online community. Remember that every name on that screen is also a real person, also probably missing high quality connection with others. Now is the time to let yourself be your best you and find other people who appreciate who you are. The weirder the parts of yourself you reveal, the more you’re likely to find the people like you. It’s okay to connect over your interests or values or philosophy.

Go outside. Get some sunshine. Take a walk. Take lots of walks.

Make art. Make something. Show off the things you make, especially with people who do similar things.

Remember that news is mostly poison and even when it’s medicine it’s best in small quantities. Ask yourself what you’re getting out of it.

And wash your hands. Seriously.

Community and chat affordances

I’m struggling with a community development phenomenon that’s been going on for a while around me, particularly in tech-centered Slack groups. Something about the affordances of Slack, combined with how people like to organize things, ends up making everything arranged topically.

There’s a problem here: relationships span topics. It’s not actually a good social schema. There are clusters of interest broader than that—an #art channel and a #writing channel and a #comics channel all overlap, especially for people who create, not just curate or consume these. If we’re building a space where people want to create, we need to help build relationships within the community that support this. Putting relationships secondary to topic gets in the way.

Even for a simple post, nothing deep, it means that there’s no channel for someone new to pop in and post social relationship building things. An example right now is this silly post about people as types of film—to which I want to post somewhere and say “oh my god, I feel so called out by this, I am totally science fiction and do those things”; but in the two communities I’m closest to, that means it’s appropriate in a channel set up for a closed group of friends, or the #scifi channel (which would tend to select for people similar to me, but wouldn’t build much relationship) or … where? Five hundred channels, none appropriate to post in.

That’s something that Mastodon and Twitter excel at to a degree—except that you post it to your followers and your second-degree connections. Relationship building is tied to viral mechanisms. It’s outward-facing, and to get the feedback loop that turns it into community, you have to luck into being boosted into a group with enough cohesion to connect back. This happens sometimes, if your followers are cliquish enough. For me, the javascript community works this way; and for talking about social software, Mastodon has lots of people who are interested in that specifically, and who often create things together. It works for that specific interest. A different interest would not gain such traction.

If that connection doesn’t happen, we’re left with watching for boosts and hoping our words were “valuable enough” to an anonymous public. It’s a harmful dynamic to have as the default mode: anonymous posting, amplified by semi-strangers, instead of connection made.

In a social Slack, this goes a very different way, an anxiety-provoking mess. Someone asks “Where should I post this?”:

The answer very often is “I guess #x, #y and #z are good channels,” but that’s not actually the question being asked. The person asking is really trying to figure out “Where will this be received well?” not “where is this on topic?”. And “this” isn’t actually the post, it’s themselves. Where will they be received well? The end result is that everyone is anxious and everything feels like a clique.

I don’t actually think it’s a clique phenomenon. It’s not a preference for existing relationships over new ones with new members, it’s not exclusivity. It’s a problem of social affordances, and it actually harms the ability to form new relationships, because there is no space that is appropriate for the social grooming and aligning oneself with a group. Topics are too narrow. General chat is too broad unless the whole Slack is narrowly focused and yet active enough to have community, and people’s sense of themselves tends not to account for how contextually they behave. Most people are not aware of most of the code switching they do. If we have to carry a single self-concept everywhere, we keep trying to fit ourselves into social schemas that don’t really fit, and we feel that tension in every interaction.

“If you market to everyone, you market to nobody” is one of those truisms that seems to adapt well to all sorts of social situations because the underlying phenomenon is that marketing is working with a social behavior.

We have to build systems that let us understand group structure, and for groups to have space for figuring out our alignment with them. Almost no social software does this, and after the onslaught of spam and then the default of hostility on the internet, I suspect none does.

In the early days of the Internet, insularity and homogeneity aside, the wide open access to things by default meant that we had liminal social spaces more easily. You could get close to a crowd, unknown to them, and scope them out. You could observe. They would be quite public most of the time, having enough psychological safety to exist without self-censoring. A newcomer would be quite anonymous, and could start participating pseudonymously, if not outright anonymously. You could see how a group reacted to you, adjust yourself and join, be accepted, and only then reveal who you are. Now the norm is for personal names, avatars, and outside contact information to be present for a profile to be ‘complete’ enough to participate, which means a fair bit of deciding how to present oneself to a group before being able to observe the norms of the group.

How do we build social spaces that leave more room for get-to-know-you? How can we reduce the prejudgement that comes from presenting a globally consistent face to the world, like individualistic social media does? How can we let people interactively vet the groups they’re joining before they commit? What affordances do we need to understand community from the point of view of a new member?

How do we expose our community values—the real ones, not formally decided official ones—to new and existing members?

You might notice in all of this that there is a tension between safety and functioning as a community. A functioning community means space for vulnerability. This intersects poorly with global hostility, but also with the things we do to avoid this hostility. It means that the walls we put up to keep out hostility are themselves hostile to new people. It puts people in an already vulnerable social position—being new—in the most exposed, vulnerable state in an online community.

These are devilish problems. I don’t have answers yet.

Cultural Zeitgeist and Names

I went to a tai chi class this morning and had a somewhat surreal experience. Right before my class was a kung fu class for young kids (aged 5 mostly), and I overheard a really familiar set of names.

“Get your coat, Aria”

“Eli, time to go”

“Zoë, did you remember your water bottle?”

These are my friends’ names, aged thirty or so. All of us are trans, and chose our names in years somewhat near when these kids were born. It makes me wonder about the cultural zeitgeist that makes this happen. Something in our collective understanding of the world makes us choose names like this. In some traditions, naming after parents or grandparents, or relatives of a certain generation, is common. In other groups of people I don’t know what makes them choose the names they do. Maybe it’s avoiding names that are already too prevalent in the culture around them. Maybe it’s famous people around that time.

And in another fun coincidence on the topic, today’s XKCD is about cohorts of names over time (and whether or not they will have experienced chicken pox).

Creating your own tiny static publishing platform

I’ve been using static publishing platforms for a while now. The output is enduring, easily archived, reliable and robust. As an author, there’s also a lot of truth to the unreasonable effectiveness of GitHub browsability, however much I disagree with the philosophy therein of committing build products in with the sources. I’ve used Jekyll; Hexo, which is what I use to write this blog; and, long ago, Movable Type.

However, all these systems are more complex than I’d like, and prone to bit-rot, far, far faster than the content they generate. Runtimes change. Dependencies rot as maintainers move on and can no longer account for those runtime changes. Development moves on to new major versions or being built with a newer fad in software design. Hexo has treated me better than most, but it is large, and the configuration rather arbitrary in places. Plugins have to be written specifically for Hexo, so there’s a balkanized ecosystem that doesn’t flourish as well as other parts do.

All these static publishing tools tend to have things in common. Builds have to happen as quickly as they can, and usually this is a bit too slowly. The author will want to preview their work in context, so serving up the rendered pages is important. Live rebuilds by file monitoring reduce friction in the workflow for some people, though I personally don’t care much for it, preferring to run a build when I’m ready.

It turns out that building derived things from a list of inputs with dependencies is a thing that computers have been told to do for a long time. Nearly all compiled software is built this way. We have tools like make(1) and a host of other, more complex and less general tools for various programming languages. I’ve always wondered why we didn’t use those to build sites as well. People have, it turns out, but make(1) in particular is a bit messier for the task than one would hope. There are other tools, and I settled on building with one called tup.

This weekend I built a small static publishing platform, and you can too. I wanted to build a site using Tufte CSS, and the minimalism of the presentation is a great fit for a super tiny static publishing platform.

A site like this needs to output:

  • Each post as an HTML file
  • An index page listing posts
  • Its CSS and any assets needed to render

This really isn’t a huge list.

First, let’s reach for a tool that can take a list of files and build all the derived things. make(1) is annoying here, because you have to tell it what to build, and it backtracks and figures out how to make it. We don’t actually have that information easily encoded, but we will have a list of sources, and can make a list of what to do with them. If you’re writing, you probably have a reason for it, right? Or an asset, it’s going to get used, why else would it be there? Starting at the source makes a lot more sense, and as it turns out, it makes incremental builds a lot faster. Enter our first player: tup.

$ brew cask install osxfuse
$ brew install tup

I’m not sure why tup now depends on FUSE, but that’s a task for another day.

Let’s start a directory for our project.

$ mkdir my-static-site
$ npm init
$ mkdir posts

Make a sample markdown file in the posts directory.

Next we create a Tupfile to describe how we’re going to build this site. Then we can just type tup to build the site, or tup monitor on Linux for that live building mode. First, let’s handle each post as HTML. We can use an off the shelf markdown renderer at first.

$ npm install marked

Here’s a Tupfile

: foreach posts/*.md |> marked %f -o %o |> public/%B.html

This means that for each post in the posts directory, we’ll make an equivalent HTML file.
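For example, assuming a test post named posts/hello-world.md, tup expands that rule to roughly:

marked posts/hello-world.md -o public/hello-world.html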

Let’s take a look at some of these rendered files. We’ll need to serve this directory by HTTP if we want to see it as we will on the web.

$ tup
$ npx serve public/

We can now open the site preview at the URL it spits out (usually http://localhost:5000)

Just a directory full of HTML, and ‘full’ is just our one test post, but we should be able to navigate to one. We have a static site, if a lousy one! That HTML is pretty spartan, so let’s add some assets.

Copy the et-book directory of fonts from the Tufte CSS package into the root of the project, and the tufte.css file.

Let’s add a few rules to publish those as part of the site, too. Added to the Tupfile:

: foreach et-book/et-book-bold-line-figures/* |> cp %f %o |> public/%f
: foreach et-book/et-book-display-italic-old-style-figures/* |> cp %f %o |> public/%f
: foreach et-book/et-book-roman-line-figures/* |> cp %f %o |> public/%f
: foreach et-book/et-book-roman-old-style-figures/* |> cp %f %o |> public/%f
: foreach et-book/et-book-semi-bold-old-style-figures/* |> cp %f %o |> public/%f
: foreach *.css |> cp -r %f %o |> public/%b

Run tup again.

The assets got copied in. Now we have to actually put them in the HTML. That’s going to mean templates.

ejs is simple enough and behaves tidily and doesn’t have a lot of dependencies, so let’s use that for output templates.

$ npm install ejs

We’re going to have to create a script to render our markdown and template the file.

Let’s call this render.js:

const marked = require('marked')
const ejs = require('ejs')
const { promisify } = require('util')
const { readFile, writeFile } = require('fs')
const readFileAsync = promisify(readFile)
const writeFileAsync = promisify(writeFile)
const path = require('path')

main.apply(null, process.argv.slice(2)).catch(err => {
  console.warn(err)
  process.exit(1)
})

// Usage: node render <layout.ejs> <post-template.ejs> <post.md> <output.html>
async function main(layoutFile, templateFile, postFile, outputFile) {
  const layoutP = readFileAsync(layoutFile, 'utf-8')
  const templateP = readFileAsync(templateFile, 'utf-8')
  const contentP = readFileAsync(postFile, 'utf-8')

  // Render the markdown, then compile both templates.
  const content = marked(await contentP)
  const layout = ejs.compile(await layoutP)
  const template = ejs.compile(await templateP)

  const dest = path.basename(postFile).replace(/\.md$/, '.html')

  // Wrap the rendered post in the post template, then in the layout.
  const body = template({ content, require })
  const rendered = layout({ content: body })

  await writeFileAsync(outputFile, rendered)
}

It expects two templates: a layout (the skeleton and boilerplate of the page) and a template (the post template). Let’s create those now.

layout.ejs:

<!doctype html>
<html>
<head>
<meta charset='utf-8'>
<link rel='stylesheet' href='tufte.css'>
</head>

<body>
<%- content %>
</body>
</html>

and post.ejs:

<section>
<%- content %>
</section>

And in the Tupfile, let’s replace the marked render with our own. Additionally, let’s tell tup that the HTML depends on the templates, so if those change, we update all the HTML.

: foreach posts/*.md | layout.ejs post.ejs |> node render layout.ejs post.ejs %f %o |> public/%B.html

Let’s run tup again and see the output. Much prettier, right?

Now about that index! The index needs to know the post’s title, and really, posts don’t even have titles yet. Let’s add some to our test post as YAML front matter. Add this at the top of the markdown file.

---
title: My Post
date: 2017-12-04 01:51:43
---

Every post gets a title and the date.

Let’s change our renderer to put the title on the page so we don’t have to reduplicate it.

Install front-matter

$ npm install front-matter

And update render.js

const marked = require('marked')
const ejs = require('ejs')
const { promisify } = require('util')
const { readFile, writeFile } = require('fs')
const readFileAsync = promisify(readFile)
const writeFileAsync = promisify(writeFile)
const frontMatter = require('front-matter')
const path = require('path')

main.apply(null, process.argv.slice(2)).catch(err => {
  console.warn(err)
  process.exit(1)
})

async function main(layoutFile, templateFile, postFile, outputFile) {
  const layoutP = readFileAsync(layoutFile, 'utf-8')
  const templateP = readFileAsync(templateFile, 'utf-8')
  const contentP = readFileAsync(postFile, 'utf-8')

  const post = frontMatter(await contentP)
  const content = marked(post.body)
  const layout = ejs.compile(await layoutP)
  const template = ejs.compile(await templateP)

  const dest = path.basename(postFile).replace(/\.md$/, '.html')

  const body = template(Object.assign({ }, post.attributes, { content }))
  const rendered = layout(Object.assign({ }, post.attributes, { content: body }))

  await writeFileAsync(outputFile, rendered)
}

And to post.ejs, the title.

<h1><%= title %></h1>

And in layout.ejs, let’s add a title tag too.

<title><%= title %> — My Blog</title>

Run tup again and let’s check our work.

Now a little harder part. Let’s make the index page.

We’ll need a script to generate it, index.js:

const ejs = require('ejs')
const fm = require('front-matter')
const path = require('path')
const { promisify } = require('util')
const { readFile, writeFile } = require('fs')
const readFileAsync = promisify(readFile)
const writeFileAsync = promisify(writeFile)

main.apply(null, process.argv.slice(2)).catch(err => {
  console.warn(err)
  process.exit(1)
})

// Usage: node index <output.html> <layout.ejs> <index.ejs> <post.md> [<post.md> ...]
async function main(outputFile, layoutFile, templateFile, ...metadataFiles) {
  const tP = readFileAsync(templateFile, 'utf-8')
  const lP = readFileAsync(layoutFile, 'utf-8')

  // Read each post's front matter and note the HTML file it will be rendered to.
  const metadata = await Promise.all(
    metadataFiles.map(
      f => readFileAsync(f, 'utf-8')
        .then(fm)
        .then(e => Object.assign(e.attributes, { dest: path.basename(f).replace(/\.md$/, '.html') }))))

  // Newest posts first.
  metadata.sort((a, b) => {
    a = new Date(a.date)
    b = new Date(b.date)
    return a > b ? -1 : a < b ? 1 : 0
  })

  const layout = ejs.compile(await lP)
  const template = ejs.compile(await tP)

  const rendered = layout({
    title: 'Posts',
    content: template({ metadata })
  })

  await writeFileAsync(outputFile, rendered)
}

And an index.ejs:

<h1>My Blog</h1>
<section>
<% metadata.forEach(entry => { %>
<p>
<a href='<%= entry.dest %>'><%= entry.title %></a>
</p>
<% }) %>
</section>

And in our Tupfile:

: layout.ejs index.ejs posts/*.md |> node index %o %f |> public/index.html

Run tup once more and we should have a bare-bones site.

Let’s add one more thing before we go, some dates to the posts.

To the template calls in both render.js and index.js, let’s add the require function, so that templates can require their own stuff. Where there’s template({ metadata }), let’s change that to template({ metadata, require }).
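Concretely, the two changed calls look roughly like this (the surrounding code stays the same):

// render.js — pass require through to the post template
const body = template(Object.assign({ }, post.attributes, { content, require }))

// index.js — same idea for the index template
const rendered = layout({
  title: 'Posts',
  content: template({ metadata, require })
})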

Then, let’s install strftime.

$ npm install strftime

An expanded index.ejs:

<% const strftime = require('strftime') %>
<h1>My Blog</h1>
<section>
<% metadata.forEach(entry => { %>
<p>
<a href='<%= entry.dest %>'><%= entry.title %></a> <%= entry.date ? strftime('%Y-%m-%d', entry.date) : '' %>
</p>
<% }) %>
</section>

And the page template, post.ejs:

<% const strftime = require('strftime') %>

<h1><%= title %></h1>

<% if (date) { %>
<p>posted <%= strftime('%Y-%m-%d', date) %></p>
<% } %>
<section>
<%- content %>
</section>

Run tup once more, and you’ve got a static site, being generated by some simple code.
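For reference, the complete Tupfile at this point is only a handful of rules (assuming the templates live in the project root, as above):

: foreach posts/*.md | layout.ejs post.ejs |> node render layout.ejs post.ejs %f %o |> public/%B.html
: foreach et-book/et-book-bold-line-figures/* |> cp %f %o |> public/%f
: foreach et-book/et-book-display-italic-old-style-figures/* |> cp %f %o |> public/%f
: foreach et-book/et-book-roman-line-figures/* |> cp %f %o |> public/%f
: foreach et-book/et-book-roman-old-style-figures/* |> cp %f %o |> public/%f
: foreach et-book/et-book-semi-bold-old-style-figures/* |> cp %f %o |> public/%f
: foreach *.css |> cp -r %f %o |> public/%b
: layout.ejs index.ejs posts/*.md |> node index %o %f |> public/index.html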

Why not Babel?

People always get really enthusiastic about babel.

I get it. Using all of ES6 plus whatever stuff you want to throw at it is cool.

However, consider this:

:; npm i string-tokenize
+ string-tokenize@0.0.6
added 61 packages in 5.212s

:; npm ls
t@1.0.0 /Users/aredridel/Projects/t
└─┬ string-tokenize@0.0.6
├─┬ babel-plugin-transform-object-rest-spread@6.26.0
│ ├── babel-plugin-syntax-object-rest-spread@6.13.0
│ └─┬ babel-runtime@6.26.0
│ ├── core-js@2.5.1 deduped
│ └── regenerator-runtime@0.11.0
├─┬ babel-polyfill@6.26.0
│ ├── babel-runtime@6.26.0 deduped
│ ├── core-js@2.5.1
│ └── regenerator-runtime@0.10.5
├─┬ babel-register@6.26.0
│ ├─┬ babel-core@6.26.0
│ │ ├─┬ babel-code-frame@6.26.0
│ │ │ ├─┬ chalk@1.1.3
│ │ │ │ ├── ansi-styles@2.2.1
│ │ │ │ ├── escape-string-regexp@1.0.5
│ │ │ │ ├─┬ has-ansi@2.0.0
│ │ │ │ │ └── ansi-regex@2.1.1
│ │ │ │ ├─┬ strip-ansi@3.0.1
│ │ │ │ │ └── ansi-regex@2.1.1 deduped
│ │ │ │ └── supports-color@2.0.0
│ │ │ ├── esutils@2.0.2
│ │ │ └── js-tokens@3.0.2
│ │ ├─┬ babel-generator@6.26.0
│ │ │ ├── babel-messages@6.23.0 deduped
│ │ │ ├── babel-runtime@6.26.0 deduped
│ │ │ ├── babel-types@6.26.0 deduped
│ │ │ ├─┬ detect-indent@4.0.0
│ │ │ │ └─┬ repeating@2.0.1
│ │ │ │ └─┬ is-finite@1.0.2
│ │ │ │ └── number-is-nan@1.0.1
│ │ │ ├── jsesc@1.3.0
│ │ │ ├── lodash@4.17.4 deduped
│ │ │ ├── source-map@0.5.7 deduped
│ │ │ └── trim-right@1.0.1
│ │ ├─┬ babel-helpers@6.24.1
│ │ │ ├── babel-runtime@6.26.0 deduped
│ │ │ └── babel-template@6.26.0 deduped
│ │ ├─┬ babel-messages@6.23.0
│ │ │ └── babel-runtime@6.26.0 deduped
│ │ ├── babel-register@6.26.0 deduped
│ │ ├── babel-runtime@6.26.0 deduped
│ │ ├─┬ babel-template@6.26.0
│ │ │ ├── babel-runtime@6.26.0 deduped
│ │ │ ├── babel-traverse@6.26.0 deduped
│ │ │ ├── babel-types@6.26.0 deduped
│ │ │ ├── babylon@6.18.0 deduped
│ │ │ └── lodash@4.17.4 deduped
│ │ ├─┬ babel-traverse@6.26.0
│ │ │ ├── babel-code-frame@6.26.0 deduped
│ │ │ ├── babel-messages@6.23.0 deduped
│ │ │ ├── babel-runtime@6.26.0 deduped
│ │ │ ├── babel-types@6.26.0 deduped
│ │ │ ├── babylon@6.18.0 deduped
│ │ │ ├── debug@2.6.9 deduped
│ │ │ ├── globals@9.18.0
│ │ │ ├─┬ invariant@2.2.2
│ │ │ │ └─┬ loose-envify@1.3.1
│ │ │ │ └── js-tokens@3.0.2 deduped
│ │ │ └── lodash@4.17.4 deduped
│ │ ├─┬ babel-types@6.26.0
│ │ │ ├── babel-runtime@6.26.0 deduped
│ │ │ ├── esutils@2.0.2 deduped
│ │ │ ├── lodash@4.17.4 deduped
│ │ │ └── to-fast-properties@1.0.3
│ │ ├── babylon@6.18.0
│ │ ├── convert-source-map@1.5.1
│ │ ├─┬ debug@2.6.9
│ │ │ └── ms@2.0.0
│ │ ├── json5@0.5.1
│ │ ├── lodash@4.17.4 deduped
│ │ ├─┬ minimatch@3.0.4
│ │ │ └─┬ brace-expansion@1.1.8
│ │ │ ├── balanced-match@1.0.0
│ │ │ └── concat-map@0.0.1
│ │ ├── path-is-absolute@1.0.1
│ │ ├── private@0.1.8
│ │ ├── slash@1.0.0
│ │ └── source-map@0.5.7 deduped
│ ├── babel-runtime@6.26.0 deduped
│ ├── core-js@2.5.1 deduped
│ ├─┬ home-or-tmp@2.0.0
│ │ ├── os-homedir@1.0.2
│ │ └── os-tmpdir@1.0.2
│ ├── lodash@4.17.4
│ ├─┬ mkdirp@0.5.1
│ │ └── minimist@0.0.8
│ └── source-map-support@0.4.18 deduped
├─┬ chai@3.5.0
│ ├── assertion-error@1.0.2
│ ├─┬ deep-eql@0.1.3
│ │ └── type-detect@0.1.1
│ └── type-detect@1.0.0
└─┬ source-map-support@0.4.18
└── source-map@0.5.7

:; npm rm string-tokenize
removed 61 packages in 1.356s

:; npm i @aredridel/string-tokenize
+ @aredridel/string-tokenize@1.0.0
added 1 package in 1.317s

:; npm ls
t@1.0.0 /Users/aredridel/Projects/t
└── @aredridel/string-tokenize@1.0.0

This is roughly the same code. I ported it away from ES6 Modules, used core node assert instead of chai (it covers the same functionality being used!), and removed Flow type annotations. It works in node 8 easily, and should work in node 4.

I work in constrained environments: page load time is very important to me. If I’m loading even a fraction of this in a browser, I’ve blown my budget. I run a bunch of hobby projects on a very inexpensive server. RAM is at a premium. All of these things have costs.

A small failure, tracing complex causes, and the ethics of software design

Today I had an interview; not the super intense kind, but grab coffee with a recruiter, chat about goals and desires and see if companies she represents are a match for my skillset.

I missed the appointment. There are myriad reasons, including my being bad with dates and time in general, but I’ve got a system that usually works for me. I delegate carefully to my computer and set every appointment to vibrate my phone. However, this particular event was vexed from the beginning by several failures, each individually insufficient to make me miss the appointment, but which together did the job nicely.

We chatted briefly the other day to set up the appointment. I put it in my calendar, and she sent me a calendar invite via Google calendar to my personal email address (which is not a gmail account, however it is the email I sign in to Google with). Failure number one: There’s no way not to have a google calendar, so Google auto-added it to my calendar there, and I strongly suspect events are considered somewhat confirmed (at least receipt of invite shown) at this point.

Her invite had more information (like location) than my hand-entered entry, so I opened the .ics file that Google emails to my address when someone sends me a Google calendar invite. I use Apple’s Calendar app, since it’s got a much faster user interface than Google calendar, and syncs with iCloud quite nicely. It’s a tiny bit more in my control than Google is. When I opened the .ics file, it added it to my calendar on the screen, and I deleted my copy of the event.

Failure number two: events added from an .ics file sent by Google can’t be edited. Including the ability to set up a notification.

Failure the third: immediately after, I get a message from the Calendar app that it couldn’t sync the event, error “403” (HTTP for “Forbidden”, which in this case tells me about as much as the word “potato”). Apple has chosen a protocol called CalDAV for its calendars, and has not put effort into making sure all the error messages are meaningful. It then presents me with three opaque options: “Retry”, “Ignore” and “Revert to Server”. The first fails with 403 again. The second will leave the entry on my computer, but not sync it to iCloud, and I only know this from a little experimentation and knowledge of how these systems work under the hood. The third removes the entry from my calendar. Failure the fourth: none of these options are useful. I eventually ignore the error and set about making it work right.

I copy and paste the event to another calendar in the Calendar app. This time it works, and I copy it back to the correct calendar, the one I have set up to sync to iCloud and my phone. It works. Or so it seems. I move on with my day. I have an event in Calendar, that hasn’t given me a sync error, that has a notification, and the time, date and location of my meeting. It does, however, try to send an invite to the recruiter who invited me, making a second meeting at the exact same time and place. I decline to do so. Points to Apple for giving me the option.

This morning I wake up, glance at my phone’s calendar, see I have no events until afternoon, and sleep late. I miss my appointment.

Failure the fifth: It turns out that appointment didn’t sync to the phone. I had checked the original, hand-entered appointment, since I’m insecure about calendars, but that one got deleted way up at the start of this fiasco. The app I use to synchronize an Android phone with an iCloud calendar is, while a little ugly in the user interface, a normally robust piece of software that has not betrayed me until today. There was no error, and so I don’t know whether this event didn’t sync fully in some way or whether the sync program is broken even though it shows my event later in the day. It shows on my husband’s phone, which subscribes directly via iCloud since it’s an Apple device. It made it to Apple’s servers.

Failure the sixth: My computer froze last night, and so, it also did not show any hint that I might have an event today.

All in all, I missed a relatively trivial event. However, if this had been a later interview, this may well have cost me a job. This is where the ethics of software design come in. These are all failures of engineering, and some of them quite foreseeable. Software must plan to have bugs, to fail gracefully. The failure case here was silent, and may well be costly to users who experience it. However, at the end of the day, there is no accountability: aside from the chance they read this blog post, engineers at Apple and Google will never know about this failure. I have no options for managing this data that do not involve third parties, short of hand-entering calendar entries into multiple devices.

There were also a number of preventable failures, mostly in the design of these pieces of software.

  • Why can’t I opt out of having a Google Calendar and interact with Google Calendar users entirely by email? They seem awfully certain I’ve received invites when I have not, though in this case that part worked out.
  • Apple’s engineers did not account for getting error messages to humans, and so we end up with opaque, low-level errors like “403” with no meaning and no way to correct whatever condition caused them. We just guess at what might be wrong and try to act accordingly. I may well have guessed wrong.
  • Apple’s calendar program is not designed as a distributed system. It assumes networks are reliable, bugs do not exist, and that errors are transient. The reality is that none of these things are true. Its design does not expose details of what it’s doing, does not expose the state of sync clearly, and does not let you inspect what’s going on. It sweeps its design flaws under a very pleasant user interface rug.
  • Google’s dominance of the industry has left users with few working alternatives, and its products do the bare minimum to interoperate, if at all, and usually only when Google owns the server portion. Their calendar application on my phone does not speak the standard protocols used by Apple.
  • Apple’s extensions to CalDAV with push notifications for added events are also private, and third-party applications cannot use those features.
  • None of these applications center the user’s agency and let them make a fallback plan when these services fail, and these services do fail, often silently.

My needs are modest: enter events in calendar on whichever device I’m using, particularly the ones with good keyboards. Have my phone tell me where I need to be.

Modes of analysis that surface these kinds of design and user experience issues are central to designing good applications. It’s highly technical work, requiring the expertise of engineers and designers, especially as evaluating potential solutions to these design problems is part of the task.

Centering ethics in the design would have changed the approach most of these engineers took in the design of these applications. Error messages would have been a focus. A mode for working when the network is down or server is misbehaving may have been created. A trail of accountability to diagnose the failure would have been built. Buzzwords like ‘user agency’ aren’t just words in UX design textbooks (though they should be), but the core of the reason software exists. Engineering that centers its users, analyzes their needs, and evaluates the ways potential solutions fail and solves those problems is what engineering should be.

My apologies to the recruiter I stood up today, I hope you enjoyed a latte without me, and talk to you Monday.

Radical Modularity

Here’s a question: What if everything were a module?

This post is derived from a talk I gave at Web Rebels 2016.

What is a module?

I’m actually going to spend some time on this one because while it’s an everyday word in our industry, it’s one we don’t often hear defined. I want you to think about what it means to make software modular.

A module is a bit of software that has an interface defined between it and the rest of the system.

This is one of the simplest definitions I could come up with. There are some implications here: There’s a separation between the module and the rest of the system. I’m not saying how far, but it’s actually a separate entity. I will get into what “interface” can mean later in this post. The bit I think is really interesting is the word defined. This means we’ve made decisions in making a module. Extracting something blindly into a separate file probably only counts on a technicality. Defining something is intentional.

I’m a programmer working for a package manager company, I think of code as art, and I’ve been making open source my entire professional life, so I’ve got a particular bunch of things I also mean when I say module.

A piece of software with a defined interface and a name that can be shared with or exposed to others.

I won’t advocate sharing everything, since I’m talking about radical modularity and not radical transparency here, but I want the option. The rest, though, are where things get interesting. In particular, I want to talk about names.

When we name something, it takes on a life of its own. It’s now an object in its own right. This happens when we name a file, it happens when we name a package. A name is the handle we can grab onto something with mentally and start treating it independently.

A defined interface is the first step of independence. It’s the boundary that gives a thing a separate internal life and external life. Things outside a module get a relationship with the boundary, and inside the module, anything not exposed by the boundary can be re-arranged and edited without changing those relationships.

I named her. The power of a name. That’s old magic. —Tenth Doctor, “The Shakespeare Code”

Not every module even gets published or becomes a package on a package registry like npm or crates. We usually push things to GitHub early, but source control isn’t quite the same thing as publishing things for others to use. Just separating things into a separate file — there’s the naming — and choosing what constitutes the interface to the rest of the system is modularizing.

We can commit to names more firmly by publishing and giving version numbers, and breathe life into something as a fully separate entity, but that’s not required, and that alone isn’t often enough to make a whole project.

Self-sustaining open source projects have to be bigger than tiny modules, and so you can either enlarge modules until they become self-sustaining, or your project is a group of related modules, like Hoodie, where there are a bunch of small, named parts.

There’s another option, which is to make modules so small they trend toward finished, done, names for a now-static thing. Maybe they are bestowed upon someone who tidies them up, finishes a few pieces that we left ragged, maybe just left in a little library box for someone else to discover. Maybe they’re published, maybe they’re widely used and loved, maybe not. Maybe they end up in a scrap heap for our later selves or others to build something new from.

Art does not reproduce what is visible; it makes things visible. —Paul Klee

Something the open source movement did that isn’t all that widely acknowledged is make a huge ecosystem for the performance of software as a social art. Not only that, but since then the explosion of social openness in the creation of software has created a new, orthogonal movement, concerned less with copyright law and open engineering than with the open sharing of knowledge and techniques, and as a side effect of that and the rise of the app, software engineering now includes the practice of software as art and craft.

I practice code as an art.

A good portion of that is making concepts visible, and ultimately that often means naming them. With art, though, there’s some tension with engineering: sometimes we do things to show instability, to test a limit, or to reveal the tensions within our culture or the systems we build. We can create a module of code only to abandon it once it’s served its cultural purpose – be it connecting two things together, mashup style, or just moving on because there’s a better way to do things.

One of the interesting differences about software artifacts as a medium of art, contrasted with other fine arts, is that despite working in a very definite (if abstract) medium, much of what we make is never finished. It exists in our culture – yes, software creation is a reflection of our culture, and a culture of its own – and as web workers, especially those working as artists, a lot of what we create straddles the lines of engineering, fine art, performance art, and craft.

Sometimes, too, a destructive act can be an artistic act: in the unpublishing of left-pad, Azer Koçulu revealed that some of us have been choosing dependencies with little thought, and revealed just how interdependent we are with each other when we work in the open and rely on each other’s software.

So it goes that even the biggest pieces of software are made up of smaller parts. It’s the natural way to approach problems and make them solvable. A large module is nothing more than a collective name for little modules that may not have their full names and final forms.

I really like small modules as a norm, because I think of things in terms of named objects. I’m happy to abandon a thing I think no longer suits, and it’s easier to abandon a small module than a big one. They approach done, so I’m happy to use a three-year-old tiny module, but a big project that’s three years unmaintained is likely to be bug-ridden and poorly integrated.

Back to the thesis here:

What if we make everything a module?

What happens when we break off pieces and name them well? What happens when, where we can, we publish them, share those names, and let others wield that power over them? What does this do to our culture as programmers?

Practical approach to building modularity

I talk about npm a lot, but this can be extended: open your mind and projects and think about making interfaces around new things.

It’s quite possible to take the module system that node uses and extend it to new contexts, and as we’ve seen with projects like browserify, it’s possible to keep the same abstract interface but package things up in new ways for contexts they were not designed for.

Modularizing CSS

When I started at npm we had a monolith of old CSS built up, the kind most web projects accrete – a lot of styles we weren’t positive were unused, a lot of pieces and parts that depended on each other. Since then – with a huge shout-out to Nicole Sullivan and her body of work on this; in particular, go watch her talks or read “Our best practices are killing us” – we’ve started tearing apart the site and rebuilding it with, get this, modules of CSS, with defined interfaces between them and the rest of the system.

They all have names – package names – and versions. So we’ll have a module like @npmcorp/pui-css-tables (the PUI is because we forked this system from a component system used at Pivotal Labs).

In this case we’re using a tool called dr-frankenstyle. It’s pretty simple: it looks at all the node modules installed in our web project and then concatenates, in dependency order, any CSS they export via a style property in their package metadata.

This means our CSS actually has dependencies annotated into it, in the package metadata, and it’s in pieces and parts. Because of this, and because it’s named, we can start grappling with these things individually, and start making sense of what otherwise becomes a huge mess.
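To make that concrete, the package metadata for one of those CSS modules looks something like this – the version number and the dependency name here are illustrative, but the style entry point is the field dr-frankenstyle reads:

{
  "name": "@npmcorp/pui-css-tables",
  "version": "2.0.0",
  "style": "tables.css",
  "dependencies": {
    "@npmcorp/pui-css-typography": "^2.0.0"
  }
}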

There’s another project called atomify-css that can do similar things, and both of these systems will do one set of fixups as they build a final CSS stylesheet: they identify assets that those stylesheets refer to, and copy those over and adjust the path to work in the new context. Atomify in particular has modules for several languages that bring this style of name resolution.

This turns out to be super powerful, because now it leads us into wanting to modularize and make explicit the dependencies between all the things.

Now, CSS has some pitfalls: browsers still see all of a page’s CSS as a single namespace, a single heap of rules to apply. This isn’t a clean interface, so modularizing everything doesn’t automatically solve all your problems. It can give us some new tools though.

Modularizing SVG

<svg>
<use xlink:href="./wonky.svg#camera-lens"/>
</svg>

What happens if we put SVG files into packages? What’s the interface to an SVG? The text? Parsed XML? Just the file name?

<svg>
<use xlink:href="@stoppard/hound/props/chocolates.svg#chateu-neuf-de-pape"/>
<use xlink:href="./wonky.svg#camera-lens"/>
</svg>

We’ve got dependencies. SVG files can load other SVG files. xlink attributes could be followed, and postprocessing tools could inline those, making production-ready and browser-ready SVGs from more modular ones.
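As a sketch of what that postprocessing could look like, here’s a tiny pass that rewrites package-style xlink:href values into plain relative paths, using the resolve module I describe later in this post. The function name and the rewrite-only behavior are mine; a production tool would go further and inline the referenced symbols:

// rewrite package-style xlink:href values into plain relative paths
var path = require('path');
var resolve = require('resolve');

function rewriteRefs(svgText, outputDir) {
  return svgText.replace(/xlink:href="([^"#]+)(#[^"]*)?"/g, function (whole, target, fragment) {
    // leave relative and absolute paths alone; only package specifiers change
    if (target.charAt(0) === '.' || target.charAt(0) === '/') return whole;
    var file = resolve.sync(target, { basedir: outputDir });
    return 'xlink:href="' + path.relative(outputDir, file) + (fragment || '') + '"';
  });
}

module.exports = rewriteRefs;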

Now that we have HTML5 support in browsers, we can embed SVG directly into HTML, too.

That brings me to…

Modularizing Templates & Helpers

We don’t just build raw HTML anymore; we use templating systems to break it apart. What if we published those templates as packages for others to consume – what if we made them modules?

In the process of reworking npm’s website, we had components of CSS whose interface is the HTML required to invoke them: A media block has a left or right bit of image, and then some text alongside. A more sophisticated component might string several of those together, add a heading banner and some icons. The HTML required was getting complex and fragile, so that any change to the component would require the consumer to update the HTML to match. Icons were inlined, so changing an icon would mean editing a large blob of SVG.

In semantic versioning terms, every change became a major. While integers are free, time to check what’s needed to update isn’t, so this wasn’t going to be a scalable approach.

<div class="a-component"> 
<div class="media-block">
<div class="media-left">
<svg> ... icon here </svg> ...
</div>
</div>
</div>

becomes

{{fromPackage "@a-team/pity-the-media" .}}

We started moving the handlebars templates we use that have the HTML to invoke a component on our website into the modules. This moved the interface boundary into something more stable. Now we can change what that component needs and does as the design evolves without having to go propagate those changes to an unknown number of callers.
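For illustration, a helper like that can be only a few lines. This sketch assumes each published component names its template in a template field in package.json – that field, and the implementation, are hypothetical, not the code we actually shipped:

var fs = require('fs');
var resolve = require('resolve');
var Handlebars = require('handlebars');

var compiled = {}; // compiled templates, keyed by package specifier

// {{fromPackage "@a-team/pity-the-media" .}}
Handlebars.registerHelper('fromPackage', function (spec, context) {
  if (!compiled[spec]) {
    var pkgFile = resolve.sync(spec + '/package.json', { basedir: __dirname });
    var pkg = JSON.parse(fs.readFileSync(pkgFile, 'utf8'));
    var template = resolve.sync(spec + '/' + (pkg.template || 'template.hbs'), { basedir: __dirname });
    compiled[spec] = Handlebars.compile(fs.readFileSync(template, 'utf8'));
  }
  return new Handlebars.SafeString(compiled[spec](context));
});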

You’ll remember I mentioned SVG icons. It turns out that inlining small icons is one of the most efficient ways to use them, but it doesn’t scale very well in the development process. The alternative, icon fonts, requires a lot of infrastructure and is brittle enough that it stifles the act of moving things into a module. Icons have to be in large groups with that approach, and that trends toward very large modules, and probably toward less efficient ways of doing things.

What I ended up doing was making a small handlebars helper like the fromPackage helper I just showed the call to, plus a couple of helpers for loading SVG from packages. Called from our handlebars templates, a single helper invocation can load and parse SVG from a package, do simple optimizations, cache the result, and inline it. SVGs, too, then became modules we can publish separately or in small groups.
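A minimal sketch of the SVG side, with the caching but without the optimization pass – again, the helper name is illustrative:

var fs = require('fs');
var resolve = require('resolve');
var Handlebars = require('handlebars');

var svgCache = {}; // inlined SVG markup, keyed by package specifier

// {{svgFromPackage "@mad-science/luncheon-meats/baloney.svg"}}
Handlebars.registerHelper('svgFromPackage', function (spec) {
  if (!svgCache[spec]) {
    var file = resolve.sync(spec, { basedir: __dirname });
    svgCache[spec] = fs.readFileSync(file, 'utf8'); // a real helper would optimize here
  }
  return new Handlebars.SafeString(svgCache[spec]);
});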

A bit of an aside:

React Changed Everything

There is a reason that React talks have been so popular for the last couple of years. It really did make a radical shift in how we design frameworks, and more importantly to me, it helped give components better interfaces. Stateless components have well-defined inputs and outputs. Side effects are reduced or eliminated. This means modules can more easily declare their dependencies and present simple interfaces.

This also means that React components fit into packages really neatly, and automatically give an interface that’s like a function call. If you’re a React programmer, you’ll probably recognize my fromPackage helper as very similar to node’s require, which is how most of us use React these days, as webpacked or browserified modules.

What can we steal from React?

That modularity and clear boundary on interfaces changed so much. Let’s re-think how we integrate things so they have interfaces that simple and clean. There have been a lot of experiments, too, on having React components automatically namespace the CSS they require and then emit HTML that uses the namespaced version. By moving the module boundary from the raw CSS to something that gets called, an active process, CSS namespacing woes can be solved by separating, just a little, what the humans type from what the browser interprets.

How radically can we change the complexity of an API by changing what kind of thing we export?

What else can be a module?

At PayPal, I did work to make translations separately loadable things, which leads rapidly into separating those pieces into entirely separate packages with their own maintenance lifecycle. When you have a separate team working on something, having a clean boundary can be a great way to let work progress at a more independent pace. What else can we modularize?

Here’s one that’s really interesting: Kyle Mitchell is a lawyer who uses a lot of software in his work to draft legal texts. In so doing, he’s published a lot of tiny modules of interesting stuff. Mostly they’re JSON files, cleanly licensed and versioned, or small tools for assembling legal language out of smaller pieces, re-using tried and tested phrasings of things. Sounds familiar, right?

Text itself can be a module with an interface, even if that interface is just concatenating a bit of text.

We can even make modules that are nothing but known good configurations of other modules, combined and tested.

Making Modularity

Hands on!

A lot of this is going to be specific to node, but I like node not just because it’s JavaScript and I think JavaScript is a lot of fun, but because its dependency model is actually pretty unique. That’s actually a lot of what drove me to node in the first place.

The underappreciated feature of node modules is that they nest – this really bugs Windows users since their tools can’t deal with deep paths – and this means that a module can get a known version of something, defined entirely on its own terms, which means that what a package depends on is either a less important part of, or not even a part of, the interface to a module. We spend a lot less time wrangling which versions of things all have to be available at once, and we can start putting dependencies behind the boundary that a module defines.

Most of what I build is built on @substack’s resolve module:

resolve.sync('@mad-science/luncheon-meats/baloney.svg')

give me the file baloney.svg from the @mad-science/luncheon-meats package

This is a really simple module that implements node’s module lookup strategy for files that aren’t JavaScript. You can say “give me the file baloney.svg from the @mad-science/luncheon-meats package”, and it will find it, no matter where it got installed into the tree – remember, node modules let you share implementations if two things require compatible versions of a module. We name the file, and in this case the actual interface of this hypothetical module is just to read the file once you figure out where it is.

That’s our primitive building block. I like this one because it matches how everything else in the runtime I use most works.

There’s another thing that’s common to do: Add a new field to package.json. Dr. Frankenstyle uses the property style rather than main to say which file is the entry point to the package. This means that modules can do dual-duty: grouping different aspects of a thing together into a single component, rather than making the caller assemble the pieces when the pieces all go together anyway.
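A sketch of how a build tool can honor a field like that, assuming the style convention described above (this is illustrative, not dr-frankenstyle’s actual code):

var fs = require('fs');
var path = require('path');
var resolve = require('resolve');

// find the CSS entry point a package declares via its "style" field
function styleEntry(pkgName, basedir) {
  var pkgFile = resolve.sync(pkgName + '/package.json', { basedir: basedir });
  var pkg = JSON.parse(fs.readFileSync(pkgFile, 'utf8'));
  if (!pkg.style) return null; // this package doesn't export CSS
  return path.join(path.dirname(pkgFile), pkg.style);
}

// e.g. styleEntry('@npmcorp/pui-css-tables', process.cwd())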

One of the things I ask when building interfaces in general, and module systems in particular is “how many guarantees can we make?”

  • Dependencies isolated
  • Deduplication
  • Local paths are relative to the file
  • Single entry point

One of the guarantees I love most is that local paths in node modules are relative to the file. This is one of the things that makes it possible to break things into modules without breaking down the interface they had as a monolithic unit. It really makes me sad that most templating languages don’t maintain filenames deep enough into their internals to implement this. It’s good for source mapping and it’s good for modularity.

A lot of people fought this – they keep fighting it in node development – but I think that fight exposes how people think about modularity: it’s a symptom of making the whole project a single module and giving the components just enough name to navigate by, but not enough that they can live on their own.

I keep building similar models. I often make path rewrites for resources, so that paths relative to the file where a resource is actually stored keep working when it’s loaded into a new context. Sometimes that’s inlining. Sometimes that’s copying modules into a destination and making sure their assets come along for the ride.
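Here’s the shape of one of those rewrites, for url() references in CSS that’s being concatenated into a new location – the function and the regular expression are a sketch, not a drop-in tool:

var path = require('path');

// rewrite relative url(...) references so they still point at the right
// asset after the CSS is concatenated into outputDir
function rebaseUrls(cssText, sourceFile, outputDir) {
  return cssText.replace(/url\((['"]?)([^)'"]+)\1\)/g, function (whole, quote, ref) {
    if (/^(https?:|data:|\/)/.test(ref)) return whole; // absolute and data URLs stay put
    var absolute = path.resolve(path.dirname(sourceFile), ref);
    return 'url(' + quote + path.relative(outputDir, absolute) + quote + ')';
  });
}

module.exports = rebaseUrls;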

This is replicating the guarantees of node’s module system, because they give me some flexibility and durability in what I make. If things have their own namespaces, their own dependencies, then I can break them less often or not at all.

Going Meta

If everything’s a module, what can we do with that?

Have we simplified things enough to start giving our programs the vocabulary to start extending themselves? Can we start talking about constructing programs out of larger building blocks, even if they’re sometimes special purpose?

Can modules or remotely loaded packages be first-class objects in our programs?

What about generating new modules in the course of using our programs, and letting our users share them?

What other kinds of interfaces can we give? Web services? Data sets with guarantees about how they’ll develop in the future?

How radically can we simplify the interface of something?

One of the most influential concepts in my career was that phrase a lot of us have heard about UNIX: everything is a file. Now, that’s a damn lie. There’s a lot of things in unix that aren’t files at all. IPC. System calls. Locks. Lots of things can be file descriptors, like sockets, but if you want to see more things shoehorned into that interface, you could go install Plan 9, but there’s not very much software out there for Plan 9.

Even so, UNIX took off in a huge way thanks to a bunch of factors, and even Plan 9 and Inferno and the systems derived from them have this really outsized longevity in our minds because of one thing: they simplified their interfaces. Radically.

They defined their interfaces so simply that you can sum them up in a few words.

“Text file. Delimited by colons.”

“Line delimited log entries.”

“Just a plain file, a sequence of bytes.”

These are super durable primitives. They had all their edge cases shaved off. No record sizing to write a file, no special knowledge of what bytes were allowed or not. Very few things imbued with special meaning.

This means these systems last because they give us building blocks to build better things out of.

I love to pick on unix because for all its ancient cruft there’s an elegant system inside. It’s not the only super simple interface that really took off either.

Chris Neukirchen made the Rack library for Ruby, and little did they know that it would suddenly get adopted by all the frameworks and all the servers, because at the core of it a web request got simplified down to a single function call: environment in; response code, headers, and body out. It was adapted from Python’s WSGI, but it was a great distillation of the concepts.

node modules also have this ridiculously simple interface. They get wrapped in a function with five parameters, and are provided a place to put their exports and a require function for their own use. It turned out to be a great thing to build even more complicated module systems out of.
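That wrapper looks like this – everything a module’s source does happens inside it:

(function (exports, require, module, __filename, __dirname) {
  // your module's source runs here, with those five names in scope
});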

node streams, too. By making them pretty generic, it turns out that thousands and thousands of packages all use the shared interface and all work together.

It’s really worthwhile asking yourself if there’s a radically simplified, generic interface that your module is begging for.

Thank you.

git commit messages

My current thoughts on commit messages.

First, we had change annotations, as descriptions of what changed:

fixed bug in display code

or

improved caching behavior for edge case

My first objection to these is that commits are not always past tense. In a world of CVS and Subversion, they are: reworking and recommitting things is far too much work. But this is git. Commits are not just a record of what we did; they are actual objects that we are going to talk about, they are proposals, and often they are speculative. git is an editor.

It doesn’t feel particularly natural to be more descriptive here because we’re basically adding labels to a timeline. If we do get descriptive here, it’ll be as sentence fragments awkwardly broken up into bullet lists at best, and talking more about what we did than why we did it. Let’s talk about them in the present tense:

fixes bug in display code

case where display list is null

or

improves caching behavior for edge case

sometimes we write the empty entry first

A step in the right direction. Those start looking like objects we are going to talk about. However, they don’t make a lot of sense without context. Commits come with only two pieces of context: their parent commit, and the tree state they refer to.

These messages assume context in a way that spelunking in the history later will not necessarily recover. fixes bug implies there was a bug to fix, but says little about it. We are still talking more about history than about what we changed. One has to compare the states before and after, and there’s not a lot of incentive in this format to go on and describe the bug. The context is assumed. In talking about these commits, we’d say things like “this commit deadbeef was the problem”. We don’t really refer to the commit so much as the state it brings, and even then only weakly, in the form of what’s different about that state from the previous one, not what it is.

We can describe a little more but we’re still describing what we’re doing and not the state of the world.

In a world where we may rebase them, move them around and combine them, something a little more durable needs to happen. Let’s treat commit titles as names.

fix for bug in display code

a replacement handler for case with empty display list causing corruption
of the viewport

or

improvement in caching behavior for edge case

a check to skip writing empty entries in the cache, preventing the case
where empty entries would be returned instead of a cache miss.

Now the description we’ve left out starts feeling obvious. Now I want to know more about this bug, I want to know more about the fix, and I want to know about this improvement. These are nouns, and we have a lot of language for describing nouns.

These make sense even if rebased, and if we were to read the source code associated with this change, we would find that this describes the code added and removed, not the change from some unknown previous state. We know almost everything about the contents of this commit without having to infer it from context, and discussing it as the actual code becomes much easier. Code reviews can be improved, and we can use these commit hashes (or URLs) as objects and refer to them meaningfully later: “This improvement was very good”, or “this improvement introduced a bug”.

Now we have objects to talk about, and detail about the state that differentiates it from other states, even without being directly attached to the history. With the need for context reduced, we can now use these commit messages in new contexts without rewording them. We add some tags with some machine-readable semantics: Tools like conventional-changelog-cli can generate change logs for summary to a user and semantic-release can bump version numbers in meaningful ways, dependent on the changes being released. We’ve pushed that decision out to the edges of the system, where all the context for doing it right lives. The result:

fix: bug in display code

a replacement handler for case with empty display list causing corruption
of the viewport.

and

fix: improvement in caching behavior for edge case

a check to skip writing empty entries in the cache, preventing the case
where empty entries would be returned instead of a cache miss.

BREAKING CHANGE: empty cache entries are not saved so negative caching
must be handled in another layer.

And in changelog format:

v2.0.0 (2016-04-16)

  • fix: bug in display code 886a50c
  • fix: improvement in caching behavior for edge case 9bce4c5

BREAKING CHANGE

  • empty cache entries are not saved so negative caching must be handled in another layer.

This is super useful, but I think the context-reducing style of commit message is a prerequisite for actually getting good change logs that make sense.

A side note: I think GitHub’s new squash-and-merge feature is going to be the perfect place for this style, since individual commits are often not quite the right granularity for tagging. The style notes here apply either way, but the tags, I think, are most useful on a merge-by-merge basis.

In the absence of squashing, a change to conventional-changelog that only looked at merge commits would be excellent, leaving the small state changes visible for code review, but the merges visible as external changes in the log.

Why MVC doesn't fit the browser

In part one of this series I talk about Why MVC doesn’t fit the web from the point of view of writing web services, in the vein of Ruby on Rails and Express. This time I’m continuing that rant aimed at the modern GUI: The Browser.

MVC originated from the same systems research that gave rise to Smalltalk, whose ideas were later imported into the Ruby and Objective-C we use today. The first mention of an MVC pattern that I’m aware of was part of the original specifications for the Dynabook – a vision that has still not been realized in full, but that laid out a fairly complete picture of what personal computing could look like: a system that any user can modify and adjust. The software industry owes a great deal to this visionary work, and many concepts we take for granted today, like object-oriented programming, came out of this research and proposal.

The biggest part of the organizational pattern is that the model is the ‘pure ideal’ of the thing at hand. One of the canonical examples is a CAD model for an engineering drawing: the model represents the part in terms of inches and parts and engineering terms, not pixels or voxels or other representations specific to display.

The View classes read that model and display it. Their major components are in terms of windows and displays and pixels, or whatever primitives are actually used to display the model. In that canonical CAD application, a view would be a rendered view, whether wire-frame or shaded, or a parts list displayed from that model.

The way the two talk is usually that the model emits an event saying that it changed, and the view re-reads and re-displays. This lives on today in systems like React, where the pure model, the ‘state’, when it updates, triggers the view to redraw. It’s a very good pattern, and the directed flow from model to view really helps keep the design of the system from turning into a synchronization problem.
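A tiny sketch of that flow, in plain node terms rather than any particular framework – the CAD-flavored names are just for illustration:

var util = require('util');
var EventEmitter = require('events').EventEmitter;

// the model: pure domain terms (inches, parts), no pixels; it announces changes
function PartModel() {
  EventEmitter.call(this);
  this.widthInches = 4;
}
util.inherits(PartModel, EventEmitter);
PartModel.prototype.setWidth = function (inches) {
  this.widthInches = inches;
  this.emit('change');
};

// the view: re-reads and re-displays whenever the model says it changed
function PartView(model) {
  model.on('change', function () {
    console.log('render part, ' + model.widthInches + ' inches wide');
  });
}

var part = new PartModel();
new PartView(part);
part.setWidth(6); // -> render part, 6 inches wide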

In a 1980s CAD app, you might have a command line that tells the model to add a part, or maybe a mouse operating some pretty limited widgets on screen, usually separate from the view window. Where there is interaction directly on the view, the controller might look up in the view what part of the model got clicked, but it’s a very thin interface.

That’s classic MVC.

To sum up: separate the model logic that operates in terms of the business domain, the actual point of the system, and don’t tie it to the specifics of the view system. This leaves you with a flexible design where adding features later that interpret that information differently is less difficult – imagine adding printing or pen plotting to that CAD application if it were stored only as render buffers!

Last, we come to controllers. Controllers are the trickiest part, because We Don’t Do That Anymore. There are vestigial bits of a pure controller in some web frameworks, and certainly inside the browser. Individual elements like an input or text area are the most recognizable. The model is a simple string: the contents of the field. The view is the binding to the display, the render buffers and text rendering; the controller is the input binding – while the field has focus, any keyboard input can be directed through something that is written much like a classic controller and updates the model at the position in the associated state. In systems dealing with detached, not-on-screen hardware input devices, there’s certainly a component that directs input into the system. We see this with game controllers, and even the virtual controllers on-screen on phones emulate this model, since the input is usually somewhat detached from the view.

In modern web frameworks, you’ll find a recognizable model in most if not all of them. Backbone did this with Backbone.Model, a structured base class to work from that is commonly mapped to a REST API. Angular does this with the service layer, a pretty structured approach to “model”. In a great many systems, the model is the ‘everything else’, the actual system that you’re building a view on top of.

Views are usually templates, but often have binding code that reads from the model, formats it, makes some DOM elements (using the template) and substitutes them in, or does virtual DOM update tricks like React does. Backbone.View is an actual class that can render templates or do any other DOM munging to display its model, and can bind to change events in a Backbone.Model; React components, too, are very much like the classic MVC View, in that they react to model or state updates to propagate their display out to the viewer.

The major difference from MVC comes in event handling. The DOM, in the large, is deeply unfriendly to the concept of a controller. We have a lot of systems that vaguely resemble one if you squint right: navigation control input and initial state from the URL in a router; key bindings often look a lot like a controller. To make a classic MVC Controller, though, input would have to be routed to a central component that then updates models and configures views; this split rarely exists cleanly in practice, and we end up with event handlers all directly modifying model properties, which reflect their state outward into views and templates.

We could wrap and layer things sufficiently to make such a system, but in the name of ideological purity we would have lost any simplicity our system had to begin with, and in the case of browsers and the web, we would be completely divorced from native browser behavior, reinventing everything and losing any ability to degrade gracefully without JavaScript.

We need – and have started to create – new patterns. Model-View-ViewModel, Flux, Redux, routers, and functional-reactive approaches are all great ways to consider structuring new applications. We’re deeply integrating interactivity, elements and controls are not just clickable and controllable with a keyboard, but with touch input, pen input, eye-tracking and gesture input. It’s time to keep a critical eye on the patterns we develop and continue to have the conversations about what patterns suit what applications.