This is going to be a little bit oblique, because I view the world somewhat differently than many people: I take an ecological view, in systems and connections, not just rules. Feedback loops matter so very much, not just statistics. Statistics are a snapshot. When something disrupts an ecosystem, everything moves around. Some things die out. New things take over - sometimes things that were there before, but held back by something that’s now gone. Everything changes.
Everything is changing right now.
I believe that purist approaches to mitigating things are rarely useful. They are purely trying to return to a past that has already gone, or never existed in the first place. Our world is an ecology of ideas and actions and interconnected systems, and they can’t be spun backward to get to some more pure, earlier state. The real world is non-linear, full of feedback loops and pitfalls. Applying the brake as hard as you can won’t stop you when there’s a heavy engine and a million hands pushing you toward a cliff. You have to steer. And if things are really bad, you have to choose where to crash, because going off the cliff is the worst option.
We have to ask ourselves - and this will be ongoing - what regulates what’s going on?
Laws matter, but laws are only as powerful as our collective belief in their working. It would be nice to live in a world of laws again, assuming the laws are vaguely just.
Just as much, the AI boosters and techno-optimists are wrong: progress is written in blood, usually that of the outclasses. There is no flywheel of progress that leads to unambiguously better outcomes for everyone. It looks like one because there are 8.3 billion of us, largely trying to steer our minute little bits of the world toward something better. Sometimes we even have success. It’s rather easy to look back and say “Well that was easy! We were always headed to where we ended up.”
When the year 2000 was looming, a low-key panic set in among certain groups of people: those distrustful of computer technology and its wielders, and infrastructure planners who understood just how fragile things can be, and how every sufficiently complex system rapidly heads for uncharted territory and untried configurations. A princely sum was doled out to consulting firm after consulting firm, which pored over often now-obscure legacy systems and patched in hacks to deal with the move from two digits to four, or from 99 to 00. Any given case was pretty simple, but the sum of embedded understanding of dates and times made for quite an undertaking. A lot of code was fixed. Some of it was, to be certain, critical. The work largely got done, and when the clock rolled over, a few people breathed a sigh of relief, a few more added another bit of justification for their distrust of computing technology, and the rest of the world, if they noticed at all, noticed the power was still on and that the party could continue. The millennium (or at least its penultimate year) came to a close with little fanfare, the efforts rewarded only quietly and, like all avoided catastrophes, with some complaints about the size of the bill, because nothing happened and it really wasn’t such a big deal after all, was it? The retrospective view makes it so easy to ignore the actual toil it takes to make the world work.
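One common flavor of those patched-in hacks was “windowing”: rather than widening storage to four digits, the fix guesses the century from a pivot year. A minimal sketch (the function name and pivot of 50 are illustrative choices, not any specific system’s code) might look like:

```python
# Sketch of the Y2K "windowing" hack: two-digit years below a pivot
# are assumed to be in the 2000s, the rest in the 1900s.
# The pivot value of 50 here is an illustrative assumption.

PIVOT = 50

def expand_year(two_digit_year: int) -> int:
    """Map a two-digit year to a four-digit year using a fixed window."""
    if not 0 <= two_digit_year <= 99:
        raise ValueError("expected a two-digit year")
    century = 2000 if two_digit_year < PIVOT else 1900
    return century + two_digit_year

print(expand_year(99))  # 1999
print(expand_year(0))   # 2000
```

It is simple, and it buys decades rather than fixing the problem: each windowed system just schedules a smaller Y2K for whenever its pivot comes around.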
Of Centaurs and Centipedes
There is a term from automated chess-playing systems: the “human centaur”. It refers to a person in control of playing the game, using computer assistance to play. That was imported into automation theory, meaning a system where a human is in a position of authority over and management of underlying automated processes. Cory Doctorow helped bring it from automation theory into talking specifically about AI systems, and in particular calls out the reverse arrangement: humans as subjects of an inhuman and inhumane system that needs our fragile meat-fingers to do something in the real world for it.
I think in the cases where it’s not our physicality that’s wanted but our attention, there’s a very real risk of being hooked up to the metaphorical slop in an arrangement that resembles a partially-automated human centipede: a dehumanizing flood of low-value knowledge work. We know this happens because it already happens. Amazon’s “Mechanical Turk” has been doing this for ages. The “AI” boom is enabled by the collective trauma of Kenyan content moderators and image labelers. Almost all of this work is gig work. Almost all of it is precarious. Almost all of it is done by outclasses distinguished by skin color and location, accidents of their situation, and the remnants of the last century’s colonial force leaving a hard-to-change pattern in our world.
Charlie Stross said “Corporations are just slow AI”, and I largely stand by this: corporations are systems built to resist the will of their components (people!), full of controls and corrective systems that produce superhuman scale and often inhuman, even inhumane, behavior. A person can’t move a million barrels of oil per day around the world, but a corporation can. It is capable of intelligence tasks a single person is not. It is a systemic harness for teamwork toward business goals, often despite what the members of those teams actually want. We’ve been reverse-centaured before, to the point of being ground to fine paste, and we will be again. This is certain. Our job is to mitigate that as best we can. We’re all doing our best, but it’s the coherent action of all of us that steers this ship.
The Dictation of Economic Forces
The economy, whether local, national or global, is a system like any other: in its normal operation, some parts are broken and most parts are working and working dominates the system. The broken stuff gets fixed mostly. It’s not perfectly efficient but it works.
Like any system, it can be perturbed into degenerate behavior. Wars can do this, bubbles can do this, exhausting resources can do this.
It’s really hard to tell this apart from a technological shift while it’s happening.
A thing you learn early in systems theory is that feedback loops dominate behavior. The orphan crushing machine is made of economic feedback loops, and turning it off requires political power sufficient to alter the system, or to dismantle its feedback loops into something better. Usually the latter is what happens, because building political consensus is very hard.
You can’t un-create technology. Genies won’t go back in the bottle.
All this is to say that resisting AI by pulling the reins in a general “AI is bad” way will in no way stop it; it won’t even slow it down. This is not a meaningful way to change anything. Jumping in front of the orphan-crushing machine only adds a little adult human to the slurry.
Anything that makes us more productive is a competitive advantage, and in a system that allows winners to take all, we can follow suit or not. The feedback loops are swift and often decisive: adopt or die. While productivity is not a single dimension but complex, it is very easy to end up out-competed on a meaningful axis and lose in the marketplace. Would that we could restrain this system - we have in the past, but today our protections are outdated, weak, or absent. Strong antitrust law and imposing costs on players that get too big would do our world a heap of good. That is not, however, the world that many of us live in. Regulation and governance systems have not caught up to the invention of the multinational corporation, nor the well-funded agile disruptor. We must plan accordingly and work to build and rebuild those capabilities. We have to talk openly about all of this, too, because factionalizing yields a powerless group with all the awareness of the harms, and a powerful group in a situation where the winner can take all. If we work together we might just improve it. It won’t be easy.
That is not, however, to say that we are powerless.
The Crossroads
We stand at a crossroads. (We always stand at a crossroads.)
We can get in the driver’s seat and steer or we can ride in the back.
To go back to the story Manna - Two Views of Humanity’s Future: Marshall Brain lays out the choice that is before us, the choice that is always before us. At every moment, we are choosing between these futures.
Will this technology put us in the centipedal position of consumers, subject to the whims of fascists at worst and technocrats who can only look at us as demographic statistics at best? Or will we take the reins and make sure that we are in a position to choose better?
In software work, right now, the sea-change has come.
Despite what widely cited but rarely read studies have said, productivity in many senses has gone up. This is not a linear process where making something more capable or efficient makes everything unambiguously better. New strains in all of our workplace systems have arrived with it. This is a moment of immense disruption.
Yes, the rush of playing with new technology has made many overestimate their real productivity gains. Yes, it’s actually hard to measure software teams’ outcomes. We do, however, have some figures coming out of the noise. It’s hard to argue with a startup generating a working, capable prototype exercising their core idea every week to find product-market fit - and this is an everyday normal, not an outlier.
It’s hard to argue with people suddenly able to understand and summarize the problems in a large and complex legacy codebase in minutes. It’s hard to argue with consultants able to much more reliably generate estimates grounded in real facts for their work. In all the places it is possible to measure a definite outcome, real gains have happened.
This is not to say even remotely that this is unambiguously good. Pursuing bad goals faster is in fact not a good thing.
We need to provide the goals.
Is the world one where we’re building systems to support abstract management goals, with no regard for the people in the system? Is it one where “line go up” is the only place value gets captured?
Or is this a world where accessibility for websites is not just a thing we constantly have to remind people about, but a thing that happens nearly automatically, because it can be embedded in every prompt for the creation of software? Can we capture as much of this energy to benefit all of us as we can?
Competitive effects are still in play, but the cost of getting beyond serving the same 80% of people over and over has gone down. We can ask “what about X?” at a bunch of new places in the process.
The cost of reworking a plan to synthesize new ideas has gone down, dramatically. And if we’re the ones driving, we get to embed this, deeply.
Of course, the clay keeps growing.
Structures, high and wide
I’ve long had a weird relationship with the free software movement, because I’ve long believed it got caught up in its copyright-system hackery and stopped seeing the forest for the trees. There’s a reason the movement was coopted into free inputs for corporate development and mainstream foundations for hierarchy, rather than liberatory structures.
I’ve always released most of my work under very permissive licenses, not because I agree with the “open source” movement’s sanitized goals, replacing protected freedoms with “anything allowed” openness, but because I think that the way to make liberatory software is structural, rather than tactical. Making something strong copyleft, like using the GPL, is a tactical decision: usually it’s to keep the software out of certain hands, and even earlier, it was more liberatory, keeping control from being centralized. The ability to shift the locus of control of the software has always been a two-edged sword. Computing has changed shape significantly since the days of mainframes and recalcitrant printers. Not that it doesn’t rhyme, but the ways we locate control are different.
On the flip side, indie web experiments and some of the more fringe of the peer to peer software world have made strides toward collective ownership and development of important systems without getting coopted, not by copyright tactics but by structuring the software to communicate in collective-oriented ways. Nobody wants to steal your indieweb app for corporate uses because it is not built to support corporate goals, and never will be.
And the synthesis shows up in software like Tailscale, which to be clear is owned by a corporation (though as far as I can tell, one that’s behaving pretty nicely), and is friendly to open components that repurpose their work for novel uses. In addition, it’s spawned a massive growth of home-scale technologists building peer-to-peer home labs, software at personal scale that can still interoperate with the larger Internet, can participate in the world at large. Not all liberation is rebellion.
What we build matters: everything that exists, supporting and supported by other things in a system, starts forming the infrastructure for what comes next. The shape of it is remarkably persistent, because to change it you have to either let what depends on it fall with it, or account for it. There is a reason why Roman roads are now often motorways: it was far easier to build in place than to build something of an entirely different shape. What we build now, and make durable and connected, will last.
The Soup
When we start working with software for LLM-based systems, we find that the ecosystem is a clash and synthesis of a bunch of disparate and often uncomfortable groups. Cryptocurrency anarcho-capitalists, intent on making the world into petty fiefdoms and early-bird-gets-the-rents structures, have reset their sights on some parts of the AI world. Their automated systems inflict code-enforced security regimes and abuses on the world, reflecting the shape of their beliefs: that if you are suffering in the system, it’s your own fault for not maximizing your gains and abusing the system like they do.
Fascists are absolutely here, vying for the power that manipulating media gives them and funnelling as much as they can into the misinformation machine, using the asymmetry of information warfare to distribute bullshit faster than democratic systems of verification, deliberation and accountability can correct it. They will be present in the glowing images and tasteless design of so much.
Technology is an amplifier for human ability. This is not a positive statement, nor negative, but a simple fact. This can be liberatory or this can serve a narrative of an increasingly resentful and demanding fascist populism. Fascism has always sought to capture technology and its processes and turn them to war and oppression.
Maybe most of all you will find technocrats. There is a deep and horrifying confusion of metrics for measures pervasive in the space: “evals” and “benchmarks” giving numeric quantities to qualities, stripping away context and only looking at numbers. They are to be resisted. Not to say all that work is useless, numbers usually indicate something. But the monomaniacal focus on numbers yields a complete ignorance of what cannot be easily measured. So too is talk of “alignment”: as if you can just tune a model on enough dimensions to be “correct”, as if alignment isn’t alignment with the ever-shifting and self-contradictory nature of human goals. I do not trust anyone who takes the “alignment” framing for “AI safety” seriously without critique.
The impact on work is where things are most immediately felt. In the Anglosphere, we have the pernicious, grim-hearted Calvinist views embedded in a lot of our culture: Any joy in relaxation is to be harnessed back into work until we are grimly productive. Idle hands are thought sinful, and this too is present in the moment.
There may be an economic pothole when one of the American AI vendors collapses. The financing of the big two, particularly OpenAI, is very strange. The deals are circular, there is certainly some part of all of this that is a bubble, and what gets caught in the blast when the music stops is anyone’s guess. However, there is no going back to a world before the invention of the LLM. On hardware that an average programmer might have in their house, it is now possible to run a useful model locally, if painfully slowly. It’s not going to change lives in that form, but the very normal processes of technological improvement will make that permanently true. You can run this in your living room, if a little fan noise doesn’t bother you.
A bright thing in the soup is the theory of constraints. Creating source code in particular is much, much cheaper than it ever was. This does not mean software development necessarily goes faster: it means we find the next bottleneck.
That bottleneck is now human understanding and goal-deciding. That’s where we, humanity, come in. The systems cannot in fact take off without us entirely. Only if we refuse to steer will it careen over the cliff.
Where do we go from here?
We can do better than nothing at all. We can steer the chaotic change. Mandates to “Use AI” are everywhere in the corporate world right now. Where programmers have often but not universally adopted tools enthusiastically, not many domains are as well suited. The further from the core of easily-checked work of programming and mathematics and into the messy world of relationships and context and embedded and embodied process, the worse “AI” tools perform by default. Mandates rarely capture true value, because they are anathema to the care required to adopt these tools well. This means there is waste we can hide good things in.
We can entreat our bosses to remember that glue work is not easily replaced, and that we have no excuse not to do the important work; we can organize; and we can choose where to spend tokens, and often which models to use them on.
If you’re mandated to use AI tools at work, which is better? Burning a million tokens on Claude Opus 4.7, creating a deluge of source code to inflict on your coworkers that they cannot meaningfully review and only stamp “Looks good to me” or reject? Or is it better to pick a small model and a small but important task that wasn’t going to get done anyway and do it, cheaply and quietly?
We can use LLMs to mitigate other social harms. Not all of them, of course, but we can direct where some of this attention goes.
“I don’t have time for accessibility” is not true anymore. “I don’t have time to write tests” is not true. “I don’t have time to write documentation” is not just untrue; your job now is to mold the documentation into something actually useful, carving away and humanizing rather than adding to it. “I don’t have time to consider the impact on any of the disadvantaged groups I know about” is also now patently untrue: it’s five words in a prompt and then dealing with what you learn. It won’t always be right, and there is bias out there, but the systemic excuse of not having time is no longer true.
We can refuse to use LLMs to communicate with each other. Every email, every word of this essay, every text message I send is hand-created, and I urge you to pledge the same. I do not use image generators; I want no part in generated videos. They must be soundly rejected. Where software has some artistry at times, usually in abstract ways, the object we create is first and foremost understanding. It is natural to use models of the world in making models. Art, however, is communication, and communication is humanity. When we delegate the very things that let us relate, that is hostile to the very nature of being human.
I think there is one important place where we can, and in fact should, use these tools, with care and attention and with as much honesty as we can muster about its sharp edges and possible harms. We can now, very very simply, communicate with people who do not share a language with us. Imperfectly. Mediated by a machine that does not grasp the nuances of language and our context. However, machine translation enables an entirely new kind and scale of human connection across language boundaries. A particularly clear example: during the first TikTok ban in the US, a bunch of Americans fled to 小红书 (Xiaohongshu) for their TikTok-shaped video fix, and were greeted first by a few Chinese speakers who also spoke English, but then by a hastily created, yet honestly impressive, bidirectional machine translation system that enabled a truly fascinating and wonderful exchange between two cultures usually isolated not just by the Great Firewall but by a language barrier. That system is still in place and you can try it yourself. We can learn a lot from this. I may go into detail about the pitfalls and wonders this can enable, but not in this already very long essay.
We can refuse to generate the cute image for a slide at work and remember and learn how to be funny or droll or apt or illustrative on our own. We can hire artists to make us an image. We can share our as-yet amateur attempts. We can hire others to do it too, when it matters.
It is possible to create artistic meaning using AI tools. However, the path from there to delegating our humanity is painfully short. I am quite comfortable rounding this to a simple idea: do not. We must protect our ability both to make and to sustain art, and generated images and communication are perhaps an even larger threat than corporate contentization and eternal copyright ever were. We should not live to work, but when we work to live, we must have something to live for. This is it.
We must become acutely sensitive to anything that interposes itself in our human drive to communicate and connect with each other. This is not just about “AI”, though that’s particularly salient now; all the interposing between us that companies do should be soundly rejected. Personalized feeds, algorithmic curation. I even loathe recommender algorithms, and suggest you do too. I find no joy in a computer telling me I may also like something, given my taste. But from people who see me for who I am? That’s gold.
In the end, we have the choice to make. Every time someone suggests something to deskill, destroy knowledge, or control communication, we should use the tools at hand to bend that to our collective benefit instead. Every time we use one of these tools, we must ask ourselves what human relationships it is replacing, and when the answer is uncomfortable, we should seek to strengthen those relationships instead. Now more than ever we need to find collective sense-making, which requires a new openness and vulnerability in talking to each other. This will not be easy.
Let’s get to work.