High on my long list of things that make me feel completely insane is the fact that we have entered the era of technologies that we can plausibly call artificial intelligence and nobody seems to have anything that remotely resembles a plan — unless you consider the techlords’ schemes for planetary domination a plan, which I suppose they are. But I’m talking about a plan for coping with this technology on a civilizational level that makes life better and more democratic and less alienated, rather than infinitely worse and more dystopian and less free.
One symptom of the total capture of the state by evil capitalists is that even the concept of having a plan for this technology seems to be considered beyond the pale by our governments. It’s either that or the people running things literally just don’t understand what’s going on, which is I guess also possible because a significant proportion of these people are effectively walking corpses fuelled by pure hatred of life itself and propped up with just like piles of lobbyist cash. They would be hard pressed to rotate a PDF let alone imagine the consequences of a rapid technological transformation, especially if they had to imagine the impacts of this transformation on ordinary people, whom they regard as annoying bugs. Either way it’s incredibly concerning.
When I say a plan, I mean what policies are we envisioning to help us weather the impacts of AI? What do we want to use it for and not use it for? Who should be allowed to own it or control it? What limits should be set on its development and implementation? What laws should govern its use? What long-term vision do we have for its place in our societies and economies? Why is this not even a conversation?
This is even leaving aside the question of whether we want it at all. Personally I despise this shit and try to turn it off on every device and app I use (though this is becoming increasingly impossible). In general it creeps me out and on a spiritual level it strikes me as an escalation of already terrifying trends toward total disconnection from the real material world. This being said I am, as they say, a student of history, and I’ve noticed a thing or two, such as the fact that technologies, once invented, are essentially never put ‘back in the box’, except sometimes when there is catastrophic social collapse and people literally just forget how to do things because too many people die all at once.
So I might not like AI, but it exists, it has diffused everywhere, the technology behind it is well-understood, and there’s essentially no chance of it disappearing. Fine. I accept this, with bad grace. But then this still leaves me with the question of what the hell we are going to do about it. Because it seems obvious that there are enormous questions at stake here, and the gormless goblins running this continent appear to have no answers whatsoever.
What are some of these questions? First and foremost I think there’s the ‘simple’ matter of the social ramifications of this technology and the future iterations of it that are sure to follow. Currently AI is prone to a lot of fuckups and for many serious uses can’t be trusted not to hallucinate wildly. We’ve all heard of lawyers using it to write legal arguments and being outed when it comes to light that the AI made up whole cases or laws, for example. I’m told that AI is quite good at writing code, but I imagine you’d still want humans checking the code if you’re planning on using it for anything important, because the chances of it inventing functions or libraries that don’t exist are significant. When it comes to writing copy, AI can be surprisingly good at first glance, but again it has a tendency to insert random errors, and it also has a pretty unmistakable writing style that most educated people can recognize right away. But frankly it won’t stay this shitty forever. There are too many people being paid too much money to work on it. I don’t pretend to understand the technical side of this, but I do understand how historical trends work, and I know that soon they are going to marry the pattern-recognizing LLMs to other kinds of clever programs and algorithms, and in the context of the exponentially shrinking price of processing power, this will result in AIs that are much better at doing what we want them to do. This will not take long to happen. So what then?
Is there any way this could not result in extreme shocks to the economy? Won’t AI be widely applied to practically everything within like 20 years if it allows capitalists to get work done without workers? Is anybody proposing support for the people who will be directly put out of a job by AI, let alone measures to keep them from being put out of a job in the first place? Are we thinking about which jobs might be vulnerable to AI takeovers but absolutely should not be permitted to be taken over (teaching comes to mind)? Speaking of teaching, what about the effects of this technology on the way our societies work more broadly? The gibbering freaks of the Trump administration seem to think putting AI in all the schools is a great idea, but they don’t necessarily seem to understand what it is, and they’re also abolishing the Department of Education, so who the fuck knows. Personally I think it might be a good idea to think about this a little bit, and perhaps restrict the policy conversation to people who aren’t so fucking stupid and evil that they’re basically orcs.
Some people argue, possibly correctly, that it’s all hype, that AIs suck at most of the stuff they’re supposed to be doing, and that they’re not going to get better. I feel like it’s unlikely that they won’t improve, but even setting that aside, we do know for a fact that AI is quite good for one thing — making life more miserable for precarious gig workers, who make their living on apps run by various kinds of AI and are at an incredible disadvantage vis-à-vis these opaque processes running their working lives. Capitalism already has a lot of opaque processes running people’s lives, but the rise of AI gives the bosses yet another layer of plausible deniability and freedom from responsibility: we are doing it this way because the AI told us to, we can’t explain it because it’s just what the AI says works, no you can’t talk to a human being about it because there’s just the AI. This alone is cause for huge concern and for a carefully elaborated set of policies to protect people from this shit, but nobody is bothering — in this case presumably because the people ruling us think this is an excellent perk rather than a disaster in the making.
On a similar note, one of the most obvious applications for AI is surveilling and/or slaughtering surplus populations, a use which has been innovated to great international acclaim by the Israeli military. Lavender, their evil murderbot, is fed ‘intelligence’ and spits out names; the names are rubberstamped by officers, and the people to whom they belong are turned into mangled corpses, along with anybody nearby, such as small children, old ladies and starving parents looking for food. When it runs out of names they just adjust the search parameters and voilà, more names! Anybody running a murderous death cult would be thrilled to have such a machine at their disposal, and of course our governments are controlled by people very much interested in running murderous death cults. Frankly I would be shocked if the activists currently being disappeared by the secret police in the US weren’t being selected by a similar AI. Is this something we want? I mean, I know it’s something they want, but do we want this?
And fuck — what about the weird lonely people who are definitely going to fall in love with the chatbots? Granted maybe it’s not really my business, but also maybe it is, maybe it’s everybody’s business that we’ve created artifacts which can mimic human connection so well that dumb people with mental health problems begin to form emotional attachments to them. It’s all well and good to laugh and shake your head at some lady in Europe who insists that she’s marrying an airplane or something, but airplanes can’t hold a conversation. The religious conservatives have spent my whole lifetime turning blue at the thought of gay marriage devolving into mass zoophilic debauchery, but you know what’s a thousand zillion times more likely than everybody deciding they want to fuck their cats? Your fucked up teenage son deciding he’s in a relationship with an algorithm. Good luck sorting that shit out without universal mental healthcare. Talk about being far from God.
Even deeper are spiritual questions about what the point of being alive is, what the good life means, and how much we should allow machines to mimic life for us so that we don’t have to live it ourselves. But these kinds of questions are ones that our societies have become remarkably bad at answering, or even asking. The Christians are no help, for the most part; the farthest they get tends to be insisting that God gave them the Earth to build Chuck E. Cheeses on and that everyone should be maximally fucked up about sex and sexuality. The secular world is of course also really into Chuck E. Cheeses and sexual pathologies. There’s not a lot in the way of conversation about what the actual point of it all is, not that leaks into politics anyway, other than of course the far-right hyenas with blood dripping from their teeth yammering about how we should bring back executing people for disrespecting the flag or whatever, or quivering techlords high on designer drugs suggesting that maybe we should be sacrificing the human population of Earth so that we can colonize other planets and build Chuck E. Cheeses there.
But like: is it good to have tools that don’t just help us with our writing but actually come up with the words and speak for us? Does that bring us closer to the good life or further away from it? Does it make us more alienated or less to have our phones predict what we’re going to do next? Is it helpful for us, spiritually, to have apps that can generate pictures of ice-skating hippos to be used as emojis? Maybe this is less important to some people, but in some ways for me it’s central, or at least it’s part of a conversation that I wish were more central: the conversation about where we are actually going with all of this wealth and power and technology that our civilization has accumulated. What are we trying to do with it? Is the goal to build a prosperous, free and sustainable civilization living in harmony with the natural world, or is it just Chuck E. Cheeses all the way down?
This ‘conversation’ I keep talking about would require something resembling a democratic decision-making process around the future course of our cultures, and this is something we really don’t have. I dream of it though, and I hope that when we finally arrest the billionaires and dismantle their Saw movie of a society we will be able to build something that allows such a thing. China is not my ideal model of a socialist society but I have to say I do really deeply admire the way that the people in charge over there seem to be thinking a bit further ahead than like, 2028. I hope that one day we can build a people’s state here that has a similarly long view of history. In the meantime, we might need to start forcing these issues into the conversation a little bit, because if we don’t, the vampire class is going to have us all in thrall to psycho AI-run work apps, in love with chatbots, and surveilled by Israeli murder-systems very soon.