The Way to Beat a Crisis is to Stop Needing It: The Unexamined Problem of Need
Ascesis is Your Friend, Your Muse, Your Help
To paraphrase Mr. Miyagi, when it comes to the crisis of AI, the best way to avoid a punch is not to be there. Some reading notes.
The Inevitability Machine Strikes Again
One of the most annoying tendencies that most writing about any current trend succumbs to is what historians describe as the Whiggish tendency of history. Dissected by the historian Herbert Butterfield in his The Whig Interpretation of History, the tendency runs something like this: “the past was this terrible, benighted kind of era which was always and ever moving toward a much better glorious era characterized by social and historical movements like X.”
This, of course, is rarely how history works. It always feels inevitable in reverse that certain courses of action were going to result in certain outcomes: the Nazis were always going to emerge because of social conditions X, Y, and Z. But then, things like a novel coronavirus sweep through the world and upset pretty much any thesis about how the world’s story was going to go, particularly because what happens next feels utterly unrelated to what came before it, even if many of the ongoing features of how we made it through crisis X are still present.1
Where the Whig version of history predominantly shows up is not in the inevitability of success, but in the inevitability of disaster, in various forms. Because of the nature of the claim here, I’m going to limit this to the recent spate of articles proclaiming that Artificial Intelligence is now an inevitable game-changer with respect to our world.2 For the most part, I think that Freddie deBoer is right about these things:
You do not, in fact, live in the most important era of human history. You have not been lucky enough to occupy some sort of liminal period for our species. But you have a consciousness system that compels you to think of yourself as uniquely special and thus begs you to believe that you live in special times. The idea that you are somehow not important, the notion that the universe had no special responsibility to produce you, is in a very deep sense unthinkable to you. Now a new technology has emerged, and those who stand to make billions off of it are telling you: you will never be lonely again; the meaning you’ve always pined for will be provided for you by superintelligent beings; you will not die, but have eternal life. Or, alternatively, you are soon to witness the end of the world, which will free you from everything you don’t like about your life and yourself. Either way - people are telling you that something very, very important is happening, and right now is important, and you live now, so you’re important, and you want to believe, have to believe, are desperate to believe. And so you do believe, even though it isn’t true.
AI, for what it’s worth, is relatively easy on the scale of globe-changing disasters: it can be unplugged. To refuse to engage in deep and intensive ways online is certainly possible, and will require certain sacrifices, and probably some labor-intensive workarounds. And it is this that Whig-disaster-machines misunderstand:
A new world is possible, but it will require the death of the old one.
In my university, there is (rightly) a good bit of wringing of hands about the emergence of ChatGPT as an entity that will destroy a lot of the more mindless assignments that professors love: discussion-board posts, reader-response essays, book reviews. But good riddance: that it would birth a world which would put to death our emaciated thinking is not a bad thing. An uncomfortable thing, but not a bad one.
The reasons for the full-scale adoption of AI are complex, with some pushing for it as a way to work out large problems at scale, and others pushing for it as a way of simplifying mundane tasks no one wants to do anyway. And there’s no way of knowing whether the usage of AI will tend more toward its most benevolent and restrained usages, or toward its most banal and even reality-questioning uses.3
The Discretion of Need: Why Ascesis Makes Us Human
The inevitability machine gets the most mileage out of the proposition that our needs are more complex, and thus, our technologies must keep up with that level of complexity. Beloved, this is the same argument that was used to maintain chattel slavery in the United States, and to continue to bolster the coal industry, despite all evidence that these were world-destroying technologies. The problem, as Ivan Illich puts it, is a perception of need:
There is no simple technological imperative which requires that ball bearings be used in motorized vehicles or that electronics be used to control the brain. The institutions of high-speed traffic and of mental health are not the necessary result of ball bearings or electronics. Their functions are determined by the needs they are supposed to serve—needs that are overwhelmingly imputed and reinforced by disabling professions. This is a point that the young Turks in the professions seem to overlook when they justify their institutional allegiance by presenting themselves as publicly appointed ministers of technological progress that must be domesticated.4
The “disabling professions” are those built on the premise that people don’t need to have skills: professions which don’t just do a task more quickly or excellently, but actively prevent ordinary folks from attempting analogous tasks themselves. The technologies for these now-defunct skills multiply because of our assumption that only experts can try their hand at such-and-such. And so needs, he says, become industrialized, baked so deeply into the fabric of our lives that we cannot imagine doing without not only the solution, but the need itself.
The heart of the inevitability engine, thus, is not progress, but unexamined need: the constant assumption that what is provided is not only more expedient, but necessary for flourishing. Here, the uncareful language of “flourishing”—the notion that ethics aims at the full growth of the human life—is partially to blame. For, as Stephen Meaward’s new work reminds us, sometimes struggle is part of what makes the virtues tick: what the ancients called askesis—a self-denial, in fasting or modesty or abstention—is what helps us to flourish. For flourishing is not to have all possible desires met: flourishing is to be able to distinguish between true need and falsely inscribed need, and to say no to the false ones.
This weekend, we took the kids to their first major league baseball game. I had printed off the tickets, and on a whim, downloaded the app5 with the tickets on it. At the front gate, I was told that only the app would work. The subtle insinuation, of course, is that a smart phone is not only desirable for modern life, but now, at the door of the stadium, necessary.
And somehow, against all odds, yesterday, the kids gathered at dusty backstops with their respective teams, with gathered equipment and homemade pitching mounds, and practiced their pitching and hitting without the intervention of an app.
Reading: Going back through the Howard Thurman corpus, and making fast progress on the reader. Like anyone who writes for fifty years, he revises and repeats himself, but there are some real stellar works that I want to sit with for some time to come. The starter kit here is Jesus and the Disinherited, The Luminous Darkness, and The Search for Common Ground. Out of these three works pretty much everything else spins, I think.
There are always threads that come through the knothole of history: COVID, for example, didn’t upset the general trend toward utilitarian decision-making. If anything, it probably made it worse, and more likely that societies will function that way. When you have a major health crisis that forces entire communities to start thinking in terms of what is good for all persons, that imaginative frame is hard to unwind.
Utilitarian decision-making emphasizes decision making based on probable positive outcomes in order to maximize social happiness. There are varieties here: some versions emphasize the greatest happiness for the greatest number of people, while some emphasize a trickle-down effect of happiness which focuses on the good of a few who will have downstream benefits for a greater number. It proposes to solve the tricky problem of how you make decisions for a large group of people, but tends to operate by maximizing one version of happiness over all other possible versions: we’ll buy ice cream for everyone, and if you didn’t want to go get ice cream, then you can have whatever soda the store sells and just be happy about it.
To be clear, I do think that there are many things that the rise of AI is exposing about what we’ve assumed to be human capacities, and not all of those exposures are bad. If AI has exposed the fact that many of our well-paid jobs are basically made up, and replaceable by a self-teaching algorithm, it’s probably okay for us to recover more humane forms of work.
Do you really want to put money on whether or not humanity will, by and large, make use of a world-altering technology in prudent and beneficent ways?
“Useful Unemployment and Its Professional Enemies”, in Toward A History of Needs, 48. That this was written 45 years ago doesn’t make it any less true.
Because there’s always an app.