I have read a great deal about so-called “artificial intelligence” (I recommend Emily Bender’s The AI Con, for instance) and could write a dispassionate analysis of the pros and cons of large language models and all the other kinds of things that are called AI. But I won’t. I’m going to talk about the personal reactions I’ve been having.
I grew up reading classic science fiction (starting when I was eight, with my uncle’s collection in my grandmother’s house), and became quite an aficionado by the time I was in my teens. I had a collection of (alphabetized, much re-read) titles by the great masters and the new wave of writers. Thoughtful robots and intelligent computers were a big feature, at a time when in real life (the 1960s) computers were immense, persnickety things. Protagonists could talk to machines and get interesting answers, and even emotional responses, and the authors of those books were surprisingly prescient about some aspects of the relationship between human beings and their devices.
The Sturgeon story “Killdozer!” comes to mind, though as I recall that bulldozer was possessed by an alien intelligence before it started killing things, and then there were the Asimov robot stories, and the Karel Čapek play that gave robots their name in the first place.
Most of those authors understood the appeal of having something you could talk to that wasn’t human and would give you dispassionate answers and constant support, though.
I don’t read much science fiction these days, because I live in the future and it’s all moot, though I was recently working my way through Charles Stross’s fantasies, which combine evil demon overlords with modern-day bureaucracy in a chilling manner, dealing with the same kinds of worries as science fiction in a slightly different genre.
I was an early adopter of personal computing. I have an iPhone, an Apple Watch, an iPad, and a Mac, and I use all kinds of technology with great enthusiasm. And I can see a lot of utility in computerized systems for things like indexing, organizing, and summarizing large bodies of information. So why am I digging my heels in about the current wholehearted corporate enthusiasm for artificial intelligence?
It might be the same reason I no longer own a television or keep a Facebook or Twitter account. Maybe it’s because I’ve seen the Overton window shift drastically in one decade, only to be repudiated in the next. Maybe it’s just that I’m contrary. All of those are very good reasons.
Or maybe it’s because a number of things in my life have taught me to be deeply suspicious of anything that is unreservedly supported by capitalism. Corporations are not diving into AI because they think it will improve the world; they’re doing it because they think they can get rid of employees and hire them back at lower salaries, meanwhile making a lot of money, often by manipulating the market. “Move fast and break things” is a philosophy that is still hanging in there, even though it’s a credo of burglary and riot more than an ethic. But that’s not the whole story.
No, I think it’s mainly my knowledge that human beings are complicated, highly social primates with brains that don’t work like computers, and though in some ways we are intelligent, in most ways we are not rational at all. Though we are often deeply mistrustful, it’s usually mistrust of the wrong things and the wrong people.
The fear of “stranger danger,” for instance, is bullshit. Children are abused by people close to them, in families, neighborhoods, and schools. But no, we fear strangers instead. I’m not going into the tiresome evolutionary-psychology explanations for that, because those explanations may also be bullshit, but the fact remains that we are afraid of the wrong things all the time.
But the most pervasive irrationality is that human beings attribute personality to things. Combine the plausibility of large language models and other forms of AI with a baked-in trust in technology and human irrationality, and we start treating a technology as if it were wise. No, it’s not even sentient, not yet, and it’s certainly not wise. Nor are human beings, for that matter.
Trust in the wisdom of AI means attorneys submit pleadings with fabricated citations. Writers produce “books” that are generated out of cobwebs and all the other digitized books in the world. People ask for relationship advice from something that produces a weird sort of stochastic sludge, one that can be shaped like clay into anything the questioner finds acceptable.
It’s not that I think human beings are any better at producing books, legal pleadings, artwork, or therapeutic counseling.
It’s that we seem to think it’s all right to replace human beings with something else.
I’m not sure what the point of existence is, if we are willing to hand all our labor and our social lives over to something that is not us. Human beings like work, and if you take away all the horrible injustices of labor, you find that we will start gardening, doing woodwork, writing novels, and solving all kinds of puzzles with great intensity. We will start businesses. We will have families. What are we supposed to do if not that?
I get that it would be nice not to have to go through all the pains of living. It would have been nice if I didn’t have to be the local daughter for my afflicted mother, or see my husband through his long and painful death from cancer. It would have been pleasant not to have had to figure out, over and over across decades of teaching and many failures, how to be the best teacher I could be.
I could just drift, if AI took over.
But will AI watch television for me? Go to the gym for me? Raise my grandchild? Graduate from college for me? Or for everyone else? And why would we want it to be that way?
We are social by nature, because we are primates, but we’re bad at it. Our interactions with other people are intense, flawed, and complex. If you replace our irritating families, our infuriating coworkers, and our fellow passengers on the bus with a fabricated entity that goes along with us all the time, I can see the appeal but I am not sure what the point is.
All I know is that while I was typing this, with my fast fingers on my quiet little keyboard, autocorrect (a primitive form of statistical inference) kept persistently trying to make me say something else entirely, and it kept being wrong. I had something I wanted to say, and it wasn’t what my (non-sentient, statistically-driven) computer kept suggesting.
I don’t know that I said what I set out to, but I had the opportunity to think and I enjoyed the heck out of it. Perhaps I am incorrect in some of what I said, but I prefer to make my own mistakes. Mistakes are part of being human. I just wish we didn’t make so many of them, so persistently. And unreflective dependence on AI would be one heck of a mistake.