One of my more dubious longstanding habits is that when I have difficulty making a minor choice, I flip a coin. Now that I don’t often carry change on me anymore, I use a coin-flipping app for the same purpose. Go to a movie today? Heads for yes, tails for no. Heads for Medjool dates, tails for a ginger cookie; heads for turning left, tails for turning right. Easy.
The person who introduced me to the idea a long time ago, a confident and charismatic guy named Bob, said it was a good way of letting go of attachment to outcomes, and also, a good way of understanding that often, our choices don’t matter; what matters is making a choice.
The idea was appealing, even if Bob was otherwise unreliable, because though I have no problem with big choices, I tend to be anxious about trivial ones.
However, I noticed two problems as a result.
The first problem was that it became harder to make even the most trivial choices without resorting to the coin, because making decisions is a habit that atrophies if you don’t use it.
The second problem was that because I am human, I began to attribute intention and intelligence to the coin, even though a coin couldn’t possibly have a brain. If I am asking the coin a question, and it answers, it must represent an intelligence of some sort.
For example, I didn’t go see the Superman movie today, because the coin told me not to; therefore there must be some reason I was not meant to go see Superman. I wish I weren’t telling the truth.
It’s a dumb habit, and I stop doing it for long periods, but I have to admit that even though I would claim to be a dedicated rationalist, my brain is human, and therefore superstitious in the extreme.
I have been reading lately that more and more people are experiencing psychosis as a result of interacting with various forms of generative AI, including, lately, the last person you might expect: a venture capitalist. Perhaps I am being naive, but I would expect a venture capitalist to be a bit of a nihilist about anything except making money.
Just to be alive requires believing in something that isn’t present, though. For instance, in order for me to write this, I have to believe there are actually people elsewhere who somehow can read a version of my words, and that those people are something like me and not green things with teeth in their eyes.
Thus, I can understand the appeal of generative AI in terms of my coin-flips. As with the coin, you can ask ChatGPT a question, and it will give you an answer, but it’s not a binary yes/no, heads/tails answer. It’s a well-developed and targeted answer (even if drawn from a wide trawl of the universe of utterances, statistically generated in response to a prompt) that is designed to appear not only plausible but personalized.
It’s a small step from believing an object is intelligent to believing it is oracular, especially if you insist on asking it questions it could not possibly answer, but which it does answer.
It’s as if my coin, landing in my palm, displayed not Lincoln’s profile, but a glowing sign that says, “No. Do not go see Superman, not if you value your life. Believe me, I know what I’m talking about.”
I have to wrap up this post now, because I have a few things I have to do: delete my coin-flipping app, donate my loose change to the nearest panhandler, and go see Superman.