Transcending ourselves in the age of AI
Religion, science, mindfulness, art: whatever you make, there you are
Last week, I wrote about completionist tendencies, more or less an attempted exorcism of the guilt that compels me to finish books that I’m just not clicking with for whatever reason. And shows. And other, much more consequential things.
This week, I want to use as a springboard a book I recently DID very much enjoy and was sad to complete: Meghan O’Gieblyn’s God, Human, Animal, Machine. I’ve struggled to explain exactly what it’s about every time I’ve mentioned to someone that I liked it, but fundamentally I think it’s concerned with the nature of consciousness and existence, especially in light of a rise in artificial intelligence whose workings we don’t fully understand. Yes, everyone’s favorite large language model makes an appearance at one point. But the book was published over a year before ChatGPT was released, so in a way it’s completely insulated from the mainstream hype machine that’s spun up over the last 5 months.
As someone who’s gotten back into meditation over the last year through Waking Up and the Sam Harris school of self-as-illusion, I’m probably the perfect audience for O’Gieblyn’s book. Now, before you roll your eyes and brace yourself for techbro preaching about mindfulness and the illusory self—unfortunately I can’t get around the fact that I work in tech, I’m a dude, and I’m Posting Ideas On Substack—let me assure you that’s not my goal here. All I’m interested in doing is getting into some existential stuff.
O’Gieblyn was raised in a fundamentalist Christian home, but lost her faith as an adult. Given that organized religion is one of the most ingenious tools humanity has ever developed to combat the creeping suspicion that nothing matters, I find her perspective to be a pretty fascinating one to follow into an exploration of the mystery of existence. For starters, she doesn’t pull punches when talking about the way core religious ideas—God’s creation of the Earth, the gates of heaven, etc.—have human fingerprints all over them, not some higher being’s.
There is evidently no end to our solipsism. So deep is our self-regard that we projected our image onto the blank vault of heaven and called it divine…For centuries we said we were made in God’s image, when in truth we made him in ours.
Damn. That’s the most lowercase-h him I’ve ever read. But to me, O’Gieblyn doesn’t sound mad at her religious upbringing, betrayed by the realization that the world is neither what it seems nor what the Bible says it is. On the contrary, she seems to be somewhat…wistful about religion, and doesn’t elevate science’s attempts to make sense of human life and consciousness over it in the slightest. She references the Archimedean point throughout her book, a hypothetical “outside” perspective where objective truth sits—e.g. a view of time from outside time or a view of humanity from outside the human perspective. God’s POV, as it were. It’s an enticing idea. If only we could ever reach it in reality. But whether we reach for it through religion or through science (she puts the two on equal footing), the truth is that humans can’t escape the human perspective. As O’Gieblyn writes:
It was Max Planck, the physicist who struggled more than any other pioneer of quantum theory to accept the loss of a purely objective worldview, who acknowledged that the central problems of physics have always been reflexive. “Science cannot solve the ultimate mystery of nature,” he wrote in 1932. “And that is because, in the last analysis, we ourselves are part of nature and therefore part of the mystery that we are trying to solve.”
Back to meditation for a minute. While a lot of the theory filling up 10-minute daily sessions via your Calms and Headspaces and Waking Ups derives from Buddhist ideas, these apps for the most part offer a fairly secular form of mindfulness and spirituality. With overtly religious details filtered out, we’re left with the tools to simply pay closer attention to sense experiences and eventually see thoughts and feelings as temporary appearances in consciousness on the same level as a breath, a sound, a cloud passing through our field of view. Different meditation practices, delivered through an app or a physical retreat or some other method, vary in the degree to which they promote their core principles. But even the barest approach is still proselytization—the very suggestion that there’s a clearer way to understand and experience life is itself a promotion. From the perspective of someone trying to grasp *all of this,* the call to meditate isn’t all that different from an effective religious text or scientific experiment.
So within this slate of options to make sense of the world we inhabit, where does artificial intelligence come in? Well, to get right to it, I’ll point to O’Gieblyn’s citation of Pedro Domingos in The Master Algorithm (a book I haven’t read):
“Contrary to what we like to believe today, humans quite easily fall into obeying others, and any sufficiently advanced AI is indistinguishable from God. People won’t necessarily mind taking their marching orders from some vast oracular computer.”
Huh. I see. So whether your god is God, the scientific method, your meditation app, or a sophisticated artificial intelligence, it’s all really the same. You’re a human, you can’t escape being a human, and because you are a human you need a Way To Live that grants meaning to your existence. If I’m looking for guidance, maybe I should just ditch Judaism, Cognitive Behavioral Therapy, and Waking Up in favor of Replika, the “AI companion who cares” that O’Gieblyn experimented with during the writing of her book. I’m kidding, but only sort of—if it’s cheaper and faster and has answers on par with the alternatives, I’m pretty conditioned to want it eventually, aren’t I? After all, I’m human. But what, exactly, is it that I’d be putting my faith in? What’s under the hood of such AI? If its inner workings become sophisticated enough to escape the understanding of even its creators, is it, as Domingos suggests, any different from the mystery of God?
O’Gieblyn applies an extremely self-aware lens to the tricky material she’s exploring here—at one point she even discusses the fact that the first person just flat out works better than a removed point of view for her writing. And, as a writer, she’s well positioned to wrangle with the idea of “creative undertakings rooted in processes that remain mysterious to the creator”:
I always sit down at my desk with a vision and a plan. But at some point the thing I have made opens its mouth and starts issuing decrees of its own. The words seem to take on their own life, such that when I am finished, it is difficult to explain how the work became what it did.
I know exactly what she’s talking about, and while she’s skeptical of what’s happening (more on this in a second), I’m one of the writers she references as “speak[ing] of such experiences with wonder and awe.” I’ve been working on a novel for over six years, and in the early going, when I was less confident in what I wanted to say and how I wanted to say it, I hewed very close to people and places in my lived experience. The main character is a younger version of myself with a Soviet Jewish immigrant family. His obnoxious best friend is largely based on my obnoxious best friend. He gets overpaid to market coolness for a San Francisco startup. Write what you know, right?
Yes. Until you get more confident, slide into flow, and your characters seemingly start to speak and behave in ways you didn’t consciously intend. Until entire scenes fill the blank spaces between notable points of the story you outlined. Until eventually, even though it felt impossible when you started, you have an entire book on your hands. Whether or not the final product is “good,” this is a pretty wild sensation to experience. To be, as O’Gieblyn says, an artist “porous to larger forces that seem to arise from outside herself.” Like I mentioned, she seems pretty uncomfortable with this process:
I wonder whether it is a good thing for an artist, or any kind of maker, to be so porous, even if the intervening god is nothing more than the law of physics or the workings of her unconscious. If what emerges from such efforts comes, as [philosopher Gillian] Rose puts it, “from regions beyond your control,” then at what point does the finished product transcend your wishes? At what point do you, the creator, lose control?
It’s tempting to circle back to meditation here, at the mention of control, given that a core tenet seems to be the relinquishment of it, but instead I’ll soldier on to the rising noise around artificial intelligence doomerism. That is to say, the idea that we’re on a path to developing future AI systems that pose an existential risk to humanity. I’m not equipped to do justice to its ins and outs, but I will agree that if half of the world’s leading AI academics and researchers say there’s at least a 10 percent chance of human extinction at the hands of sophisticated AI, the situation seems, well, concerning. It’s all well and good if some writers black out and produce some worthy literature. It’s all bad and terrible if some AI experts produce an intelligent system misaligned with human values, and that system breaks out of our control to become a superior race that decides to wipe us out.
But. But! Here’s the thing. If the Archimedean point is an illusion—if it’s impossible for humans to reach an objective truth outside of themselves—can anything created by us ever reach it? Can a sufficiently advanced AI get there, turn around to look back at us, and go…zap? The pessimist in me says yes. The optimist in me says no. Because as O’Gieblyn writes, “The more we try to rid the world of our image, the more we end up coloring it with our human faults and fantasies.” Our species collectively has a lot of terrible, unspeakable faults. It also has a lot of really great fantasies. Anything we cook up, in our control or outside of it, will inevitably contain both. Humans aren’t separable from our ancestors, and whatever theoretical superbeing succeeds us won’t be separable from us either.
We keep trying to transcend ourselves and our own interests, and yet the more the world becomes inhabited by our tools and technologies, the more unlikely it is [as theoretical physicist Werner Heisenberg wrote] “that man will encounter anything in the world around him that…is not, in the last analysis, he himself in a different disguise.”
Look where we started: me talking about a book I liked for reasons that I couldn’t quite grasp. And now look where we’re ending: an existential question about whether AI is going to kill us all. Welp! I couldn’t tell you if it’s going to happen, but after making myself porous in the writing of this article, I will at least tell you this: life or death, I’m pretty sure on both personal and global levels we’ll continue to do what we always do. Get in our own way. Then realize we’re in our own way. Then move that person out of the way, only to realize sooner or later that the same person has been standing behind them all along. Then, as they say, rinse and repeat. Maybe the next one’s holding a flower. Maybe the one after that is holding a knife. Maybe there really is an end to the deck at some point.
How would we know? We’re only human.
Very interesting stuff, lots to think about. Concerning the question of where AI fits among the various human-made "gods" that can potentially rule our choices and behavior, I wonder what philosophers have to say. That is, if we want to approach the issue as an intellectual puzzle rather than as a matter of faith. A quick Google search yields results ranging from teachers' outcry (so easy to cheat, including on philosophical essays!) to opinions like this: "ChatGPT putting philosophers out of a job. Even better, unlike philosophers, ChatGPT provides clear and direct answers." Philosophy isn't about direct answers, though, so I'd love to see indirect ones. The same question, about the nature of AI, can be asked of ChatGPT itself; I'm sure people have tried it. And has ChatGPT really passed the Turing test, and what would it mean in either scenario?
Very interesting response; I expect no less! I like what you’re saying about philosophy — that it isn’t about direct answers. Maybe it’s true that some of the best revelations come about by edging into them instead of discovering them head-on. The book has a lot to say about metaphors and how we use them, and they’re a pretty useful tool to clearly render complex ideas...I wonder what we’d find by approaching the mystery of a possible sentient AI indirectly too.