If you haven't read the book Superagency: What Could Possibly Go Right with Our AI Future, by Reid Hoffman and Greg Beato, I urge you to do so.
Reid and I have been having an ongoing conversation for years about spirituality. He calls himself a “mystical atheist,” and though I’ve tried, I haven’t yet been able to convince (or convert) him to drop the “atheist” part.
But how we connect with something larger than ourselves is in fact very much a part of Reid’s life. He’s talked about how it might surprise people, given his success as an entrepreneur, that it’s friendship, and not business, that is his “primary spiritual home.” As he put it: “It’s through friendships we grapple with the essential questions of who we are and who we should be. Questions like: What is a meaningful life? What are our larger responsibilities as communities and as individuals? How do we make each other better as people, as human beings? The thing that sustains me — and has over the decades — are the friends who I share these kinds of conversations with.” As his friend, I can attest to how true that is.
And in chapter three of Superagency, “What Could Possibly Go Right?,” the authors raise the question: What if AI could “help us become nicer, more patient, and more emotionally generous versions of ourselves,” making the world “superhumane”?
I love that term, superhumane, and the idea behind it — that AI can help us tap into the most deeply human parts of ourselves, not just giving us more agency but helping all of us build our own primary spiritual home.
Hoffman and Beato write that “most concerns about AI are concerns about human agency.” And AI agents will help us be more productive, learn more easily and execute complex tasks. We will, in effect, be orbited by AI agents doing things for us. But what I’m most passionate about is not just what AI can help us do, but who it can help us be.
Agency is about doing things and deciding things, but who is doing the deciding? Can we build AI that will help the person making those decisions be a better person — and not just an AI that better executes the decisions once they’re made? Yes, all these AI agents will give us more agency, and, collectively, superagency, but we also need to focus on the human agent.
The authors write about how worried people are about AI, and how people have been similarly worried about other transformational technologies. But the book, and AI, are coming at a time of bigger worries. As a culture, we’re incredibly polarized. We regard anyone with whom we disagree as a heretic. And heretics, even when they’re not burned at the stake, are dehumanized, canceled and denied empathy and the possibility of redemption and understanding.
So how can we build AI that helps us not just be more productive but be more empathetic, deepen our humanity, allow us to see more humanity in others and bring out the better angels of our nature?
Since Superagency serves as a comprehensive agenda for our collective ongoing conversation on AI, this question is one I’d love to move higher up on the agenda — especially since we live in a time when augmenting our humanity is our most urgent need.