Can robots make good Christians? As computer science races ahead, at least one forward-looking Florida pastor sees a future for the faith in whatever passes for a soul in robots, androids, cyborgs and other forms of artificial intelligence.
No, I’m not making this up.
When the Rev. Christopher Benek, an associate pastor of the First Presbyterian Church of Fort Lauderdale, talks about artificial intelligence or “AI,” he writes in a recent online essay, “I am not talking about iPhone’s Siri, a Roomba vacuum, or one of those toasters that can make perfectly timed toast with a likeness of Jesus on it. ... I am talking about an autonomous creature that has self-awareness.”
When something not only can think, reason, plan, learn, communicate and perceive things but also “feel love, sadness, compassion, joy, affection and a multitude of emotions,” he writes, then it is not a great leap to think that “an AI that is very much like us but exponentially more intelligent (could) participate in Christ’s redemptive purposes in the world” and “help to make the world a better place.”
I see his point, although such machines could make the world a worse place, too. Imagine, for example, robots of different denominations getting into a dispute over who has the best lock on eternal life after their lease on this life burns out — if it ever does.
Yet, at a time when much of the religious and political world seems to be at war with science, Benek has gained international attention with his visionary ideas about how ethics and morality can survive in our rapidly changing techno-future.
Ever since IBM’s Watson computer beat two former winners on “Jeopardy!” in 2011, interest in artificial intelligence seems to have accelerated, along with anxieties about what it means for the future of us mere humans.
Best-selling author Ray Kurzweil, a director of engineering at Google, has become the most widely known prophet of the “singularity,” the much-theorized time, perhaps as soon as 20 or 30 years from now, when computers will be as smart as humans — and proceed immediately to becoming much smarter than humans.
The chilling possibility that like Bender, the roguish robot on “Futurama,” future AIs might want to do without us “meatbag” humans has caused widespread android anxiety. In January the famous physicist Stephen Hawking and adventurous SpaceX CEO Elon Musk pledged to do all they can to make sure that artificial intelligence will benefit humankind and not destroy our species. Good luck, guys.
Meanwhile, trepidation about our robot future seems to be popping up with new vigor in popular culture, where science fiction has long been an outlet for our industrial-age anxieties.
The new movie “Ex Machina” offers Ava, a strikingly attractive female humanoid, and the haunting existential question, “Does Ava actually like you? Or is she pretending to like you?”
Only a month earlier we had “Chappie,” the story of a police droid who becomes the first robot with the ability to think and feel for himself. Adventures ensue.
Still to come: “Avengers: Age of Ultron,” in which the villainous robot taunts in the previews that like Walt Disney’s Pinocchio, “There are no strings on me.”
And at Christmas, we are scheduled to see the latest “Star Wars” sequel. That means the return of star droids R2-D2 and C-3PO with the sort of AI that we humans love: They don’t let their superior intelligence go to their heads — or wherever else their central processing units might be installed.
If the sci-fi world is our guide, public concern about the future power of AI looms in the background of our lives whether we want to confront it directly or not. Some sort of regulatory safeguards might well be in order, but we can hardly expect Washington lawmakers to help us get along with robots when they can hardly get along with one another.
Besides, AI is way beyond the technical know-how of a Congress that seems barely able to figure out net neutrality. They aren’t alone.
Meanwhile, research in artificial intelligence and our uncertain robot future forges ahead. Benek’s ideas about bringing salvation to robots don’t sound so nutty after all. He actually raises an important question: If your supercomputer loses its moral or ethical way, who’s going to tell it?