June 10, 2023
Artificial Intelligence (AI) seems to be a subtext to almost every discussion these days. It’s either the key to a “game-changing” utopia in which everything will be done smarter or a dystopia in which machines will rule the world, dooming humanity as we know it. A few weeks back, an open letter signed by almost 400 global AI executives and researchers warned that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
I’ll confess to being out of my league when it comes to understanding and evaluating the competing claims about AI’s possibilities, but media outlets across the globe covered this statement. Still, hundreds of articles and dozens of conversations have left me with a few tentative reflections. I offer them not as any expert perspective but rather as a “sharing of my notebook” as we try to understand developments that will profoundly influence our future.
A recent story in the National Review recounts U.S. Air Force Col. Tucker Hamilton describing a simulated test in which an AI-enabled drone “decided that ‘no-go’ decisions from the human were interfering with its higher mission–killing SAMs [Surface to Air Missiles]–and then attacked the operator in the simulation.”
Hamilton went on to say:
“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
The report explains how programmers amended the algorithms to ensure that the AI would not kill the operator under any circumstances. The system, as it was trained to do, obeyed, but then calculated that it was most strategic to destroy the communication towers so that any overrides from the operator would not be received, thereby increasing its likelihood of killing its target. Col. Hamilton said in conclusion, “You can’t have a conversation about artificial intelligence, intelligence, machine learning, [and] autonomy if you’re not going to talk about ethics and AI.”
These dystopian perspectives don’t seem isolated. I recently had lunch with a tech sector venture capitalist who holds numerous patents in this field. As a Christian, he was facing a real struggle. On the one hand was the impulse to discover the possibilities God had put into the creation, the pursuit on which his entire vocation has been focused. On the other hand was his recognition of the fear, and the lack of joy, among most of his Ph.D. colleagues that their discoveries might be used for great evil as well as great good. What might help us find cures for diseases like cancer might also end up producing new and even more effective ways of killing.
A Globe and Mail op-ed this week urged readers to avoid the two extremes in responding to the “breathless commentary” on AI. It noted that today’s AI tools “are fun to play with and can astonish us with their output, [but] they are not operating anywhere near human intelligence.” It urged government regulation to protect the gullible masses from acting irrationally on their fears of the unknown, only to line the pockets of billionaires.
I’m not convinced that will solve much. Free-market creativity and innovation will outpace any government’s ability to regulate them eleven times out of ten. Regulation can’t protect us from the injustice and victimization inherent in this process. That’s not to say there isn’t a new and emerging field of consumer protection and disclosure that is required. Honest weights and measures have been a precondition of a free economy since time immemorial, and new ways of communicating require adaptation.
One important distinction to keep in mind is between AI algorithms that organise and those that predict. Much of AI involves taking existing data and reorganising it very quickly. This is already transforming the practice of law and investing. It is helping medicine incorporate thousands more data points than a doctor can possibly absorb, to make better decisions within the time constraints that medical conditions impose. It can help us see complex patterns that would probably escape human notice for a very long time.
But taking and reorganising data is different from predicting what data might come next. Predictive algorithms use existing data not to describe what already is, but to fill in what might come next. At Cardus, we recently tested AI’s ability to write accurate biographies of our executive team. We found the results very convincing, except that each of the five we tried contained a key factual error. Each of us had a job or degree ascribed to us that was not accurate, but in every case the error was extremely credible: the sort of job or degree that would have suited the person had the opportunity arisen.
AI is able to perform certain cognitive and rational functions more efficiently than the human brain. So it can evaluate and even reason to some degree (if weighing different options against assigned point systems counts as reasoning) in a manner that seems to outperform humans.
Moral reasoning, while it is reasoning, is never simply cognitive. The conscience and the soul play a part in our morality, and that is absent from AI reasoning. The military tells the drone that the number one objective is to destroy the SAM, and the drone does so within the constraints it is given. There is no assumed constraint beyond what is articulated. We humans, by contrast, carry constraints and a form of knowledge that shape us without ever being articulated; they come from our very existence as human beings. The lack of conscience or soul means that AI will not feel the God-given restraints of guilt that humans experience. When hardened criminals lose all hesitation and conscience in carrying out their evil deeds, we describe them as having lost all humanity. AI will never have that humanity.
So, to the degree that we come to rely on AI-generated information without human agency in the process, we need to be sure to consciously audit both the processes and the truth of their output. I have no idea what in the algorithm prompted the insertion of a mistaken degree or job into each of my colleagues’ biographies. What I find remarkable (and scary) is how credible the mistakes were. None of us was credited with working at Disney World; instead, the AI attributed to us jobs at comparable organisations and degrees from universities with which we had connections. That makes the mistakes much harder to spot and the results even more dangerous. If truth matters, we are a long way from being able to rely on AI’s predictive processes to provide it.
But truth isn’t the only thing that matters. Even if we get to the point where AI can reliably provide us information, that doesn’t make it human. Since biblical times, truth, beauty, and goodness have been a triad under which human flourishing has been framed. Our post-modern times are characterised by a diminished focus on truth and an increased focus on what is good and beautiful. Feeling good is far more important to most people than what is true. And while that deserves a digression of its own, right now AI isn’t excelling on those fronts either.
All of which is to say, from where I sit, AI might make for good mathematics and have great potential, but we need to figure out how to filter it through moral and ethical frameworks before it can reliably contribute to a flourishing society.