Outsourcing Conscience: A Response to Yuval Noah Harari

When Yuval Noah Harari (author of the best-selling books Sapiens: A Brief History of Humankind, 2011, and Homo Deus: A Brief History of Tomorrow, 2017) thinks about artificial intelligence, he doesn’t picture robots gaining sentience and exterminating humanity. He does picture the increasing irrelevance of certain classes, growing global inequality, and a shift in authority from humanity to data-driven algorithms. The last of these is not really a prediction but a fairly linear extrapolation of historical technological trends.

Technology develops because of the human drive to make work simpler, more efficient, or more effective. Pushing back against the curse (Gen. 3:17-19), men have managed to leverage their unique rational capacities to mitigate some consequences of sin. Or at least it seems that way. Perhaps technology has just permitted men to redistribute the effects of the curse. And that may be one of the reasons why it is difficult for a rich man to enter the kingdom of heaven – he does not feel the sweat of his face in bringing forth thorns and thistles. On the other hand, the insatiability of men remains a reminder of the first sin. Efficiency should mean an opportunity to work less, but it is often used instead as a means to produce more.

By now we have moved far beyond the simple jigs that automate and optimize mechanical tasks. The questions we ask of technology are no longer simply quantitative, but also qualitative. Having mastered the realm of objectivity, technology now promises to help us deal with our subjectivity. Instead of asking, “How can I do this?” we have begun to ask, “What should I do?” Data is invaluable because it permits humanity to outsource the most difficult task of decision-making.

This already happens extensively in trivial ways. Netflix offers recommendations that match your watching history. Amazon suggests products that complement products you have already purchased. Yelp lets you tap into the experiences and opinions of countless strangers as you choose a restaurant. But even in its infancy, information technology was promising to make much more critical decisions for us. In 1959, a couple of Stanford students used a mainframe and punch cards with survey data to match prospective romantic partners. Now by swiping left or right you can not only help build the enormous dataset but benefit from its algorithmic matchmaking output in real time.

Harari is wary of these advances because he perceives the potential for manipulation. Every algorithm has been written by somebody, and not everybody has humanity’s best interests in mind. We are nonetheless extremely susceptible to technological suggestion because our expectations are so low. Computers need only make somewhat better decisions than we do, most of the time, for us to find them trustworthy. Considering how often we make bad decisions, that’s a minor hurdle.

That susceptibility, however, tells us more about ourselves than Harari observes. In the first place, our interest in reliable decision-making betrays our craving for authority. We learn quickly in life that people are untrustworthy and easily tempted to abuse authority. But that cannot erase our resonance with the ordered character of creation. We are meant to be under authority, and when artificial intelligence promises that authority without the vicissitudes of earthly fathers, we are happily imprinted.

Still, a more damning fact about humanity underlies that susceptibility. Unmistakably aware that “none is righteous, no, not one” (Rom. 3:10), we are always in pursuit of acquittal. That makes the promise of sound decision-making extremely attractive. By outsourcing our consciences to artificial intelligence, we have so much to gain. We gain certainty in the face of our own mixed feelings. Combined with accurate biometric sensors, technology could even correct for our changing feelings and help us to make decisions that will minimize the sensation of guilt. We also gain a scapegoat. Our love of having someone to blame should surprise no one; it was exhibited first in the Garden of Eden and is exhibited now every time a child excuses himself by saying, “My brother told me to do it.” What could be better than the victimless blame-shifting afforded by having a computer make your decisions?

All of that is enough to warrant caution. If technology is used to numb or deflect pangs of conscience, then it has indeed become an instrument of the devil. But more fundamentally, we should know better because sound, objective decision-making could never favor humanity. Suppose you could gather all the data and crunch all the numbers. Suppose you could measure not just actions but also motives. You would discover only one solution to all the ethical questions posed by humanity – eternal judgment. After all, a purely objective and consistent moral system must finally resemble God’s Holy Law, and by that standard there is only one possible outcome. Artificial intelligence could not choose life in the end. For that, we need an authority who favors humanity against better judgment and in spite of the data.