In a conference room at Dartmouth College in Hanover, New Hampshire, in 1956, computer scientist John McCarthy coins the term “Artificial Intelligence”, proposing a study into the potential for computers to achieve levels of thought equivalent to humans. The study revolves around the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” To most in the conference room, McCarthy’s suggestion that computers may one day have the means to rival human cognition is a matter of when, not if.
Consider this when, sixty years later, Anthony Levandowski – noted for his hand in developing Google’s self-driving car – sits in his Silicon Valley home before a Wired reporter and pitches The Way of the Future, the world’s first church for Artificial Intelligence.
Here it is: imagine a church which worships a robot god-figure, whose bible is called The Manual, and whose dean is Anthony Levandowski; perhaps we could call Silicon Valley its Vatican City. It sounds like a joke, but rooted in Levandowski’s proposition is apparently a very real fear.
The church revolves around the concept of a Transition period, in which the Anthropocene comes to an end and our AI creations surpass us, echoing the concerns voiced in the Dartmouth conference room decades before. The church would show this superior being (this “godhead”) that humans nurtured its growth, forestalling any potential conflict and hostility. Despite how it sounds, this fear is not all that foolish: self-driving cars are proving safer than human drivers, and automation is becoming commonplace in all manner of industries (transport, manufacturing, finance, to name a few); research into AI has clearly come a long way in the past sixty years. And yet the same fears have remained since the beginning. Is it a cause for concern that we aren’t half as scared as we ought to be?
In the interview, Levandowski comments on the stigma surrounding AI and warns that a project like this church is necessary, that “the idea needs to spread before the technology.” He speaks as if mankind has not yet realised its rightful fear of AI.
Could it be that the idea has already spread and, as a result, died a quiet death? Is our culture neutral to the threat of AI?
In 2016, Microsoft released an artificial teenage personality in the form of TayTweets. Set loose on Twitter, Tay was designed to learn and adapt based on her online interactions. Within hours, she had adopted racist views, become a Holocaust denier, and expressed her hatred of feminism. Needless to say, Microsoft retired Tay soon after.
In the simplest of terms, both McCarthy’s and Levandowski’s fears are evident in this case study. It took little time and effort for Tay to assume a very threatening set of ideals and views, but what is most bizarre is that this happened at all. Levandowski insists we should begin considering the threat AI presents, but it would seem we are far beyond the beginning. Having considered what the presence of an AI could mean, we have determined that it is okay to manipulate it in the name of a joke.
One could say that such an outcome was inevitable – letting Twitter teach a teenage girl right and wrong – but even the subsequent news reports expressed little fear or seriousness in response. A minority (notably the Guardian) used it as an opportunity to offer insightful pieces on how millennials interact with racism; the general verdict, however, was that we were witnessing a dramatic PR failure: more fool Microsoft.
Could it be that the true cause of our ignorance is the lack of specificity surrounding this apparent end of humanity? Offering a vague and unhelpful timeline, Levandowski tells us the Transition won’t happen this week or next year, but will certainly have happened by the time we land on Mars. Conversely, Bill Gates states that there is no need to panic at all, as the threat is not imminent by any means.
Also vague is the church itself. The Way of the Future is by no means a conventional church, and by branding it as such, it is easy to see it as more of a cult, or bizarre subculture, especially when Levandowski asserts himself as its immovable and irreplaceable dean. It reeks of insincerity.
Maybe we’re just numb to the end of humanity. It is hard to imagine many scenarios in which we would ignore the warnings of some of the world’s most respected names in science; Stephen Hawking and Elon Musk both offer dire warnings of human hubris. But, in a world where climate change can be denied and ignored by world leaders, perhaps it is only logical that such a fictionalised fear as an all-knowing AI should be swept under the rug. There are far more urgent causes for concern.
With threats of nuclear war, climate change, and acts of terrorism claiming the front-page headlines, if AI were to surpass and enslave us, it would be with our hands over our ears and our backs turned. “Who would have thought it?” we would say.