Clémence Smith discusses the dangers of allowing voice assistants into our homes.
Technology has crept into every aspect of society: instantaneous communication and boundless information have become just another mundane part of life. Aspects of internet culture help us to constantly re-evaluate the status quo. Memes, in particular, highlight anxieties surrounding the technology that generates them. The “iPad Kid” phenomenon, for example, denounces the use of screens as pacifiers: children remain transfixed by the flickering images on their tablets, which seemingly keep them out of harm’s way. Unfortunately, technology can also be dangerous: the rise of AI voice assistants gives technology newfound independence at the heart of the family home. In late December, an Alexa told a ten-year-old girl to touch a live plug with a penny in response to her request for a challenge to fulfil. Thankfully, the girl’s mother intervened before any serious physical harm was inflicted. Nevertheless, one is left wondering whether these smart assistants cause more problems than they remedy. Should we really welcome AI into our homes?
A cacophony of disembodied voices fills the 21st century: from the radio to the telephone and podcasts, AI voice assistants seem to blend in. They are given a name, a recognisable voice and even, one could argue, a personality. What lies behind this approachable interface, however, is the dark, sprawling internet: the Alexa device suggested the penny challenge as it came up in results from an internet search. Voice assistants synthesise information found on the internet without foreseeing the consequences of their words on the physical world. Although Amazon patched the “error”, trying to prevent such scenarios from repeating themselves is like playing whack-a-mole, as the internet’s ever-changing content persistently overwhelms the individual.
Voice assistants’ relationships with customers are equally complex. Many have raised concerns over these devices listening in on private conversations. Companies such as Amazon argue that establishing a “wake word” ensures that day-to-day interactions do not pass through the device’s system. This feature, though, does not account for the slipperiness of language or the possibility of the device misinterpreting what is being said.
Furthermore, despite companies’ emphasis on security, full protection from hackers is impossible. Much as hacked webcams have been used to spy on strangers, hacked voice assistants have been found recording and distributing private conversations without participants’ knowledge. This permeability makes AI’s integration into private spaces a cause for concern. Amazon’s newest AI robot Astro can, according to The Guardian, “patrol your home, investigate disturbances and send an alert if it detects an unrecognised person”. Companies sell consumers what seems to be a Catch-22: “trust us with your sensitive information and we will heighten your security”, they seem to say. Carolyn Bunting, the CEO of Internet Matters, argues that individuals should not be coaxed into giving away private information without knowing “where it’s going, who’s holding it or how it’s being used”.
As the pandemic has led to an increase in the use of technology at home for school, work, and leisure, one must remember that devices do not exist in a vacuum. Like the individuals who design them, they have soft underbellies which may only become apparent once a wound has been inflicted. Although voice assistants may seem trustworthy, it is important to take their advice with a pinch of salt, particularly where younger children are concerned. Whether it is worth risking one’s privacy for technological shortcuts, however, is for the individual to decide.