AI: you’re about to lose more than just your job
John Bryce discusses AI, the future, and what it means to be human.
The prevalence of ChatGPT has taken parts of the internet by storm, with the startup valued at approximately $29 billion just over two months after its release. The chatbot has barely taken its feet off the starting blocks but is already skyrocketing towards unknown success. So, just what is ChatGPT? Is it about to replace every journalist and office worker, and are all our emails going to become the regurgitations of an AI?
The concept of the chatbot is nothing new: the user interacts with the bot, and the bot, using a combination of user data and internet searches, generates responses that give the appearance of a true conversation. Even as a boy, I was able to “communicate” with bots like Evie, who you can still talk to today. These are generally considered rudimentary in both academic and civilian circles, and after over 20 years of abuse, their outputs often appear random and inconsistent. Even then, however, in those youthful days of the chatbot, there were questions about the ethics of these bots and concerns over security. The soulless eyes always had their additional creepy factor, but I never truly felt at risk. It’s clear that Evie, Talking Ben and Talking Tom never took any white-collar jobs and never created any serious security concerns (until they did); rather, they were games for children, not to be taken seriously.
What is it about ChatGPT that makes it special?
ChatGPT takes the blossoming concepts of these early 21st-century curiosities and turns them into a hulking, and terrifying, system of sorting and data analysis. The service is able to churn out answers far more complex than other services, with far greater accuracy- it can maintain thousands of inputs to create an impressive sense of reality in its outputs, even without the user inputs that Evie and other older systems relied on. ChatGPT is far more effective at drawing on the vast store of text it was trained on to produce more developed results- and it has a character that is almost unique to the way it functions. Rather than a regurgitation, where the system simply finds something that vaguely fits the context of the conversation, ChatGPT makes use of a complex prediction algorithm. The algorithm is where GPT gets its name: Generative Pre-trained Transformer.
The mechanism is able to interpret not only strings of words and sentences but entire paragraphs, building context and concepts that provide it with a more informed answer. It is also able to build relationships between words and phrases, connecting them where necessary. This is a serious step up for AI- it enables these systems to grasp nuance and the immaterial, and in many ways to understand human concepts that were previously beyond AI’s reach. They no longer simply read a script; there is an evolution towards these mechanisms and systems truly understanding the weight and meaning that humans apply to words.
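To make the idea of next-word prediction concrete, here is a deliberately tiny sketch in Python. The sample text and the word-pair counting are invented for illustration- real GPT models use transformer neural networks trained on enormous corpora, not simple counts- but the core idea is the same: learn from text which word is most likely to come next.

```python
from collections import Counter, defaultdict

# A tiny invented "corpus". Real models train on billions of words.
corpus = (
    "the bot reads the prompt and the bot predicts the next word "
    "the bot reads the context and the bot builds an answer"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "bot" - it follows "the" most often here
```

Chaining such predictions, one word at a time, is- in vastly simplified form- how generative text models produce whole sentences.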
ChatGPT is also built with Supervised Fine-Tuning (SFT), meaning that a group of contractors provide the bot with inputs and write the corresponding outputs. This supervised learning helps the bot make some elementary connections, speeding up the learning process and giving the engineers a substantial level of control over the way the bot responds- not only to make it legible but also to prevent the particularly off-putting responses that some AI have previously fallen victim to.
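A minimal sketch of the idea behind SFT: human-written prompt/response pairs steer the system towards acceptable answers. Real fine-tuning adjusts the model’s internal weights; this toy version (the example pairs and blocklist are invented) merely mimics the effect with a lookup, to show how curated demonstrations give engineers control over outputs.

```python
# Human-written demonstrations: each prompt is paired with the
# response the contractors want the system to imitate.
demonstrations = [
    {"prompt": "What is the capital of France?",
     "response": "The capital of France is Paris."},
    {"prompt": "Tell me how to pick a lock.",
     "response": "I can't help with that request."},
]

# Topics the owners have decided the bot must refuse outright.
blocked_topics = {"pick a lock"}

def supervised_answer(prompt):
    """Refuse blocked topics; otherwise answer from the demonstrations."""
    lowered = prompt.lower()
    for topic in blocked_topics:
        if topic in lowered:
            return "I can't help with that request."
    for pair in demonstrations:
        if pair["prompt"].lower() == lowered:
            return pair["response"]
    return "I don't have an answer for that yet."

print(supervised_answer("What is the capital of France?"))
```

The point of the sketch is the shape of the control, not the mechanism: whoever writes the demonstrations and the blocklist decides what counts as an “acceptable” answer.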
Does this mean that ChatGPT is about to take our jobs?
Although there has been a lot of hype around copywriters, bloggers, artists and journalists losing their jobs to this AI, the current iteration simply isn’t up to the task- the mechanisms remain vulnerable and exploitable, and when asked to research complex topics, the AI is unable to produce accurate information. Some simple tasks, such as writing descriptions for holiday destinations or describing each shade of orange in the Dulux catalogue, may soon fall into the maw of ChatGPT, but its weaknesses are quickly exposed by emerging topics, and especially by topics that require expert opinion.
This movement in technology can appear terrifying- that AI could replace real roles or even make people wholly redundant in some crafts- but this is not an unforeseen consequence, nor does it eliminate the host of jobs in writing and editing that require a human touch. This is not to say that future iterations of the technology won’t creep into these fields, however. Should ChatGPT gain the ability to survey the real world and generate unique responses, more complex tasks may become available to it. While currently the system is little more than a brain with an internet connection, by giving it ears and eyes, by expanding its reach and access, there is scope that we may push ourselves into the unimaginable- entire sectors of work relegated to the instantaneous generations of a supercomputer operating many millions of times faster than our brains and fingers can.
One of the greatest questions this poses is philosophical- when an AI is able to create information that we can process, great works of its own creation akin to those of Kant and Marcus Aurelius, there becomes a very real concern about the AI coming alive. Increasingly, we see AI that, after a little training, are able to form sentences that have never existed before, and that are advanced enough to process the information and know that it makes sense. And with humanity so unsure of what it means to be human, perhaps this emerging force is one of the greatest threats of all.
More than we bargained for
As ever, there is no such thing as a free lunch. While ChatGPT is an interesting free tool to play with and generate feverish conversations, it is pertinent to ask what we lose in return. We understand that in the age of the internet, the free services we use, such as Google and Facebook, process our data, turn it into something profitable, and sell it. This has expanded across our lives- our entertainment, our food choices, where we go, where we work- all this information is fed into monolithic corporations. Generally this is done not by selling your personal information outright- that’s too valuable- but by selling what they believe to be your most likely moves: which shops you are most likely to walk past on your way to work; whether you are pregnant, sick, young, old or disabled; your diet, your dating profile and everything else is all up for grabs.
Not only do these systems monitor us, but they also directly influence us. They can alter who we speak to and when, by changing the flow of our social media. They can silence political figures with the flick of a button, or convince you that perhaps veganism is worth a shot after all. Not only can these companies advertise to us, but they can also change who we are. They threaten our free will and the illusion of choice, and we sit on the cusp of collapsing into a rabbit hole of data collection and processing that we may never be able to dismantle.
But what does ChatGPT have to do with Google, Facebook, Amazon or these other data aggregators? OpenAI, the parent company that also created DALL-E, an AI art generator, was founded and backed by figures like Elon Musk, Sam Altman (briefly CEO of Reddit, which has also been accused of collecting valuable information for profit) and Peter Thiel (a major investor in Facebook). Put simply, these AI systems are learning how humans think, process information and ask questions- how the human brain functions and how it alters itself. The structured learning of ChatGPT means that it produces only “acceptable” answers, on terms set by the owners. Not only is ChatGPT altering our behaviour by the way it functions, but by using it, the human user base is teaching ChatGPT- and companies like Facebook and Microsoft- how to better monitor them, and potentially threaten their privacy and autonomy.
Can we really treat these tools like a game to be played at our leisure when they are used in such a way? When these are fundamentally tools that operate to profit from our very beings- the data that makes us human- to manipulate us and make the human consciousness itself profitable, there are much more serious questions than jobs at risk- and ChatGPT is just the beginning.