With the development of Artificial Intelligence systems and their ever-expanding capabilities, we have found ourselves living in an exciting, yet potentially dangerous, new age. Evie Crossland explores the background behind AI, how its designers intend for it to grow, and what is needed to ensure that this is safe and wise.
Widely regarded as the ‘Godfather of Artificial Intelligence’, Geoffrey Hinton has quit his role at Google, fearing the rate of progress will outstrip our ability to control it. The cognitive psychologist and computer scientist told the BBC that the dangers of AI chatbots were “quite scary”, explaining that although the chatbots may not be more intelligent than us currently, “they soon may be”.
Hinton’s resignation emphasises the urgency for an informed response to the developments in the field of AI. In an interview with ABC News’ Rebecca Jarvis, CEO of the artificial intelligence laboratory ‘OpenAI’, Sam Altman states, “getting this technology right and figuring out how to navigate the risks is super important to the future of humanity.”
Artificial Intelligence is here and it is developing fast
Gary Marcus, a former NYU professor and author, told the BBC that the imperative is to “act now… the number one lesson is that you don’t want to close the barn door after it’s left”. Artificial Intelligence is here and it is developing fast; understanding exactly what it is and how we want to use it will dictate its trajectory for the future.
So, what is AI?
Artificial intelligence systems are built on digital neural networks – structures loosely modelled on the brain. Just like a brain, the more information (in this case, data) the network is given, the more it learns, and the smarter it becomes.
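The idea can be sketched in a few lines of code. The following is a purely illustrative toy, not any lab’s real system: a single artificial “neuron” that is never told the rule it should follow, but works it out from labelled examples, nudging its internal weights each time it guesses wrong.

```python
# A toy artificial "neuron": it learns a rule from examples rather than
# being programmed with it. Purely illustrative.

def train_neuron(examples, passes=20, lr=0.1):
    """Train one neuron (two weights plus a bias) with a simple update rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(passes):
        for (x1, x2), target in examples:
            guess = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - guess
            # Nudge the weights in the direction that reduces the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach it the logical OR rule purely from data.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```

Modern systems like GPT-4 chain billions of such units together and train them on vast amounts of text, but the underlying principle – adjusting weights in response to examples – is the same.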
Just like humans, AI systems learn from experience through a process known as ‘deep learning’: a theory pioneered by Geoffrey Hinton. Hinton’s research laid the groundwork for Generative AI models, such as ChatGPT and Bard, systems which Hinton fears could achieve complex reasoning very soon:
“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning”, he said. “And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”
The need for regulation
With the exponential rate at which AI is developing, regulation is paramount. An open letter published by The Future of Life Institute, imploring AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, has been signed by numerous AI researchers and tech leaders, including Elon Musk.
“What we need is some international regulation”
However, Lord Martin Rees – co-founder of the Centre for the Study of Existential Risk at the University of Cambridge – told the BBC that the six-month pause is “not enough”, emphasising the need to “campaign to try and get an issue of this kind put on the agenda for the G20 in India later this year”. In Rees’s view, “what we need is some international regulation of the big multi-national companies which make these things.”
On Thursday 4th May, the tech bosses of Google, Microsoft and OpenAI were summoned to the White House to discuss the risks of Artificial Intelligence, and were told they had a “moral” duty to safeguard society. The White House informed the technology executives that the administration was “open to new regulations and legislation to cover artificial intelligence.”
Altman told reporters that White House executives were “surprisingly on the same page on what needs to happen”.
However, in a tweet, Meta’s chief artificial intelligence scientist Yann LeCun – a vehement defender of the benefits of AI – emphasised that the responsibility for regulating AI must come from government: “I’m all in favor of technological advances benefiting everyone. But first, that’s a goal for politicians & democracy to achieve. Second, the mere possibility of unequal distribution is not a sufficient reason to stop the progress of science and the development of technology.”
Rebecca Johnson, an expert in tech ethics at the University of Sydney, told ABC News that his comments emphasise a dangerous reality within the AI world. She said, “people like Yann LeCun divorce themselves from accountability and say it’s up to the politicians and got nothing to do with me… these companies say “Oh yeah, we believe in ethics”. OK, well, then enforce it at all levels of your company, including your chief scientist and chief CEO.”
“We have got to move away from these individualistic rock star perceptions of “You can’t touch me”.”
Vital for the well being of humanity at large
Debates over who must take sole moral responsibility for Artificial Intelligence are futile; a healthy conversation between both government officials and tech companies is vital for the well-being of humanity at large. As Hinton stressed in his resignation statement to the New York Times, it is “bad actors” who would use AI for “bad things” that must concern us.
He told the BBC: “You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals.”
Hinton’s fear seems particularly pertinent today, given Putin’s chilling statement six years ago:
“Artificial intelligence is the future, not only for Russia, but for all humankind… it comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
The risk AI poses to jobs
In his interview with ABC News’ Rebecca Jarvis, Sam Altman states: “It is going to eliminate a lot of current jobs, that’s true. We can make much better ones.”
Tech journalist, Chris Stokel-Walker, echoes a similar sentiment, stating that “so many jobs could be replaced by using AI because it’s so much quicker, so much more reliable. You don’t need to give it breaks. You don’t need to give it a pension. And for business people, that’s really vital.”
Carl Benedikt Frey, the future-of-work director at the Oxford Martin School, Oxford University, told BBC News, “The only thing I am sure of is that there is no way of knowing how many jobs will be replaced by generative AI”. However, a report by investment bank Goldman Sachs found that Artificial Intelligence “could replace a quarter of work tasks in the US and Europe”, but may also “mean new jobs and a productivity boom.”
While the number of jobs replaced by AI remains unclear, one thing is unmistakable: the job market will evolve significantly.
CNN’s Vanessa Yurkevich spoke to a professor of advanced media at Syracuse University about the jobs most likely to be replaced by AI: “If you’re a middle manager you’re doomed, any kind of commodity sales person, report writers and journalists, accountants and bookkeepers and oddly enough, doctors”.
“They’re using it as kind of an outsourced brain to reduce the workload on themselves”
We are already witnessing the impact of these developments in artificial intelligence on the job market. Various law firms in the UK are using chatbots to produce first drafts of legal letters and to build arguments for court cases. According to tech journalist Chris Stokel-Walker, “they’re using it as kind of an outsourced brain to reduce the workload on themselves.”
However, building chatbots that do not spread misinformation and falsehoods is a challenge not yet conquered by tech companies, and for the field of journalism, Artificial Intelligence poses an even greater misinformation threat. The German magazine Die Aktuelle recently published an AI-generated ‘interview’ with Formula 1 legend Michael Schumacher, and the Schumacher family are planning to take legal action against the magazine for artificially generating Schumacher’s responses.
Schumacher’s former teammate, Johnny Herbert, described the magazine’s actions as “appalling”.
The benefits of AI
While the threats of AI, such as “bad actors” and misinformation, pose a great risk to humanity, the benefits for education and the medical profession are undeniable. Sam Altman emphasises the importance of AI chatbots, such as ChatGPT, in providing “great individual learning for each student”. Altman goes on to state that the newest language model – GPT-4 (ChatGPT’s successor) – can act as a “Socratic method educator”, adding that “teachers, not all, but many teachers really really love this and say it’s totally changing the way I teach my students, for the better.”
Researchers at Stanford University have also shown how artificial intelligence could improve the medical industry. They have developed an app whose algorithm can detect a variety of diseases: the user takes a picture of their X-ray and uploads the image to the app, which returns probabilities for different diseases. However, the question of at what probability a diagnosis could be given and treatment provided remains unresolved.
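That unresolved question is essentially about choosing a cut-off. The sketch below is illustrative only – the 0.90 threshold, the function names, and the disease probabilities are invented for the example, not taken from the Stanford app or any medical standard – but it shows how a threshold turns a list of probabilities into a decision.

```python
# Illustrative only: turning model probabilities into a decision.
# The 0.90 cut-off is an arbitrary example, not a medical standard.

DIAGNOSIS_THRESHOLD = 0.90  # hypothetical cut-off

def triage(probabilities, threshold=DIAGNOSIS_THRESHOLD):
    """probabilities: dict mapping disease name -> model probability."""
    flagged = {d: p for d, p in probabilities.items() if p >= threshold}
    # Anything below the threshold is left to a human, not auto-diagnosed.
    return flagged if flagged else "refer to clinician for review"

print(triage({"pneumonia": 0.94, "cardiomegaly": 0.12}))  # → {'pneumonia': 0.94}
print(triage({"pneumonia": 0.55, "cardiomegaly": 0.30}))  # → 'refer to clinician for review'
```

Set the threshold too high and real cases are missed; too low and healthy patients are over-treated – which is exactly why the decision cannot be left to the algorithm alone.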
Similarly, in Birmingham, researchers are developing Artificial Intelligence to detect the early onset of Parkinson’s disease through voice changes. By feeding thousands of voice recordings into the algorithm, the AI is trained to detect differences in the voice patterns of people with and without Parkinson’s. In a lab study, the system detected the disease with 99% accuracy.
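In miniature, training on labelled recordings works something like the sketch below. This is not the Birmingham team’s actual method: the single “voice steadiness” score, the numbers, and the nearest-average classifier are all invented for illustration. The model learns the typical score for each group, then assigns a new recording to whichever group’s average it sits closer to.

```python
# Illustrative sketch only, not the Birmingham team's actual method.
# Each recording is reduced to one made-up "voice steadiness" score.

def train(samples):
    """samples: list of (score, label) pairs; returns the mean score per label."""
    totals, counts = {}, {}
    for score, label in samples:
        totals[label] = totals.get(label, 0.0) + score
        counts[label] = counts.get(label, 0) + 1
    return {label: totals[label] / counts[label] for label in totals}

def classify(means, score):
    # Assign to whichever group's average score is nearest.
    return min(means, key=lambda label: abs(score - means[label]))

# Hypothetical labelled training scores (lower = less steady voice).
recordings = [(0.82, "control"), (0.88, "control"), (0.79, "control"),
              (0.41, "parkinsons"), (0.35, "parkinsons"), (0.47, "parkinsons")]
means = train(recordings)
print(classify(means, 0.44))  # → parkinsons
print(classify(means, 0.85))  # → control
```

Real systems extract hundreds of acoustic features per recording rather than one score, but the principle – learn the patterns that separate the labelled groups, then classify new voices against them – is the same.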
These studies and developments in artificial intelligence could revolutionise healthcare, and alleviate pressure on the healthcare system.
Ensuring we treat artificial intelligence with the seriousness it demands is vital
We are living on the precipice of a new era of human evolution; ensuring we treat artificial intelligence with the seriousness it demands is vital. A collective conversation about the society we want to create and how to harness AI to help us achieve it is necessary, or else we risk sleep-walking into a dystopian future, as envisaged in Frank Herbert’s ‘Dune’.
Featured image courtesy of Tara Winstead on Pexels. Image license found here. No changes were made to this image.
In-article image courtesy of Andrew Neel on Pexels and cottonbro studio on Pexels. Image license found here. No changes were made to this image.