AI: More Questions than Answers?

14/12/2023

The AI Safety Summit leaves uncertainty over how governments will use AI

Image by James Duncan Davidson

By Meadow Lewis

ChatGPT, online interviews, and Bard: living in 2023 means co-existing with rapid technological advancement. Accompanying this ever-changing landscape is great uncertainty, and in many cases scepticism and anxiety. In response, the first ever global summit on artificial intelligence took place at Bletchley Park on 1 and 2 November.

The main goal of the summit was to outline the dangers of AI whilst seeking to attenuate them. Additional objectives centred on international collaboration for safety (namely how national and international frameworks can be supported) and ways to cooperatively evaluate AI’s abilities. The resulting Bletchley Declaration has been deemed a “world-first agreement”, reducing competitive pressures between key states like the USA, UK, and China. Other attendees included Michelle Donelan (the UK’s technology secretary), Ursula von der Leyen (the European Commission president), and Giorgia Meloni (the Prime Minister of Italy). Notably absent were Justin Trudeau (the Prime Minister of Canada), Emmanuel Macron (the President of France), and Joe Biden (the President of the USA). In many ways, their absence could indicate something more sinister than merely a busy schedule, suggesting a general apathy towards the subject matter.

Outside of governmental figures, many technology CEOs were in attendance, most notably Elon Musk. Despite currently developing ‘Grok’, an AI bot intended to challenge ChatGPT, Musk has been vocal in his warnings about AI. He stated that “for the first time we have something which is going to be smarter than the smartest human”, rendering such technology “one of the biggest threats”. Acting on these fears, Musk co-founded OpenAI with Sam Altman in 2015, a research lab established as a non-profit committed to sharing its work openly, precisely to prevent such technology from being controlled by a single person or corporation. This idea of limiting AI as a tool of individual power, whilst broadening the market to ensure consistent checks and balances, pervades the majority of AI discussions.

In his own speech on the matter, Prime Minister Rishi Sunak declared that “there is nothing in our foreseeable future that will be more transformative for our economies, our societies, and all our lives than the development of technologies like Artificial Intelligence”. With the government claiming to take the risks of AI as seriously as the current climate crisis, one may wonder what those risks explicitly are. Over the next one to two years, AI systems could plausibly orchestrate cyber attacks on critical infrastructure, cutting off electricity and water for targeted areas. The potential for AI to help construct biological weapons or conduct chemical warfare has also been raised, with the most commonly suggested outcome being future pandemics engineered with the intention of wiping out civilisation. The summit’s main conclusion, accordingly, was to fund safety research. Harking back to the climate crisis comparison, each country will commit to nominating its own experts to conduct such research, an arrangement inspired by the Intergovernmental Panel on Climate Change. In a similar spirit, day two of the summit produced an agreement to develop an independent ‘state of the science’ report, to be led by the Turing Award-winning scientist Yoshua Bengio.

Working in unison, Rishi Sunak and US Vice President Kamala Harris will now establish ‘world-leading’ safety institutes. These will seek to unify governments and AI creators, enabling state-led testing of frontier models prior to their release. The institutes will also implement preventative measures to stop malicious actors from accessing dangerous AI tools. However, the logic behind this plan is immediately flawed: reserving advanced AI for purely governmental use does nothing to remove the threat of war and cyberattacks. The similarities between this plan and the approach taken to nuclear weapons further this unsettling sentiment, intensified by recent nuclear intimidation. Does this mean that, soon enough, AI will also be wielded as a threat during conflicts? And will it merely be reserved to entrench certain global powers whilst being withheld from other countries, thus deepening global inequalities?

Further policies will be discussed at later summits, first in the Republic of Korea within the next six months and then in France. This means that the questions I have raised may soon have answers, no matter how bleak...