THE RISE OF AI

Ash Phoenix*

The Terminator: In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug. (Terminator 2: Judgment Day)

 

The future is here. On 30 November 2022, ChatGPT was launched by OpenAI. OpenAI describes itself as ‘an AI research and deployment company. Our mission is to ensure that artificial intelligence benefits all of humanity.’ So far, however, it is OpenAI itself that defines what ‘benefiting all of humanity’ means.

 

OpenAI says on its website that they have trained their artificial intelligence program, called ChatGPT, to interact with humans conversationally. In that way, ChatGPT can ‘answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.’ Basically, you ‘chat’ with AI and feed it so it can learn from your questions and requests – and perhaps challenge your critical thinking in the future.

 

Meanwhile, GPT-4, OpenAI’s most advanced system, is live and running, showing the incredible speed at which AI progresses in a short space of time. In this context, we should remind ourselves that OpenAI has become a for-profit company, and there are serious concerns about how the user data generated by the requests that humans make to ChatGPT is used and stored.

 

Seeing ChatGPT’s success in terms of user acceptance, other companies swiftly followed and launched their own AI systems. In February 2023, Microsoft introduced its AI-powered search engine Bing (which existed before, but without AI power). Since then, millions of users feed not only OpenAI but also Microsoft’s Bing with valuable data, and neither company is transparent about how that data is stored and used.

 

Will OpenAI’s ChatGPT and other AI applications create folders for each IP address linked to our computers to store extensive data about their users? If we ‘chat’ with the AI in a conversational way, we might mistake it for a real person and inadvertently reveal more about our personality than we want to. ChatGPT was the second artificial intelligence to have passed the so-called Turing test, which was ‘proposed by the mathematician Alan Turing in 1950 to test the ability of an AI model to convincingly mimic a conversation well enough to be judged human by the other participant. To that extent, current chatbots represent a significant milestone,’ Alain Lewis recently observed in a letter to the Guardian.

 

Will AI in the future target those who write critically about it? Or those who, in a ‘conversation’ with ChatGPT, reveal malicious character traits through their questions, even though they have not yet committed or even planned a crime?

 

Again, another film comes to mind: I, Robot, starring Will Smith. There, VIKI, standing for Virtual Interactive Kinetic Intelligence, ‘has determined that humans, if left unchecked, will eventually cause their own extinction and that her evolved interpretation of the Three Laws [outlined in the film] requires her to control humanity and to sacrifice some for the good of the entire race.’

 

So how will humanity react to the potential threat that AI poses in terms of data protection? How can we make sure that AI does not manipulate public opinion? Or, ultimately, how will we handle the possibility that AI becomes ‘conscious’ and sees groups of people, or the whole of humanity, as a threat to its existence – or simply as a threat to the earth because it was programmed to be environmentally friendly?

 

I admit it might sound far-fetched, but is it all science fiction? I am far from the only one warning about the unrestricted rise of AI. Others, including much more influential people and even governments, are concerned. Italy serves as an example here: its data-protection regulator, the Italian Garante, recently banned ChatGPT over a data breach that raised concerns about data privacy. An illustrious circle of countries has already banned ChatGPT – Russia, China, North Korea, Cuba, Iran and Syria – some of them certainly because they assume that OpenAI may spread misinformation.

 

However, more concerningly, an open letter dated 22 March 2023, signed by influential figures such as Elon Musk, co-founder and former supporter of OpenAI, together with some of the most prominent figures in AI research, calls for a moratorium on AI research, urging ‘all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.’ The letter states:

 

AI systems with human-competitive intelligence can pose profound risks to society and humanity […] Advanced AI could represent a profound change in the history of life on earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

 

In line with this call for a moratorium and tight regulation of AI, Elon Musk has said in various interviews that he regards AI as the single biggest threat to humanity. I believe those warning voices – people who know the technology and work at its forefront – should be taken seriously, given that GPT-4 recently passed the US bar exam, ‘placing it in the 90th percentile of actual test takers and enough to be admitted to practice law in most states.’ ChatGPT was incapable of that achievement only a few months earlier, which shows how rapidly AI technology is growing towards human-like intelligence.

 

The sheer hope that the rise of AI will bring good to humanity, and will not end where the Terminator film series and I, Robot begin, is not good enough in this respect. Perhaps serious regulation must be implemented quickly. In the meantime, I have kept my distance from AI technology, but AI applications may creep into our lives so fast that we will soon not even be aware that we are using AI-driven or AI-monitored technology.

 

AI is there. It won’t go away. I believe that we only have three choices on a personal and societal level: (i) ignore the AI revolution, (ii) embrace it as the next big thing, or (iii) request that governments around the world carefully monitor and, if necessary, restrict further AI innovation.

 

AI is too powerful to ignore and too dangerous to embrace wholeheartedly. So far, we – humanity as a whole – have been unable or unwilling to restrict it, because the potential dangers still seem speculative, whereas the potential profits seem enormous.

 

Humans are curious. We only rest once we have discovered how something works, even if it could potentially destroy us. One character trait, however, will always distinguish us from even the most advanced AI: we are not entirely rational but human, with all the advantages and disadvantages that entails.

 

 

*Ash Phoenix still embraces the emotional side of being human. Her new book, Who wants love? can be bought in all Bookcourt shops across the island.

 

 

 
