This is also what the CEO of OpenAI said

Artificial intelligence (AI) has taken a prominent place in our society, but experts in the field are sounding a warning: the uncontrolled and irresponsible use of AI could lead to the extinction of humanity. Recognized figures such as OpenAI's Sam Altman, Google DeepMind's Demis Hassabis, and Anthropic's Dario Amodei have joined the call to address the looming danger and mitigate the risks associated with AI. Here are their concerns, and why it is important to treat AI as a global priority.

The risks of artificial intelligence

Among the experts sounding the alarm is Sam Altman, CEO of OpenAI. According to him and others, the danger lies not so much in a superintelligence that would dominate humanity as in the consequences of the irresponsible use of algorithms at work and in everyday life. One of the main risks concerns the spread of “fake news” and the manipulation of public opinion: artificial intelligence can harm humanity by creating channels of disinformation.

The signatories of the call emphasize the importance of preparing for these emerging risks. The extensive use of AI is driving a revolution in many areas, but it also poses serious problems: AI now permeates social, economic, financial, political, educational and ethical life. Experts agree on the need to manage these situations and to take steps to address the challenges AI poses, such as the production of “fake news” or the control of self-driving cars.

The call for a global priority

Altman, Hassabis and Amodei recently met with U.S. President Joe Biden and Vice President Kamala Harris to discuss artificial intelligence. After the meeting, Mr. Altman testified before the Senate, warning that the risks associated with advanced AI were serious enough to warrant government intervention. He said these risks required precise regulation to prevent any harm. However, experts not only warned of the dangers of the technology, but also offered concrete solutions for responsible management of advanced AI systems.

Artificial intelligence experts warn that the risk of humanity's extinction should be considered a global priority. AI has the potential to significantly influence the destiny of humanity, so it is essential to address these risks urgently. A short letter published by the Center for AI Safety (CAIS) reads as follows:

“Mitigation of extinction risk from artificial intelligence should be a global priority alongside other society-wide risks such as pandemics and nuclear war.”

Signatories to the letter include Sam Altman, CEO of OpenAI, the company behind ChatGPT, as well as Geoffrey Hinton and Yoshua Bengio, considered pioneers in the field of AI. Hinton, commonly regarded as the “godfather of AI”, now describes the technology as “scary”. Bengio, professor of computer science at the University of Montreal and one of the leading experts in the field, has likewise expressed his concerns about the risks associated with this technology.

The view of Sam Altman, CEO of OpenAI

Sam Altman is the CEO of OpenAI, the company behind ChatGPT, the famous chatbot that catalyzed public interest in artificial intelligence. Along with Demis Hassabis, CEO of Google DeepMind, and Dario Amodei of Anthropic, Altman has warned of the risk of human extinction. But what does this statement mean? The European Parliament's vote on the AI Act, the world's first regulation on artificial intelligence, will take place from June 12 to 15. It is striking that at the very moment a major institution is preparing to restrict the freedom of action and the economic developments linked to artificial intelligence, Altman is speaking out against his own technology. This is a real paradox: at first glance, his position seems meaningless. Why this unexpected statement?

Altman's motivations

There are several possible theories to explain Altman's position. One is that describing artificial intelligence as omnipotent is good publicity for the entire industry. The AI sector is booming, and limiting it seems a difficult, if not impossible, challenge. Another explanation could be economic. Data show that the AI race mainly concerns two nations, China and the United States: the former accounts for private investment of $13.4 billion, the latter a total of $47 billion. Altman's initiative could aim to limit dangerous competition and contain the reach of AI, at least in Europe and the United States. Complex power plays lie behind such a statement: $800 billion is expected to be invested in AI in the coming years, generating an estimated value of around $6 trillion.

Responsible management of artificial intelligence

Experts offer several strategies for managing AI responsibly. They highlight the need for cooperation between industrial players in the field and for increased research into language models. They also suggest the creation of an international organization for AI safety, similar to the International Atomic Energy Agency (IAEA). Additionally, some emphasize the importance of formulating laws that would require creators of advanced AI models to register and obtain a government-regulated license.

The widespread use of generative artificial intelligence, with the proliferation of chatbots such as ChatGPT, has prompted many calls to evaluate the implications of developing such tools. Among them, an open letter signed last March by, among others, Elon Musk called for a six-month pause in the development of models more powerful than OpenAI's GPT-4. The goal is to allow time to develop shared security protocols for advanced artificial intelligence. As the letter states,

“Powerful artificial intelligence systems should only be developed when it is certain that their effects will be positive and their risks will be manageable.”