A group of tech titans, including billionaire entrepreneur Elon Musk, called on Wednesday for a pause in the development of powerful artificial intelligence (AI) systems to give experts time to ensure their safety.
Numerous notable AI researchers have expressed worry about the “profound risks” that such systems pose to society and humanity in an open letter urging AI laboratories to pause the training of giant AI systems.
The letter claims that AI labs are locked in an “out-of-control race” to develop and deploy machine learning systems that “no one — not even their inventors — can understand.”
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” said the open letter.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The signatories’ goal is to ensure that these formidable AI systems are given the time and scrutiny researchers need to verify that they are safe.
Several notable AI scholars and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque, as well as author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and others, have signed the letter.
The letter was prompted mainly by the release of GPT-4 by the San Francisco-based company OpenAI.
The company claims that its latest model is significantly more powerful than its predecessor, which was used to power ChatGPT, a bot that can produce long passages of text in response to the briefest of prompts.
“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” says the letter. “This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
A founding investor in OpenAI, Musk served on the organization’s board for several years, and his automaker Tesla develops AI systems to support, among other things, its self-driving technology.
Signatories to the letter, which was published by the Musk-funded Future of Life Institute, included prominent critics as well as rivals of OpenAI, such as Emad Mostaque, chief executive of Stability AI.
Sam Altman, the chief executive of OpenAI, was quoted in the letter as having written in a blog post that, at some point, “it may be desirable to have an independent evaluation before commencing to train subsequent systems.”
“We agree. That point is now,” the authors of the open letter wrote.
The letter says the six-month pause should be used to develop safety protocols, AI governance systems, and a research agenda refocused on making AI systems more precise, safe, and “trustworthy and loyal.”
The letter did not detail the specific risks posed by GPT-4.
Source: thenews.com