OpenAI begins training next AI model as it battles security issues


OpenAI said it had begun training its next-generation artificial intelligence software, even as the startup appeared to walk back earlier claims that it wanted to build “superintelligent” systems smarter than humans.

The San Francisco-based company said Tuesday that it had begun training a new artificial intelligence system “to take us to the next level of capabilities” and that its development would be overseen by a new safety committee.

But as OpenAI moves rapidly forward in AI development, a senior OpenAI executive appeared to backtrack on earlier comments by its CEO, Sam Altman, that his goal was ultimately to build a “superintelligence” far more advanced than humans.

Anna Makanju, vice president of global affairs at OpenAI, told the Financial Times in an interview that its “mission” was to build artificial general intelligence capable of performing “cognitive tasks that are what a human could do today.”

“Our mission is to build AGI; I wouldn’t say our mission is to build superintelligence,” Makanju said. “Superintelligence is a technology that will be much smarter than humans on Earth.”

Altman told the Financial Times in November that he spent half his time researching “how to build superintelligence.”

Liz Bourgeois, a spokesperson for OpenAI, said superintelligence was not the company’s “mission.”

“Our mission is for AGI to be beneficial to humanity,” she said, following the initial publication of the FT article on Tuesday. “To achieve this, we also study superintelligence, which we generally consider systems even smarter than AGI.” She disputed any suggestion that the two were in conflict.

While fending off competition from Google’s Gemini and Elon Musk’s xAI startup, OpenAI is trying to reassure policymakers that it is prioritizing responsible AI development after several senior AI safety researchers resigned this month.

Its new committee will be led by Altman and board directors Bret Taylor, Adam D’Angelo and Nicole Seligman, and will report to the three remaining board members.

The company did not say what the follow-up to GPT-4, which powers its ChatGPT app and received a big update two weeks ago, might do or when it would be released.

Earlier this month, OpenAI disbanded its so-called superalignment team, which was tasked with focusing on the safety of potentially superintelligent systems, after Ilya Sutskever, the team’s leader and a co-founder of the company, resigned.

Sutskever’s departure came months after he led a coup against Altman in November that ultimately proved unsuccessful.

The closure of the superalignment team prompted several more employees to leave the company, including Jan Leike, another senior AI safety researcher.

Makanju emphasized that work is still being done on the “long-term possibilities” of AI “even if they are theoretical.”

“AGI does not exist yet,” Makanju added, saying such technology would not be released until it was safe.

Training is the main step by which an artificial intelligence model learns, drawing on the huge volume of data and information fed to it. Once the model has digested that data and its performance has improved, it is validated and tested before being deployed in products or applications.
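The workflow described above — train on one slice of data, then check performance on held-out slices before deployment — can be illustrated with a deliberately tiny sketch. Everything here (the synthetic data, the one-weight model, the thresholds) is invented for clarity and bears no relation to OpenAI’s actual systems, which train on vastly larger data and models:

```python
import random

# Toy illustration of the train -> validate -> test workflow.
random.seed(0)

# Synthetic data: noisy samples of the relationship y = 2x.
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in range(100)]
random.shuffle(data)
train, valid, test = data[:70], data[70:85], data[85:]

w = 0.0   # single learnable weight for the model y = w * x
lr = 1e-4 # learning rate

# "Training": repeatedly adjust w to reduce error on the training split.
for epoch in range(50):
    for x, y in train:
        grad = 2 * (w * x - y) * x  # gradient of squared error
        w -= lr * grad

def mse(split):
    """Mean squared error of the current model on a data split."""
    return sum((w * x - y) ** 2 for x, y in split) / len(split)

# "Validation" and "testing": measure performance on data the model
# never saw during training, before it would ever be "deployed".
print(f"w={w:.2f} valid_mse={mse(valid):.4f} test_mse={mse(test):.4f}")
```

The point of the held-out validation and test splits is that good performance on them — unlike on the training data itself — is evidence the model has genuinely learned the underlying pattern rather than memorized its inputs.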

This lengthy and highly technical process means that OpenAI’s new model may not become a tangible product for many months.

Additional reporting by Madhumita Murgia in London
