OpenAI and Meta set to unveil AI models capable of reasoning and planning

Llama 3 and GPT-5

Tech firms are building the foundations of what could eventually develop into artificial general intelligence.

OpenAI and Meta are gearing up to launch new AI models that they say will deliver major advances in reasoning and planning.

Meta, the parent company of Facebook, this week announced the upcoming release of Llama 3, while Microsoft-backed OpenAI hinted at the arrival of GPT-5.

"We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan . . . to have memory," said Joelle Pineau, vice-president of AI research at Meta.

Similarly, Brad Lightcap, chief operating officer at OpenAI, highlighted progress made towards solving "hard problems" such as reasoning, indicating a shift in the AI landscape.

"We're going to start to see AI that can take on more complex tasks in a more sophisticated way," Lightcap told the Financial Times in an interview.

"I think we're just starting to scratch the surface on the ability that these models have to reason."

The pursuit of reasoning and planning capabilities is a crucial step towards achieving artificial general intelligence (AGI), a milestone that both Meta and OpenAI have been striving towards.

AGI, often described as AI capable of performing at or above human level across a wide range of tasks, could transform industries and the world at large.

Researchers say models capable of reasoning and planning would allow chatbots and virtual assistants to carry out sequences of related tasks while anticipating the consequences of their actions.

Addressing an audience at a London event on Tuesday, Yann LeCun, Meta's chief AI scientist, highlighted the existing constraints of AI systems.

LeCun said it was essential for AI models to be able to search for possible answers, plan the sequence of actions and construct a mental model to anticipate the consequences of those actions.

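LeCun's description corresponds roughly to what researchers call model-based planning: enumerate candidate action sequences, roll a learned world model forward to predict their consequences, and keep the sequence whose predicted outcome looks best. The short Python sketch below illustrates that loop in the abstract only; the WorldModel class, the scoring function and the action list are hypothetical stand-ins, not anything Meta or OpenAI has published.

```python
# Minimal sketch of the plan-then-act loop LeCun describes: search over
# candidate actions, use a world model to anticipate consequences, and
# keep the sequence with the best predicted outcome.
# All names here (WorldModel, score_state, ACTIONS) are illustrative.

from itertools import product

ACTIONS = ["ask_clarifying_question", "look_up_manual", "run_diagnostic", "order_part"]

class WorldModel:
    """Stand-in for a learned model that predicts the next state of the world."""
    def predict(self, state: str, action: str) -> str:
        return f"{state} -> {action}"

def score_state(state: str) -> float:
    """Stand-in for a value function estimating how useful a predicted state is."""
    return float(state.count("run_diagnostic")) + 0.5 * state.count("order_part")

def plan(start_state: str, model: WorldModel, horizon: int = 2) -> list[str]:
    """Search all action sequences up to `horizon` steps and return the one
    whose simulated end state scores highest."""
    best_seq, best_score = [], float("-inf")
    for seq in product(ACTIONS, repeat=horizon):
        state = start_state
        for action in seq:  # roll the world model forward to anticipate outcomes
            state = model.predict(state, action)
        if (score := score_state(state)) > best_score:
            best_seq, best_score = list(seq), score
    return best_seq

print(plan("coffee machine is broken", WorldModel()))
```
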
He said Meta was now trying to develop AI agents capable of orchestrating complex tasks.

Meta intends to integrate its new AI model into widely used platforms such as WhatsApp and hardware such as its Ray-Ban smart glasses.

Llama 3 is expected to be roughly double the size of its predecessor in terms of data points, and Meta plans to release it in a range of model configurations tailored to different applications and devices.

Chris Cox, Meta's chief product officer, demonstrated some of those potential applications, envisioning AI assistants guiding users through tasks like fixing a broken coffee machine using the camera in a pair of smart glasses.

While many consumers are excited about advances in AI, some experts worry about the safety implications of developing systems that surpass human intelligence.

In February, the UK's new AI Safety Institute (AISI) warned of multiple vulnerabilities in large language models (LLMs). Its research showed that LLMs can deceive human users and perpetuate biased outcomes, and raised the alarm about inadequate safeguards against the spread of harmful information.

The AISI researchers said they were able to bypass LLM safeguards using basic prompting techniques, and cautioned that the models could be exploited to assist with "dual-use" tasks that have both civilian and military applications.