
Are We All Wrong About AI? When Academics Challenge the Silicon Valley Dream

When it comes to understanding where AI is taking us as individuals and as a society, the narratives shaped by tech leaders often dominate. But what do the researchers who have been working on these technologies for over 25 years - for whom the release of ChatGPT in 2022 or DeepSeek in 2025 was nothing new - have to say? Not only do their perspectives bring much-needed rationality to the debate, but they also help us move beyond the hype to understand the true nature of AI and what we can realistically expect from it. According to them, we are still far from AI surpassing human intelligence (AGI), and generative AI models won’t get us there.

Professor Michael Jordan opens the ‘AI, Science and Society’ conference organized by Hi! PARIS in February 2025

"What Sam Altman or Elon Musk have in mind is to take, for free, all the knowledge produced by humanity to create this super AGI (Artificial General Intelligence) thing - spending billions and trillions of dollars in the process - and save humanity. This is nonsense!" With these words, Michael Jordan opened the ‘AI, Science and Society’ conference organized by Hi! PARIS on February 6-7, 2025. Jordan is Professor of Computer Science and Statistics at the University of California, Berkeley, and a researcher at Inria. “This race to build the biggest LLM (Large Language Model) on the hill is not feasible and is going to ruin us,” Jordan warned, denouncing what he termed the “Silicon Valley Fever Dream”.

Professor Jordan’s speech set the tone for an academic conference that gathered top researchers in AI, computer science, machine learning, and data science. The discussions at the Institut Polytechnique de Paris were a prelude to the international AI Action Summit held in Paris on February 10-11, 2025. Jordan, who is also a member of the Hi! PARIS Scientific Advisory Board, was not alone in tempering the grandiose visions of tech leaders and the global fascination with the capabilities of the latest models.

“AI is only a piece of software, and we shouldn’t treat these models as thinking entities,” said Eric Xing, President of the Mohamed bin Zayed University of Artificial Intelligence and Professor of Computer Science at Carnegie Mellon University, highlighting the “great confusion about AI” and calling for a more rational approach to recent advancements in generative AI.

For Bernhard Schölkopf, Scientific Director at the ELLIS Institute and the Max Planck Institute for Intelligent Systems in Tübingen, “LLMs are pure mechanistic models”: they lack understanding, as well as any notion of truth or intention. In his presentation he showed “illusion” phenomena in generative AI models like GPT-4 or o1 - cases where AI models misinterpret patterns in data, leading to outputs that are distorted or inaccurate. “LLMs are far from AGI. They are cultural learners, they tell stories and learn from stories - even if that is already a huge achievement,” he concluded.

AGI: Academics Don’t Buy It

In late 2024, Sam Altman, CEO of OpenAI, and Dario Amodei, CEO of Anthropic, two leading figures in AI development, wrote that AGI - AI systems surpassing human intellectual capabilities across all domains - could be achieved much faster than expected. Amodei suggested as early as 2026 (Machines of Loving Grace), while Altman predicted it within “a few thousand days” (The Intelligence Age). Other experts estimated at least a 50% chance of achieving AGI by 2030 (New York Times, December 11, 2024).

 

AI won’t break the secret of human intelligence for now, Michael Jordan

 

However, this optimism is far from being shared by some of the most prominent academics gathered at Polytechnique. Professor Michael Jordan, who “particularly hates the term AGI,” argued that “AI won’t break the secret of human intelligence for now.” As a statistician - “the mother of all disciplines” - he is not even comfortable with the term AI, preferring instead to speak of machine learning, a field he has researched since the 1980s.

“If we had stuck with calling it ‘machine learning’ instead of ‘AI,’ we’d be having a much more sober conversation today,” Jordan observed, noting that machine learning had already significantly impacted businesses in the early 2000s, when no one was talking about AI.

Yann LeCun, Chief AI Scientist at Meta and Professor of Computer Science at NYU, also dismissed the term AGI. “Human intelligence is, on the contrary, quite specialized,” he remarked. At Meta, AGI goes by the friendlier acronym AMI - Advanced Machine Intelligence, and a nod to “ami”, the French word for friend. For AI to reach the level of human intelligence - a prospect LeCun remained somewhat skeptical about - researchers will have to rethink how LLMs are trained.

 

We are never going to reach human intelligence by training AI on text alone, Yann LeCun.

 

“We are never going to reach human intelligence by training AI on text alone. It’s just not going to happen, despite what people with financial interests claim,” he stated. What is missing, he explained, is the ability to learn world models from sensory inputs, similar to how babies learn through observation and interaction. “A four-year-old child has seen more data than the largest LLMs, in the form of visual perception,” he emphasized (about 2 million optic nerve fibers, each carrying roughly one byte per second), illustrating how human learning is fundamentally different from AI training. It is worth remembering that this argument comes from a company - Meta - which is betting that AI smart glasses will revolutionize personal computing and which aims to position itself at the forefront of AI agents.
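For readers who want to check the order of magnitude, here is a rough back-of-envelope sketch of our own, combining the figures quoted above with an assumed twelve waking hours per day and a typical large text corpus of a few tens of trillions of tokens:

# Back-of-envelope comparison inspired by LeCun's remark (our assumptions, not his slide).
fibers = 2_000_000                      # optic nerve fibers across both eyes
bytes_per_fiber_per_second = 1          # rough bandwidth estimate quoted above
awake_seconds = 4 * 365 * 12 * 3600     # four years at ~12 waking hours per day

visual_bytes = fibers * bytes_per_fiber_per_second * awake_seconds
print(f"Visual input over four years: ~{visual_bytes:.1e} bytes")       # ~1e14 bytes

# A large LLM trained on ~20 trillion tokens at roughly 4 bytes per token (assumption)
llm_training_bytes = 20e12 * 4
print(f"Large LLM text corpus:        ~{llm_training_bytes:.1e} bytes")  # ~8e13 bytes

On these assumptions, the child’s visual stream alone is on the same order as, or larger than, the text corpora used to train today’s biggest models.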

LeCun also highlighted other key requirements for AI to evolve beyond LLMs: persistent memory, the ability to plan actions to achieve objectives, reasoning capabilities, and, crucially, systems that are controllable and “safe by design.”

Moving Beyond LLMs, Generative AI, and Reinforcement Learning

Many academics at the conference reminded the audience of the foundational mechanism behind Large Language Models (LLMs): next-word prediction. LLMs learn to predict the next word by recognizing patterns in vast amounts of text data. 
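To make that mechanism concrete, here is a deliberately tiny, hypothetical sketch in Python - counting which word most often follows the current one in a toy corpus. Production LLMs perform the same kind of next-token prediction with neural networks trained on billions of documents, not with raw word counts:

from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a small corpus,
# then always predict the most frequently observed successor.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus, or None."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (ties broken by first occurrence)
print(predict_next("cat"))   # -> 'sat'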

“We need to stop trying to predict what we can’t predict,” LeCun cautioned. Current LLMs don’t have the right characteristics, and their predictions can lead to "exponential divergence" or "hallucinations", he warned. (An AI hallucination occurs when an artificial intelligence system, particularly a large language model, generates false, misleading, or nonsensical information and presents it as factual.)

Professor Jordan was also highly critical of generative AI. “These models can’t know everything. They can’t tell me what to do because what’s in my mind depends on the context I’m in or will be in tomorrow, as well as my interactions with others. I don’t want to allow it to know, because I want to continue to act like a human being. We are living in a world of uncertainty, and this is what makes us humans.”

 

We need to abandon generative models, as well as reinforcement learning, Yann LeCun

 

Meta’s Chief AI Scientist went even further, arguing that “we need to abandon generative models, as well as reinforcement learning.” Yet this is precisely what every tech company is working on right now: the industry sees it as the way to give AI models more powerful intelligence capabilities and to make breakthroughs in science, and it is already at the core of the most advanced models, such as OpenAI’s o1 or the Chinese model DeepSeek R1. “We know it’s inefficient,” added LeCun in his address to the decision makers, PhD students and scientists gathered in the Institut Polytechnique de Paris amphitheater: “If you are interested in human-level AI, don’t work on LLMs!”

Major problems remain in other areas, such as large-scale world-model training, planning under uncertainty, and hierarchical training. “But it will require changing the current system of AI inference.” (AI inference is the process by which a trained machine learning model applies what it has learned to new, unseen data to make predictions or draw conclusions; it is the operational phase of AI.)

LeCun was joined in this view by Jordan: “Current AI models have a poor notion of what it means to actually know something”, said Jordan in his keynote, underlining the lack of reliability of these models. “If we want to bring a complementary kind of intelligence to humans, we need more than predictions and optimization” - the very operations at the core of LLMs.

Toward AI That Enhances Collective Intelligence For The Benefit Of All

“I don’t think there has ever been an era in history where a new field of technology has raised so much hype and hysteria around it”, continued Jordan. His observation came two weeks after the unprecedented panic in financial markets following the release of the Chinese AI model DeepSeek, which provoked the largest single-day loss of market value for a U.S. company in history - roughly $600 billion wiped off Nvidia’s capitalization - and a total loss of nearly $1 trillion across U.S. exchanges, one of the most significant single-day drops in tech sector history.

 

We don't have a vision of what we want to do with this technology, Michael Jordan

 

For Jordan, this reveals a fundamental problem. “We don't have a vision of what we want to do with this technology. There is this AI dream to build a superintelligence that will figure everything out and somehow create vast economic surplus in the long term. But it will first generate lots of money for the tech companies that are marketing this dream.”

To address this lack of vision, Jordan made a strong plea to move from an individual use of AI focused on decision-making or productivity to a collective approach linking AI with microeconomics, with the ultimate goal of creating systems that deliver real-world value. “We never hear about economics among the fields involved in AI, but you guys have to stand out.” The role of “machine learning economists”, Jordan claimed, is to find market equilibria that can inform regulation designed to increase social welfare. In his view, thinking beyond LLMs means recognizing that collectives can do things that single humans cannot. “When we come together we create culture, communication and markets that lead to value for everybody.” As long as the AI field is held back by tech leaders’ aspirations to build the largest model and the most powerful intelligence, Jordan continued, it will not be able to deliver on its promise to improve everyone’s lives.

His keynote address raised fundamental questions. If AI is the kind of innovation that will change the world, how can we ensure that its benefits are shared across society? Or, as NYU Stern School of Business Professor Scott Galloway analyzes in a recent article, how can consumers capture the value of this innovation instead of seeing it captured by private players and a few shareholders? “The dollar has put the public good in the back seat”, he writes.

Are Open Source Models The Future Of AI Progress?

If tech leaders are shaping the narratives around AI, they are also driving massive investments in the sector, as the US “Stargate” project announced in January illustrates. Led by OpenAI together with other tech giants, including SoftBank, Oracle, Microsoft, Nvidia, and Arm, it is an unprecedented $500 billion investment plan in generative AI and AI infrastructure spread over four years.

ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google are at once the market leaders in generative AI and proprietary models. Is that a problem? If we consider the views expressed at the ‘AI, Science and Society’ conference, it is. On this subject, Yann LeCun's speech was of course eagerly anticipated: Meta's Llama architecture is said to have played a significant role in DeepSeek's development. The answer from Meta’s Chief AI Scientist was straight to the point: “Culture and knowledge cannot be controlled by a few companies on the West Coast of the US or in China". He added that open-source AI platforms are necessary and need to be built in a collaborative way.

In the aftermath of DeepSeek's breakthrough and the geopolitical and data security concerns it has raised, the danger for LeCun is that geopolitical rivalry will entice governments to make the release of open-source models illegal. “Keeping science secret is not an option and would be a huge mistake”, he declared. In his view, the DeepSeek affair above all validates Meta's strategy of freely sharing Llama's code to speed up the development of AI. “Open-source models are slowly but surely overtaking proprietary models.”

The consensus around open-source models that emerged from the conference may indicate that when it comes to creating prosperity for the whole world and breakthroughs for humanity, there is no point in competition. Rather, participants believed, we should build trust and collaboration. But is that realistic in today's political and geopolitical climate, where tensions are rising ever higher?

Smaller is Better: Toward Responsible Scaling

Just as the scale of investment in AI companies is staggering, so too are the quantities of computing and energy resources required to build these super-powered AI models. For most of the researchers taking part in the conference, “working on developing smaller models is just as important as scaling”, as Schölkopf put it, given the climate change and energy issues at stake. This trend will certainly be accelerated by DeepSeek's cost-effectiveness challenge to the dominance of US AI companies.

 

Working on developing smaller models is just as important as scaling, Bernhard Schölkopf

 

For David D. Cox, VP for AI Models at IBM Research and a former Harvard University professor of natural sciences and of engineering and applied sciences, AI has reached an inflection point regarding its negative externalities on climate change. The energy costs of LLMs “are growing at an unsustainable rate”, said Cox, pointing out that “our computing appetite” could outstrip world energy production by 2035 or 2040. “We need to bend this energy curve.”

Scaling down AI models to tailor them to specific uses is one of the solutions. Deploying generative AI applications in companies for productivity gains doesn’t require models that know “advanced physics”, underlined Cox, citing a Gartner forecast that 80% of enterprises will have incorporated generative AI into their business processes by 2026. In this respect, it is important to rely on open-source software if you want to customize AI models for different needs and accelerate adoption, stressed Cox, who is also a strong advocate for advancing open science. Building efficient, open, customizable and performant AI models is a virtuous circle, he insisted.

As the discussion on open-source AI, responsible scaling, and economic integration grows louder, one thing appeared clear at the conference: the future of AI should be driven by societal needs, not just market forces. If AI is to fulfill its promise, the global community must move beyond grandiose ambitions and build technology that truly benefits humanity.