

AI Technology: On a Razor's Edge?

As artificial intelligence reshapes industries, society grapples with its profound impact on creativity, governance, healthcare, misinformation, and the future of work. This year, the Hi! PARIS Center participates in the AI Action Summit, held in Paris on February 6, 7, 10 and 11. The occasion brings together five in-depth analyses from HEC Paris faculty, corresponding to the five main topics addressed at the Summit. From the geopolitical race for digital sovereignty to the creative industries' struggle with generative AI, these perspectives provide a nuanced understanding of how AI is both an opportunity and a challenge, requiring careful regulation, ethical consideration, and strategic adaptation across sectors.

Hi! PARIS is co-founded by HEC Paris and IP Paris, and supported by the French government as one of the nine projects chosen for the "IA Clusters".

Structure

Part 1: Saving Lives in Intensive Care Thanks to AI
Part 2: Balancing AI Technology with Decentralized Innovation
Part 3: Is AI Threatening the Creative Industries?
Part 4: The Cost of Fake News in the Context of Meta's Shift Away from Fact-Checking
Part 1

Saving Lives in Intensive Care Thanks to AI


Pillar 1 of the five pillars of the 3rd AI Action Summit: "AI in the service of the general interest"

According to the Summit organisers, AI should be seen as “a political technology to be directed towards economic, social and environmental progress”. Julien Grand-Clément explores a simple AI tool which, if integrated into hospitals' decision-making processes, could save countless lives. Grand-Clément is Assistant Professor in Information Systems and Operations Management at HEC Paris and a chairholder at the Hi! PARIS Center.

When hospitalized patients’ health unexpectedly takes a turn for the worse, they are transferred to the intensive care unit (ICU), where they are more likely to die or remain for a considerable time. But our research could provide life-saving information to doctors before the patients’ health deteriorates. This is thanks to a mathematical model commonly used in Artificial Intelligence (AI) whose use could bring down the mortality rate in ICUs by 20%.

What if hospital doctors had a reliable way to identify the patients whose health was most likely to take a turn for the worse, and then proactively send those patients to the ICU? With almost 6 million patients admitted annually to ICUs in the United States, the question is anything but trivial. Our research is based on nearly 300,000 hospitalizations in the Kaiser Permanente Northern California system. Kaiser is recognized as one of America's top hospital systems for treating illnesses like leukemia and heart attacks.

Its data indicated that by proactively transferring patients to ICUs, hospitals reduce mortality risk and length of stay. But there is a risk of going too far: other research indicates that if doctors transfer too many patients to these units, they become congested and survival rates suffer. Should the ICUs be filled to capacity, some patients who need ICU care might not be able to obtain it.

Our research suggests that for a proactive ICU transfer policy to work, three conditions must be met: arrival rates must be recalibrated; ICU nurse staffing should be reviewed; and decisions about the transfer of patients must be gauged according to their recovery rate. If these metrics are not aligned, doctors might not make the right transfer decisions.

Creation of a Simulation Model for Hospitals

One of our key collaborators for this research, Gabriel Escobar, served as the regional director for hospital operations research at Kaiser Permanente Northern California. Kaiser provided us with unprecedented and anonymized hospitalization data on patients from 21 Kaiser Permanente facilities. Thanks to this information, we built a simulation model which mimics how an actual hospital works. This includes generating arrival and departure rates, the evolution of the patients' condition, and every interaction they have with the system. With such micro-modeling, we can track the simulated patient as if they were a real hospitalized patient. This enabled us to test different scenarios of arrivals and transfer policies.

To build our simulation model, we used the mathematical framework called the Markov Decision Process (MDP), a common tool in AI. This is a model for sequential decisions over time, allowing users to inspect a sequence of decisions and analyze how one choice influences the next. A key property is that the system's next state depends only on its current state and the decision taken, not on the full history. We then designed an optimization method, based on a machine learning model, to estimate the impact of various transfer policies.
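To make this concrete, here is a minimal sketch of how a proactive-transfer problem can be cast as an MDP and solved by value iteration. All states, probabilities and rewards below are invented for illustration; they are not the ones estimated in the study.

```python
import numpy as np

# Toy MDP for proactive ICU transfers (all numbers are illustrative).
# States: 0 = stable on ward, 1 = deteriorating, 2 = in ICU, 3 = discharged.
# Actions: 0 = keep on ward, 1 = proactively transfer to ICU.
N_STATES, N_ACTIONS = 4, 2

# P[a, s, s'] = probability of moving from state s to s' under action a.
P = np.zeros((N_ACTIONS, N_STATES, N_STATES))
P[0] = [[0.90, 0.08, 0.00, 0.02],  # keep: stable patients mostly stay stable
        [0.10, 0.60, 0.25, 0.05],  # keep: deteriorating patients may crash into ICU
        [0.00, 0.00, 0.80, 0.20],
        [0.00, 0.00, 0.00, 1.00]]
P[1] = [[0.00, 0.00, 0.98, 0.02],  # transfer: patient moves to the ICU
        [0.00, 0.00, 0.95, 0.05],
        [0.00, 0.00, 0.80, 0.20],
        [0.00, 0.00, 0.00, 1.00]]

# R[a, s]: immediate reward; transfers carry a cost reflecting ICU congestion.
R = np.array([[ 0.0, -1.0, -0.5, 1.0],
              [-0.8, -0.3, -0.5, 1.0]])

gamma = 0.95  # discount factor

# Value iteration: V(s) <- max_a [ R(a, s) + gamma * sum_s' P(a, s, s') V(s') ]
V = np.zeros(N_STATES)
for _ in range(500):
    Q = R + gamma * (P @ V)  # Q[a, s], via numpy broadcasting
    V = Q.max(axis=0)

print("state values:", np.round(V, 2))
print("best action per state:", Q.argmax(axis=0))  # 1 = transfer
```

Running the sketch produces, for each patient state, the transfer decision that maximizes long-run value, which is the kind of output such a model feeds into a transfer policy.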

When we ran the model, we discovered that relatively small adjustments can have an impact on the mortality of the overall patient population. Given a certain way of transferring patients, we saw the estimated mortality rate could fall by 20%!

AI Won’t Replace Human Decision-making in Hospitals

The question remains: should humans still be involved in ICU transfers, or should we rely solely on algorithms? We believe the two approaches are complementary. Humans must have the final word, but their decisions could usefully be assisted by the recommendations of an algorithm. Our research encourages the implementation of simple transfer decision rules based on common health metrics that summarize patients' conditions, combined with clear thresholds. This type of threshold policy is extremely simple to deploy and readily interpretable. Using micro-modeling to understand a complicated enterprise and developing algorithms to assist decision-making can, and should, lead to better outcomes.
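As a sketch of what such a threshold rule might look like in practice (the metric names and cutoff values here are hypothetical, not the ones validated in the study):

```python
def recommend_transfer(severity_score: float, icu_occupancy: float,
                       severity_threshold: float = 0.7,
                       occupancy_cap: float = 0.9) -> bool:
    """Hypothetical threshold rule: flag a patient for proactive ICU transfer
    when their risk score crosses a cutoff, unless the ICU is near capacity.
    The clinician always keeps the final word on the actual transfer."""
    return severity_score >= severity_threshold and icu_occupancy < occupancy_cap

# Example: a high-risk patient while the ICU is three-quarters full.
print(recommend_transfer(severity_score=0.82, icu_occupancy=0.75))  # True
```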

The original research article, “Robustness of Proactive Intensive Care Unit Transfer,” was co-authored with Carri W. Chan and Vineet Goyal (Columbia University) and Gabriel Escobar (research scientist at the Kaiser Permanente Northern California Division of Research and director of the Division of Research Systems Research Initiative). It was published in January 2023 in Operations Research.
Part 2

Balancing AI Technology with Decentralized Innovation


Pillar 2 of the five pillars of the 3rd AI Action Summit: "The Future of Work"


The AI Summit's second theme, on labor, focuses on promoting the “socially responsible use of artificial intelligence through sustained social dialogue.” To achieve this, the organizers have already begun to create a network of observatories, connecting international, national, and private-sector bodies to “improve sharing of knowledge”. They also aim to deploy AI “in the service of productivity, skills development and well-being at work”. In a just-published case study, HEC researcher Aluna Wang examines VINCI's AI transformation as a possible blueprint for other multinationals hoping to seamlessly integrate new generative AI technology into the workplace.


How can major companies develop AI skills among their employees while maintaining a decentralized structure? VINCI, a world leader in concessions, energy, and construction, offers compelling insights through its innovation hub, Leonard, and its groundbreaking AI projects. We analyze how this French multinational addresses the challenges AI poses to innovation, decentralization, and the upskilling of its employees.

Our case study, published by Harvard Business School Publishing, explores Leonard's innovation initiatives, including one of VINCI's most ambitious AI projects, used in the building of The Link, the iconic new skyscraper in Paris La Défense. The project features SprinkIA, an AI-driven generative design tool that enhances sprinkler system calibration in blueprints. But its development raises important challenges and questions about deploying innovative technologies across a highly decentralized organization. Does VINCI's decentralization allow the company to quickly deploy innovative new AI technologies across the group? Is the company's business model optimal for encouraging further innovation? Indeed, how can Leonard promote innovation in such a large firm?

VINCI's decentralized structure, comprising thousands of business units across multiple divisions, aims to encourage entrepreneurial innovation. Initiated in July 2017, Leonard serves as the group's platform for innovation and foresight, facilitating creation and experimentation in day-to-day activities. Bruno Daunay, AI Lead at Leonard, and François Lemaistre, Brand Director of Axians and sponsor of the group’s AI effort, have been working to boost awareness and adoption of new technologies through a carefully structured approach.

Leonard’s six-month framework combines rigorous technical training with practical application. "People were really curious and motivated because they loved their work, but they were sick of doing it inefficiently and wanted to change it," explains Daunay. Leonard has built a robust innovation ecosystem, including partnerships with academic institutions like the Hi! PARIS Center and the establishment of Centers of Excellence. The evolution of DIANE, from a single innovation project to a Center of Excellence for generative design, exemplifies this approach.

Balancing Core Principles with Technological Innovation

This innovation strategy aligns with VINCI's commitment to decentralization. As Xavier Huillard, VINCI's CEO, notes, "We are probably the most decentralized company in France. Maybe even in Europe!" His philosophy of "inverting the pyramid" prioritizes operational rather than management levels. This approach has yielded significant results: SprinkIA, for instance, can optimize a sprinkler system blueprint in just 11 minutes, a process that previously took over five days.

Looking ahead, Lemaistre offers an optimistic perspective: "For five years now, we've been explaining what AI is and what it isn't - it's not the robots taking over mankind, it's about improving everyone's jobs. We don't see AI as disruptive in our business model. We think it'll improve it. We want to keep opening doors we've never opened before." As organizations across the globe grapple with AI integration, VINCI's approach through Leonard demonstrates how large companies can foster innovation while maintaining their core organizational principles.

Aluna Wang is Assistant Professor of Accounting at HEC Paris and a chairholder at the Hi! PARIS Center. Her case study, “Building Innovation at VINCI,” is co-written with Harvard Business School Professor Dennis Campbell and HBS Research Associate Ana Carlota Moniz. Learn more in "The AI Odyssey: History, Regulations, and Impact – A European Perspective," a MOOC from the Hi! PARIS Center featuring Aluna Wang's interviews with VINCI's François Lemaistre and Bruno Daunay, who share their perspectives on the integration of AI in their operations.
Part 3

Is AI Threatening the Creative Industries?


Pillar 3 of the five pillars of the 3rd AI Action Summit: "Innovation and Culture"

This theme aims to “boost technological excellence in the service of innovation and artistic creation”. In this spirit, the Summit will invite participants to discuss the sharing of “value and systems likely to facilitate access to the resources that are critical for the development of AI”. However, “while AI has already made it possible to speed up the creation and distribution of cultural goods, it sometimes calls into question the economic model for creation and the remuneration of intellectual property”. HEC Associate Professor Thomas Paris explores the uneasy relationship between the creative industries and artificial intelligence.


Is the rise of generative AI tools like ChatGPT and Midjourney breaching the last bastion of human exclusivity in economic activity: creation? The exponential use of these tools raises fundamental issues around the quality of art itself, the legal framework for content producers, and the visibility of cultural creators.

Yet, we believe flexible regulation could enhance rather than threaten human creativity. We all remember the unprecedented five-month strike in 2023 by thousands of Hollywood writers demanding protection against generative AI tools. Their Guild won important concessions concerning accreditation, complementarity and general usage. But, parallel to such conflicts and concerns, generative tools continue to be adopted by the creative industries for their efficiency and ability to streamline repetitive tasks. In the book industry, for instance, platforms like Genario offer writing assistance, providing narrative structures and trend analyses to guide authors. At the same time, tools like Babelio and Gleeph enable better matching of supply and demand through personalized recommendations based on reader preferences.

AI and the Law

Beyond redefining art and copyright, AI's deployment in the creative industries raises the question of what legal framework should govern relationships between content producers and AI operators. Furthermore, what impact could it have on creator visibility and the quality of creation? AI operators, like digital platforms, derive value from exploiting large volumes of content rather than specific pieces. This creates conflicts around value sharing, as seen between music streaming platforms and rights holders, or search engines and press operators. While fragile agreements are emerging, they highlight the need for a new general framework for these novel relationships.

AI is also increasing content abundance by lowering barriers to entry for creation. Amazon, for example, had to limit authors to uploading only three books … per day! This amplification of available content makes it increasingly costly to achieve the visibility necessary for a new work or author to emerge. This could lead to even greater precariousness for small actors or new, genuine creators, who are often more likely to bring innovation.

The impact of AI on creators and creation is twofold. On the one hand, it provides new tools and intensifies a hyper-competitive context. On the other, it accentuates polarization between a few prominent creators and countless others struggling for visibility. In this environment, phenomena like BookTok on TikTok or personalized algorithmic recommendations could become crucial tools for navigating this abundance, reinforcing dependence on digital prescribers and exacerbating inequalities between creators.

Adaptive Regulation

While AI presents such challenges, it also offers opportunities. In the book industry, AI-generated image banks for book covers and voice synthesis for audiobooks simplify and accelerate production processes. However, again, this ease of production further intensifies competition and the struggle for visibility. 

The key to harnessing both AI's potential and protecting creators lies in adaptive regulation. By establishing a balanced framework, it's possible to foster innovation while respecting creators' rights and ensuring fair compensation for their work. As we grapple with this new landscape, the focus should be on creating an ecosystem where AI enhances rather than replaces human creativity, and where the benefits of technological advancements are equitably distributed across the creative industries.

Reference: “Quelle politique pour les industries culturelles à l'ère du numérique ?” (“What Policy for the Cultural Industries in the Digital Age?”), edited by Thomas Paris, Alain Busson, David Piovesan. HEC Paris Associate Professor Thomas Paris is the scientific director for the HEC Master in Media, Art & Creation.
Part 4

The Cost of Fake News in the Context of Meta’s Shift Away from Fact-Checking


The AI Action Summit's fourth pillar is “Trust in Artificial Intelligence”. The organizers acknowledge the very real risks linked to the integration of AI systems. Participants will discuss the impact of AI on ethics, the fight against discrimination, the “malicious use” of these technologies, and security commitments by the historic AI pioneers. These debates will address major challenges of AI in terms of information manipulation, challenges which HEC Professor David Restrepo Amariles has been exploring for several years. Here he reflects on the latest one: the decision by Facebook founder Mark Zuckerberg to remove fact-checkers from Meta. This is an adapted version of an op-ed first published by Forbes in January 2025. Restrepo Amariles also discusses the role of regulations designed to ensure that humans do not lose control over automated systems and that AI systems do not diminish humanity and human agency.


In January 2025, Meta announced a controversial shift in its approach to misinformation, replacing independent fact-checkers on Facebook and Instagram with a Community Notes-style system. As the company framed it, this move is designed to support “more speech and fewer mistakes” by leveraging user contributions to contextualize misleading posts. Such claims echo those made by X, which implemented similar policies following Elon Musk's takeover. But our research on that company underlines how speed undermines such policies: falsehoods spread considerably faster than corrections.

Increasingly, we have seen how quickly fake news can upend financial markets and corporate reputations. In 2023, for example, a fabricated image shared on X showing a fake explosion near the Pentagon rattled the U.S. stock market, causing a brief but impactful downturn. Then there was the notorious case of the fake tweet promising free insulin from Eli Lilly in November 2022, which cost the pharmaceutical multinational $22 billion in market value. This isn't a new phenomenon: as far back as 2013, a fake report of explosions at the White House caused the S&P 500 to lose $130 billion in market capitalization within minutes.

Research on X’s Community Notes and their Limits

These examples demonstrate that fake news is more than an annoyance: it presents a significant social, economic, political and reputational threat. This is one of several conclusions from our years of research, built on a database of around 240,000 notes from X's (formerly Twitter) Community Notes program, a system where users collaboratively provide context to potentially misleading posts. We sought to analyze the causal influence of appending contextual information to potentially misleading posts on their dissemination. While the program offers valuable insights into combating misinformation, our findings reveal critical limitations.

In this study, we found that Community Notes double the probability of a tweet being deleted by its creator. However, as we point out, the note often arrives too late: around 50% of retweets happen within the first six hours of a tweet's life. While Community Notes reduce retweets by more than 60% on average, the median note takes over 18 hours to be published, too slow to combat the initial, viral spread of misinformation. This confirms a 2018 MIT study which showed that falsehoods can travel “10 to 20 times faster than facts”. It also highlights a critical challenge: while community-driven fact-checking is a valuable tool, its current design and speed are insufficient to mitigate the rapid dissemination of fake news. And the latter is only getting faster.
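A back-of-the-envelope calculation illustrates why the 18-hour lag matters so much. If, purely as a modeling assumption on our part, retweet times follow an exponential distribution calibrated to the observation that about half of retweets occur within six hours, then most of a tweet's spread is over before the median note appears:

```python
import math

# Assumption (ours, not the study's): retweet arrival times are
# exponentially distributed with a 6-hour half-life, matching
# "around 50% of retweets happen within the first six hours".
half_life_hours = 6.0
rate = math.log(2) / half_life_hours

median_note_delay_hours = 18.0  # median time before a Community Note is published
share_before_note = 1 - math.exp(-rate * median_note_delay_hours)

print(f"{share_before_note:.1%} of retweets happen before the median note lands")
# -> 87.5%; the 60% retweet reduction can only act on the remaining tail.
```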

The Way Forward: Leadership in the Age of Misinformation

Meta’s decision to replace independent fact-checkers with a Community Notes-style system on Instagram and Facebook highlights the urgency of addressing misinformation at scale. Its announcement sparked a wave of criticism, including an open letter to Mark Zuckerberg from the International Fact-Checking Network (IFCN) which warned of the increased risks of misinformation and its consequences for businesses and society. The letter underscored that this approach undermines accountability and could exacerbate the rapid spread of fake news, leaving businesses particularly vulnerable.

As our research demonstrates, these systems need to evolve to match the speed of misinformation's spread. We believe that integrating AI-driven tools could significantly enhance human efforts, enabling faster detection and flagging of potentially harmful content. For example, machine learning models trained to identify patterns of misinformation can serve as an early warning system, while large language models (LLMs) can complement these efforts by analyzing the linguistic and thematic patterns of viral posts to provide real-time contextualization. This dual approach allows platforms and companies to respond to misinformation more effectively and in near real-time. Moreover, fostering partnerships between social media platforms, governments, and private entities could lead to more unified standards for combating fake news.
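As a minimal sketch of what such an early-warning model could look like (the training examples below are stand-ins; a production system would be trained on large labeled misinformation corpora and paired with human review):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples standing in for a real misinformation corpus.
posts = [
    "BREAKING: explosion reported near government building, markets crashing",
    "Official quarterly earnings report released by the company today",
    "Miracle drug now FREE for everyone, share before they delete this",
    "Central bank keeps interest rates unchanged, analysts say",
]
labels = [1, 0, 1, 0]  # 1 = likely misinformation, 0 = likely benign

# TF-IDF features plus logistic regression: a simple pattern-based classifier
# of the kind that could serve as the "early warning system" described above.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = ["URGENT: photo shows explosion near the Pentagon, sell everything"]
risk = model.predict_proba(new_post)[0][1]
print(f"misinformation risk score: {risk:.2f}")  # route to human review if high
```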

Is Regulation Effective in Promoting a Human-Centric AI?

Parallel to our research on fact-checkers, we have been exploring regulatory initiatives, such as Europe's AI Act and the AI Liability Directive, which aim to promote a human-centric approach to AI. Our recent study explores a dual approach to tackling excessive regulation and ineffective policy in the field of AI. We suggest that for AI regulation to promote a human-centric approach, human rights should be the main regulatory benchmark for assessing AI systems and balancing the purposes they serve in the market. A first practical step is to require an explicit proportionality test. This test acknowledges that AI systems may impact human rights, such as privacy and non-discrimination, and requires developers to explicitly disclose the trade-offs between the optimisation strategies designed to achieve the business objectives pursued by the AI systems and their potential negative impact on human rights. Moreover, the proportionality test would also help to make explicit the trade-offs between human rights themselves, such as in cases where content moderation is performed by algorithms. These algorithms, by determining whether or not to moderate potentially offensive messages, ultimately balance the rights to freedom of expression and non-discrimination.
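One way to operationalize such a test would be a structured disclosure that developers complete for each system. The record below is our illustration of what that could contain; it is not a format prescribed by the AI Act or by our study:

```python
from dataclasses import dataclass, field

@dataclass
class ProportionalityDisclosure:
    """Illustrative (hypothetical) record of the trade-offs an explicit
    proportionality test would require developers to disclose."""
    system: str
    business_objective: str
    optimisation_strategy: str
    rights_potentially_affected: list[str] = field(default_factory=list)
    rights_vs_rights_tradeoffs: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

# Example: an algorithmic content-moderation system.
disclosure = ProportionalityDisclosure(
    system="content moderation classifier",
    business_objective="reduce user exposure to offensive posts",
    optimisation_strategy="maximize recall on posts flagged as offensive",
    rights_potentially_affected=["freedom of expression", "non-discrimination"],
    rights_vs_rights_tradeoffs=[
        "over-removal limits expression; under-removal harms targeted groups"
    ],
    mitigations=["human review of borderline cases", "periodic bias audits"],
)
print(disclosure)
```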

Second, we suggest a co-evolutionary, life-cycle approach that can help ensure accountability beyond the design stage. We propose to achieve this through meaningful human control and human-AI interaction across the entire lifecycle of the system. This allows decision-makers to constantly update and adapt AI systems to answer the challenges they identify during each phase.

Staying Ahead of the Curve

In today’s fast-moving digital landscape, trust has become as valuable as revenue. The rapid spread of misinformation, amplified by market-driven platforms, presents both a risk and an opportunity for businesses and governments alike. Through research and real-world examples, we see that those who proactively address these challenges can foster both resilience and long-term integrity.

The way forward requires a blend of technological innovation and strategic collaboration. Businesses must integrate AI-driven tools to detect and mitigate misinformation faster than it can spread. However, technology alone is not enough. Leadership is also crucial. By adopting regulatory frameworks and implementing proportionality tests, organizations can ensure that human rights remain central to their AI strategies. This regulatory approach helps make explicit the trade-offs between business objectives and their potential impact on rights such as privacy and non-discrimination. Furthermore, continuous human oversight across the entire AI lifecycle ensures that systems can evolve in response to emerging risks and ethical concerns.

Businesses that stay ahead of the curve by investing in these strategies not only protect their reputations but also contribute to a more informed and resilient society. In doing so, they turn today’s crises into opportunities for innovation and leadership, shaping a future where trust and accountability are the cornerstones of success.

David Restrepo Amariles is HEC Associate Professor of Artificial Intelligence and Law, Hi! PARIS Fellow, and Worldline Chair Professor. His co-authors are Thomas Renault, Assistant Professor of Economics at the University Paris I, and Aurore Troussel Clément, lawyer and HEC PhD candidate in AI and law.
