Introduction to French AI Policy
post by Lucie Philippon (lucie-philippon) · 2024-07-04T03:39:45.273Z
This post was written as part of the AI Governance Fundamentals course by BlueDot. I thank Charles Beasley and the students from my cohort for their feedback and encouragement.
Disclaimer: The French policy landscape is in rapid flux after President Macron called a snap parliamentary election, with voting rounds on 30 June and 7 July. The situation is still unfolding, and the state of French AI policy may be significantly altered.
At various AI governance events, I noticed that most people had a very unclear picture of what was happening in AI policy in France, why the French government seemed dismissive of potential AI risks, and what that would mean for the next AI Safety Summit in France.
The post below is my attempt at giving a quick intro to the key stakeholders of AI policy in France, their positions and how they influence international AI policy efforts.
My knowledge comes from hanging around AI safety circles in France for a year and a half, and from working with the French government on AI governance since January. I’m therefore confident in the facts, but less so in the interpretations, as I’m no policy expert myself.
Generative Artificial Intelligence Committee
The first major development in AI policy in France was the creation of a committee advising the government on generative AI questions. The committee was created in September 2023 by Élisabeth Borne, then Prime Minister.[1]
The goals of the committee were:
- Strengthening AI training programs to develop more AI talent in France
- Investing in AI to promote French innovation on the international stage
- Defining appropriate regulation for different sectors to protect against abuses.
This committee was composed of notable academics and industry figures in the French AI field. Its most notable members:
Co-chairs:
- Philippe Aghion, an influential French economist specializing in innovation.
- He thinks AI will give a major productivity boost and that the EU should invest in major research projects on AI and disruptive technologies.
- Anne Bouverot, chair of the board of directors of the École normale supérieure (ENS), one of the most prestigious scientific institutions in France. She was later appointed lead organizer of the next AI summit.
- She is mainly concerned about the risks of bias and discrimination from AI systems, as well as risks of concentration of power.
Notable members:
- Joëlle Barral, scientific director at Google
- Nozha Boujemaa, co-chair of the OECD AI expert group and Digital Trust Officer at Decathlon
- Yann LeCun, VP and Chief AI Scientist at Meta, generative AI expert
- He is a notable skeptic of catastrophic risks from AI
- Arthur Mensch, co-founder and CEO of Mistral AI
- He is a notable skeptic of catastrophic risks from AI
- Cédric O, consultant, former Secretary of State for Digital Affairs
- He invested in Mistral and worked to loosen the EU AI Act’s rules on general-purpose AI systems.
- Martin Tisné, board member of Partnership on AI
- He will lead the “AI for good” track of the next Summit.
See the full list of members in the announcement: Comité de l'intelligence artificielle générative.
“AI: Our Ambition for France”
In March 2024, the committee published a report making 25 recommendations to the French government regarding AI. An official English version is available.
The report makes recommendations on how to make France competitive and a leader in AI, by investing in training, R&D and compute.
This report does not anticipate future developments: it treats the current capabilities of AI as a fixed point to work from, gives no thought to the future capabilities of AI models, and is overly dismissive of AI risks.
Some highlights from the report:
- It dismisses most risks from AI, including catastrophic risks, as overblown, comparing fears about AI to earlier exaggerated fears during the development of electricity and trains.
- It takes a hard pro-open-source stance. The report dismisses the risks of open-sourcing by arguing that models capable of amplifying disinformation are already open source, so releasing more adds no new risk, and that current models don’t increase biorisk, so there is no need to worry about it.
- It recommends that France lead international AI governance, and advocates for an international AI organization.
- The main fear presented in the report is that of lagging behind the US and becoming irrelevant. “It’s a race against time,” it says.
The AI Action Summit
In November 2023, the UK organized the inaugural AI Safety Summit. At the end of the Summit, France announced it would host the next one. The dates have recently been confirmed: 10–11 February 2025. The main organizer is Anne Bouverot, co-chair of the Generative Artificial Intelligence Committee mentioned above.
A major update is that the name was changed to “AI Action Summit”. The summit will now focus on five thematic areas, each led by an “Envoy to the Summit”:
- AI for good: Martin Tisné, member of the Generative Artificial Intelligence Committee.
- AI Ecosystem: Roxanne Varza, Director of Station F, the world’s largest startup incubator.
- AI security and safety: Guillaume Poupard, former director general of the French National Agency for the Security of Information Systems (ANSSI).
- AI global governance: Henri Verdier, French ambassador for digital affairs since 2018, known for his pro-open-source stance.
- AI impact on the workforce: Sana de Courcelles, Director and Senior Advisor for Special Initiatives at the International Labour Organization.
None of these organizers seems to think AI could pose a catastrophic risk in the coming years; some have even taken public stances against concerns about catastrophic risks. This leads me to fear that the Summit might lose a large part of its AI safety focus if efforts are not made to get safety back on the agenda.
Organizations working on AI policy and influencing it
Various companies, non-profits, and governmental agencies influence the direction of AI policy in France. I list below only the most influential and relevant organizations.
National AI Safety Institute
The French government has decided to create a National Center for AI Evaluation, a joint organization of Inria, the public computer science research institute, and LNE, the French national metrology and testing laboratory.[2]
This organization will represent France in the network of safety institutes announced at the AI Seoul Summit.
EDIT: In fact, France did not take part in the Seoul summit’s announcement of collaboration between national AI safety institutes. However, it announced the creation of the Center for AI Evaluation at Vivatech, which took place at the same time.
Think-tanks
There are not many think-tanks influencing AI policy in France. The leading one is Institut Montaigne, one of the most influential French think-tanks, which has a division working on AI governance.
The Future Society, a US- and Europe-based AI governance think-tank, also has some influence in France, but France is not its priority.
Leading AI companies in France
Many AI companies are popping up in France. I list below the companies that have, or could have, international reach and significant policy influence.
- Mistral AI: Wants to be the OpenAI of Europe; trains and releases both open and closed models. It has a lot of policy influence and does not believe in the potential for catastrophic risks from AI.
Mistral lobbied for the removal of rules on general-purpose AI systems from the EU AI Act, and has been criticized for its partnership with Microsoft[3].
- LightOn: Develops models for large companies, now focusing on making more agentic models.
- Kyutai: Non-profit AI research center financed by Eric Schmidt, Xavier Niel, and Rodolphe Saadé. What they work on is still unclear, but given their funding, they could become big.
- Giskard: An AI evaluation startup focused on removing bias and ensuring compliance.
- PRISM Eval: New startup in AI evaluation, focusing on cognitive evaluation.
- Helsing & Preligens: Military AI companies that influence the government’s position on military uses of AI.
France is also home to the AI research centers of international tech companies:
- Google DeepMind: the Paris location was one of the main offices of Google Brain before the merger with DeepMind.
- Meta FAIR research lab, directed by Yann LeCun.
- OpenAI opened an office in France, mainly focused on policy.
AI Safety and x-risk reduction focused orgs
France has a small AI safety community (~20 people). The only organization working on AI safety with a strong focus on AI risk reduction is CeSIA, a new French center for AI safety, which works on raising awareness of AI risks among both the general public and policy circles, and on developing technical benchmarks for AI risks. It is an offshoot of EffiSciences, an organization dedicated to impactful research and to reducing catastrophic risks.
Conclusion
As said in the intro, the political situation in France is in flux, and the key stakeholders of AI policy may change soon. If the far-right National Rally party comes to power, its main AI advisor will probably be Laurent Alexandre, a former doctor, transhumanist, and accelerationist, who would likely advocate for more investment, more acceleration, and less focus on safety. The organization of the Summit and its overall direction may change, but I expect most of the existing stakeholders to stay influential.
Overall, the position of the French government is influenced by actors skeptical of AI risks, who steer both national and international policy towards acceleration and innovation.
Given that such risk-skeptical actors also exist in other countries, my theory for why the French government ended up less focused on AI risks than the UK’s is the lack of prominent actors raising the alarm about the risks. I don’t think the French government is impervious to AI safety arguments; I just think that barely anybody has tried presenting the AI safety side of the debate.
- ^
Generative AI Committee announcement: https://www.info.gouv.fr/communique/comite-de-lintelligence-artificielle
- ^
Info on the creation of the AI Evaluation Center: https://www.linkedin.com/posts/milo-rignell-%F0%9F%94%B8-b84064a3_directeur-du-centre-d%C3%A9valuation-de-lia-activity-7196532182218141698-1T1U?utm_source=share&utm_medium=member_desktop
- ^
Mistral x Microsoft deal: https://www.reuters.com/technology/microsofts-deal-with-mistral-ai-faces-eu-scrutiny-2024-02-27/
12 comments
comment by Neil (neil-warren) · 2024-07-07T23:27:18.631Z · LW(p) · GW(p)
Update everyone: the hard right did not end up gaining a parliamentary majority, which, as Lucie mentioned, would have been the worst outcome wrt AI safety.
Looking ahead, it seems that France will end up fairly confused and gridlocked as it is forced to deal with an evenly split parliament by playing German-style coalition negotiation games. Not sure what that means for AI, except that unilateral action is harder.
For reference, I'm an ex-high school student who just got to vote for the first 3 times in his life because of French political turmoil (✨exciting) and am working these days at PauseAI France, a (soon to be official) governance non-profit aiming to, well—
Anyway, as an org we're writing a counter to the AI committee mentioned in this post, so that's what's up these days in the French AI safety governance circles.
↑ comment by Neil (neil-warren) · 2024-07-14T12:39:29.789Z · LW(p) · GW(p)
Fun fact: it's thanks to Lucie that I ended up stumbling onto PauseAI in the first place. Small world + thanks Lucie.
comment by Lech Mazur (lechmazur) · 2024-07-06T08:39:26.693Z · LW(p) · GW(p)
Hugging Face should also be mentioned. They're a French-American company: they maintain the transformers library and host models and datasets.
↑ comment by Lucie Philippon (lucie-philippon) · 2024-07-06T21:52:03.961Z · LW(p) · GW(p)
The founders of Hugging Face are French, yes, but I'm not sure how invested they are in French AI policy. I mostly haven't heard of them taking any specific actions or having anyone with particular influence there.
comment by Chris_Leong · 2024-07-05T03:22:00.529Z · LW(p) · GW(p)
Do you have any more information on what the National Center for AI Evaluation will be doing?
↑ comment by Lucie Philippon (lucie-philippon) · 2024-07-06T22:54:45.032Z · LW(p) · GW(p)
I don't have any more information on this. DM me if you want me to check whether I can find more info.
comment by Review Bot · 2024-07-11T22:58:48.161Z · LW(p) · GW(p)
The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
comment by Dzoldzaya · 2024-07-05T13:39:30.887Z · LW(p) · GW(p)
Thanks for this post, great to have this overview!
I can't put my finger on whether Laurent Alexandre is an accelerationist - I don't know his work too well, but he seems to acknowledge at least some AI-risk arguments.
This is a quote (auto-translated) from his new book:
"The political dystopia described by Harari, predicting that the world of tomorrow would be divided into "gods and useless people," could unfortunately become a social reality.
Regulating a force as monumental as ChatGPT and its successors would require international cooperation. However, the world is at war. Each geopolitical bloc will use the new AIs to manipulate the adversary and develop destructive or manipulative cyber weapons."
↑ comment by TeaTieAndHat (Augustin Portier) · 2024-07-05T17:03:58.770Z · LW(p) · GW(p)
I don’t know Alexandre’s ideas very well, but here’s what I understand: you know how people who don’t like rationalists say they’re just using a veneer of rationality to hide right-wing libertarian beliefs? Well, that’s exactly what Alexandre in fact very openly does, complete with some very embarrassing opinions on the differences in IQ between different parts of the world, which strengthen his position as quite an unsavoury character. (The potential reputational harms of having a caricature of a rationalist be a prominent political actor are left as an exercise to the reader...)
Wikipedia tells me that he likes Bostrom, however, which probably makes him genuinely more aware of AI-related issues than the vast majority of French politicians. However, he also doesn’t expect AGI before 2100, so until then he’s clearly focused on making sure we can work with AI as much as possible, making sure we can learn to use those superintelligence thingies before they’re strong enough to take our jobs and destroy our democracies, etc… and he’s very insistent that this is an important thing to be doing: if you have shorter timelines than he does (and, like, you do!), then he’s definitely something of an accelerationist.
↑ comment by TeaTieAndHat (Augustin Portier) · 2024-07-05T18:12:59.520Z · LW(p) · GW(p)
Well, I did the thing where I actually go find this guy’s main book (2017, so not his latest) on archive.org and read it. The style is weird, with a lot of "she says this, Google says AGI will be fine, some other guy says it won’t", and I’m not 100% confident what Alexandre himself believes as far as the details are concerned.
But it seems really obvious that his view is at least something like "AI will be super-duper powerful, the idea that perhaps we might not build it does not cross my mind, so we will have AGI eventually, then we’d better have it before the other guys, and make ourselves really smart through eugenics so we’re not left too far behind when the AI comes". "Enter the Matrix to avoid being swallowed by it", as he puts it (this is a quote).
Judging by his tone, he seems to simply not consider that perhaps we could deliberately avoid building AGI, and to be unaware of most of the finer details of discussions about AI and safety (he also says that telling AI to obey us will result in the AI seeing us as colonizers and revolting against us, and so we should pre-emptively avoid such "anti-silicon racism". Which is an oversimplification of, like, so many different things.), but some sentences are more like "humanity will have to determine the maximum speed of AI deployment [and it’ll be super hard/impossible because people will want to get the benefits of more AI]". So, at least he’s aware of the problem. He doesn’t seem to have anything to say beyond that on AI safety issues, however.
Oh, and he quotes (and possibly endorses?) the idea that "duh, AI can’t be smarter than us, we have multiple intelligences, Gardner said so".
Overall, it’s much clearer to me why Lucie calls him an accelerationist, and it seems like a good characterization.
↑ comment by Dzoldzaya · 2024-07-05T23:16:37.878Z · LW(p) · GW(p)
Ah, interesting. His Guerre des intelligences does seem more obviously accelerationist, but his latest book gives slightly different vibes, so perhaps his views are changing.
But my sense is that he actually seems kind of typical of the polémiste tradition in French intellectual culture, where it's more about arguing with flair and elegance than developing consistent arguments. So it might be difficult to find a consistent ideology behind his combination of accelerationism, a somewhat pessimistic transhumanism, and moderate AI fear.
↑ comment by TeaTieAndHat (Augustin Portier) · 2024-07-06T03:46:29.356Z · LW(p) · GW(p)
Yes, he’s definitely a polemicist, and not a researcher or an expert. By training, he’s a urologist with an MBA or two, and most of what he writes definitely sounds very oversimplified/simplistic.