London Working Group for Short/Medium Term AI Risks
post by scronkfinkle (lesageethan) · 2025-04-08T17:32:02.457Z · LW · GW
Background & Motivation
I am a Responsible AI consultant working for a Big 4 consultancy in London. Over the past two years I have become increasingly concerned with organisations' approaches to AI governance and the lack of country-wide AI regulation in the UK.
As many of you are likely aware, Keir Starmer's approach to AI regulation is to leave it to existing regulators. For many reasons, I do not believe this approach is sufficient to mitigate the biggest AI risks our society faces.
I also believe there are few 'thought leaders' providing governments with precise, actionable statements of risks and mitigations. Many advisors are happy to talk about AI risk in vague terms, but will not pinpoint specifics.
Approach
My plan is to create a London AI Working Group for Short/Medium Term Risks. Over the next few months, this team will work together to compile a list of AI risks that are likely to manifest within the next three years. This timeline matters: forecasts over longer horizons are less likely to be accurate, and a three-year window creates urgency. For each of these risks, we will then collaborate to design mitigations.
One rough example from my list is as follows:
| Risk | Mitigation(s) |
| --- | --- |
| Photorealistic AI-generated images pose a threat to the UK's legal and democratic institutions, as false evidence may be produced. This may result in false prosecutions, democratic injustice, and so on. | Possibly: regulate AI providers to offer a reverse-image-search database of all AI-generated images made on their platform. Alternatively: enforce steganographic watermarks on AI-generated images. Problem: what to do about open-source models? |
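To make the first mitigation more concrete, here is a minimal sketch in Python of how a provider-side reverse-image-search registry might work, using a perceptual (average) hash so that mildly re-encoded or resized copies of an image still match. This is an illustration under my own assumptions, not a proposed standard: `registry`, `register_image`, and `lookup` are hypothetical names, and only Pillow's `Image` API is a real dependency.

```python
# A minimal sketch of a provider-side reverse-image-search registry for
# AI-generated images, using a perceptual (average) hash so that mildly
# re-encoded or resized copies still match. The registry, register_image,
# and lookup names are hypothetical illustrations; only Pillow is real.

from PIL import Image

HASH_SIZE = 8  # 8x8 grid -> 64-bit hash


def average_hash(img: Image.Image, hash_size: int = HASH_SIZE) -> int:
    """Downscale to grayscale, then set one bit per pixel brighter than the mean."""
    small = img.convert("L").resize((hash_size, hash_size), Image.LANCZOS)
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")


# Hypothetical in-memory "database": hash -> generation metadata.
# A real deployment would need a scalable nearest-neighbour index.
registry: dict[int, str] = {}


def register_image(img: Image.Image, metadata: str) -> None:
    """Called by the provider at generation time."""
    registry[average_hash(img)] = metadata


def lookup(img: Image.Image, max_distance: int = 5) -> str | None:
    """Return metadata for the closest registered hash within max_distance bits."""
    h = average_hash(img)
    best = min(registry.items(), key=lambda kv: hamming_distance(h, kv[0]), default=None)
    if best is not None and hamming_distance(h, best[0]) <= max_distance:
        return best[1]
    return None
```

A similar sketch could be written for the watermarking alternative, but watermarks that survive cropping and re-compression, and that cannot simply be stripped or omitted by open-source models, remain an open problem, which is exactly the kind of gap this group would aim to flag.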
We will then collate these into an open letter addressed to Keir Starmer, his Government, and anybody else who is open to our ideas. We will each sign and distribute this letter on whatever public platforms we have (e.g. LinkedIn, Twitter, EA-related forums). We may also create a government petition in parallel.
Suitability & Application
I want to keep this as open as possible, as I believe the more diverse our viewpoints, the better. However, I understand the group may become oversubscribed. Any candidate who meets the following criteria will definitely be invited to be involved:
- Is London-based. Although this is not a strict requirement, ideally members of this group will be London-based, as I believe meeting in person benefits collaboration.
- Has an AI or AI-safety related job. This is very flexible; anybody who even vaguely meets this criterion will be considered.
- Is available to contribute fairly consistently over the next few months. I would like this project to have a short turnaround, so ideally candidates will have free time to commit to it.
If you believe you meet these criteria, please comment expressing your interest and message me on this platform with some basic information about yourself (e.g. location, job title/company, availability). I will do my best to include everyone that I can in this project.