Why I think it's net harmful to do technical safety research at AGI labs
post by Remmelt (remmelt-ellen) · 2024-02-07T04:17:15.246Z · LW · GW · 24 comments
IMO it is harmful in expectation for a technical safety researcher to work at DeepMind, OpenAI or Anthropic.
Four reasons:
- Interactive complexity. The intractability of catching up – trying to invent general methods for AI corporations to somehow safely contain model interactions, while other engineers keep scaling the models' combinatorial complexity and outside connectivity.
- Safety-capability entanglements
- Commercialisation. Model inspection and alignment techniques can support engineering and productisation of more generally useful automated systems.
- Infohazards. Researching capability risks within an AI lab can inspire researchers hearing about your findings to build new capabilities.
- Shifts under competitive pressure
- DeepMind merged with Google Brain to do commercialisable research,
- OpenAI set up a company and partnered with Microsoft to release ChatGPT,
- Anthropic pitched to investors that they'd build a model 10 times more capable.
- If you are an employee at one of these corporations, higher-ups can instruct you to do R&D you never signed up to do.[1] You can comply, or get fired.
- Working long hours surrounded by others paid like you are, by a for-profit corp, is bad for maintaining your bearings and your epistemics on safety.[2]
- Safety-washing. Looking serious about 'safety' helps labs to recruit idealistic capability researchers, lobby politicians, and market to consumers.
- 'let's build AI to superalign AI'
- 'look, pretty visualisations of what's going on inside AI'
This is my view. I would want people to engage with the different arguments, and think for themselves what ensures that future AI systems are actually safe.
- ^ I heard via via that Google managers are forcing DeepMind safety researchers to shift some of their hours to developing Gemini for a product-ready launch. I cannot confirm whether that's correct.
- ^ For example, I was in contact with a safety researcher at an AGI lab who kindly offered to read my comprehensive outline [LW · GW] on the AGI control problem, to consider whether to share it with colleagues. They also said they were low on energy. They suggested I remind them later, and I did, but they never got back to me. It seems they're simply too busy.
24 comments
Comments sorted by top scores.
comment by Remmelt (remmelt-ellen) · 2024-02-07T07:30:16.307Z · LW(p) · GW(p)
Someone asked:
“Why would having [the roles] be filled by someone in EA be worse than a non EA person? can you spell this out for me? I.e. are EA people more capable? would it be better to have less competent people in such roles? not clear to me that would be better”
Here was my response:
So I was thinking about this.
Considering this only as an individual decision can be limiting. Even 80k staff have acknowledged that sometimes you need a community to make progress on something.
For similar reasons, protests work better if there are multiple people showing up.
What would happen if 80k and other EA organisations stopped recommending positions at AGI labs and honestly pointed out that work at these labs has turned out to be bad – because the labs have defected on their end of the bargain and don't care enough about getting safety right?
It would make an entire community of people become aware that we may need to actively start restricting this harmful work. Instead, what we’ve been seeing is EA orgs singing praise for AGI lab leaders for years, and 80k still recommending talented idealistic people join AGI labs. I’d rather see less talented sketchy-looking people join the AGI labs.
I would rather see everyone in AI Safety become clearer, to each other and to the public, that we are not condoning harmful automation races to the bottom. We're not condoning work at these AGI labs, and we are no longer giving our endorsement to it.
Replies from: remmelt-ellen
↑ comment by Remmelt (remmelt-ellen) · 2024-02-07T08:32:02.824Z · LW(p) · GW(p)
Their question was also responding to my concerns about how 80,000 Hours handpicks jobs at AGI labs.
Some of those advertised jobs don't even focus on safety – instead they look like policy lobbying roles or engineering support roles.
Nine months ago, I wrote this email to 80k staff:
Hi [x, y, z]
I noticed the job board lists positions at OpenAI and AnthropicAI under the AI Safety category:
Not sure whom to contact, so I wanted to share these concerns with each of you:
- Capability races
- OpenAI's push for scaling the size and applications of transformer-network-based models has led Google and others to copy and compete with them.
- Anthropic now seems on a similar trajectory.
- By default, these should not be organisations supported by AI safety advisers with a security mindset.
- No warning
- Job applicants are not warned of the risky past behaviour by OpenAI and Anthropic. Given that 80K markets to a broader audience, I would not be surprised if 50%+ are not much aware of the history. The subjective impression I get is that taking the role will help improve AI safety and policy work.
- At the top of the job board, positions are described as "Handpicked to help you tackle the world's most pressing problems with your career."
- If anything, "About this organisation" makes the companies look more comprehensively careful about safety than they have actually acted:
- "Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems."
- "OpenAI is an AI research and deployment company, with roles working on AI alignment & safety."
- It is understandable that people aspiring for AI safety & policy careers are not much aware, and therefore should be warned.
- However, 80K staff should be tracking the harmful race dynamics and careless deployment of systems by OpenAI, and now Anthropic.
- The departure of OpenAI's safety researchers was widely known, and we have all been tracking the hype cycles around ChatGPT.
- Various core people in the AI Safety community have mentioned concerns about Anthropic.
- Oliver Habryka mentions this as part of the reasoning for shutting down the LightCone offices [LW · GW]:
- I feel quite worried that the alignment plan of Anthropic currently basically boils down to "we are the good guys, and by doing a lot of capabilities research we will have a seat at the table when AI gets really dangerous, and then we will just be better/more-careful/more-reasonable than the existing people, and that will somehow make the difference between AI going well and going badly". That plan isn't inherently doomed, but man does it rely on trusting Anthropic's leadership, and I genuinely only have marginally better ability to distinguish the moral character of Anthropic's leadership from the moral character of FTX's leadership, and in the absence of that trust the only thing we are doing with Anthropic is adding another player to an AI arms race.
- More broadly, I think AI Alignment ideas/the EA community/the rationality community played a pretty substantial role in the founding of the three leading AGI labs (Deepmind, OpenAI, Anthropic), and man, I sure would feel better about a world where none of these would exist, though I also feel quite uncertain here. But it does sure feel like we had a quite large counterfactual effect on AI timelines.
- Not safety focussed
- Some jobs seem far removed from positions of researching (or advising on restricting) the increasing harms of AI-system scaling.
- For OpenAI:
- IT Engineer, Support: "The IT team supports Mac endpoints, their management tools, local network, and AV infrastructure"
- Software Engineer, Full-Stack: "to build and deploy powerful AI systems and products that can perform previously impossible tasks and achieve unprecedented levels of performance."
- For Anthropic:
- Technical Product Manager: "Rapidly prototype different products and services to learn how generative models can help solve real problems for users."
- Prompt Engineer and Librarian: "Discover, test, and document best practices for a wide range of tasks relevant to our customers."
- Align-washing
- Even if an accepted job applicant gets to be in a position of advising on and restricting harmful failure modes, how do you trade this off against:
- the potentially large marginal difference in skills of the top engineering candidates you sent OpenAI's and Anthropic's way, who are accepted to do work scaling their technology stack?
- how these R&D labs will use the alignment work to market the impression that they are safety-conscious, to:
- avoid harder safety mandates (eg. document their copyrights-infringing data, don't allow API developers to deploy spaghetti code all over the place)?
- attract other talented idealistic engineers and researchers?
- and so on?
I'm confused and, to be honest, shocked that these positions are still listed for R&D labs heavily invested in scaling AI system capabilities (without commensurate care for the exponential increase in the number of security gaps and ways to break our complex society and supporting ecosystem that opens up). I think this is pretty damn bad.
Preferably, we can handle this privately and not make it bigger. If you can come back on these concerns in the next two weeks, I would very much appreciate that.
If not, or not sufficiently addressed, I hope you understand that I will share these concerns in public.
Warm regards,
Remmelt
↑ comment by Remmelt (remmelt-ellen) · 2024-02-07T08:32:12.998Z · LW(p) · GW(p)
80k removed one of the positions I flagged: Software Engineer, Full-Stack, Human Data Team (reason given: it looked potentially more capabilities-focused than the original job posting that came into their system).
For the rest, little has changed:
- 80k still lists jobs that help AGI labs scale commercially,
- Jobs with similar names: research engineer product, prompt engineer, IT support, senior software engineer.
- 80k still describes these jobs as "Handpicked to help you tackle the world's most pressing problems with your career."
- 80k still describes Anthropic as "an Al safety and research company that's working to build reliable, interpretable, and steerable Al systems".
- 80k staff still have not accounted for the likelihood that >50% of their broad audience checking 80k's handpicked jobs are not much aware of the potential issues of working at an AGI lab.
- Readers there don't get informed. They get to click on the button 'VIEW JOB DETAILS', taking them straight to the job page. From there, they can apply and join the lab unprepared.
Two others in AI Safety also discovered the questionable job listings. They are disappointed in 80k.
Feeling exasperated about this. Thinking of putting out another post just to discuss this issue.
↑ comment by Benjamin Hilton (80000hours) · 2024-02-07T19:15:46.097Z · LW(p) · GW(p)
[x-posted from EA forum [EA(p) · GW(p)]]
Hi Remmelt,
Thanks for sharing your concerns, both with us privately and here on the forum. These are tricky issues and we expect people to disagree about how to weigh all the considerations — so it's really good to have open conversations about them.
Ultimately, we disagree with you that it's net harmful to do technical safety research at AGI labs. In fact, we think it can be the best career step for some of our readers to work in labs, even in non-safety roles. That’s the core reason why we list these roles on our job board.
We argue for this position extensively in my article on the topic (and we only list roles consistent with the considerations in that article).
Some other things we’ve published on this topic in the last year or so:
- A range of opinions from anonymous experts about the upsides and downsides of working on AI capabilities
- How policy roles in AI companies can be valuable for career capital and for direct impact (as well as the potential downsides)
- We recently released a podcast episode with Nathan Labenz on some of the controversy around OpenAI, including his concerns about some of their past safety practices, whether ChatGPT’s release was good or bad, and why its mission of developing AGI may be too risky.
Benjamin
Replies from: yanni, remmelt-ellen
↑ comment by yanni kyriacos (yanni) · 2024-02-08T00:45:21.745Z · LW(p) · GW(p)
Hi Benjamin - would be interested in your take on a couple of things:
1. By recommending people work at big labs, do you think this has a positive Halo Effect for the labs' brand? I.e. 80k is known for wanting people to do good in the world, so by recommending people invest their careers at a lab, then those positive brand associations get passed onto the lab (this is how most brand partnerships work).
2. If you think the answer to #1 is Yes, then do you believe the cost of this Halo Effect is outweighed by the benefit of having safety minded EA / Rationalist folk inside big labs?
↑ comment by Remmelt (remmelt-ellen) · 2024-02-08T04:25:45.786Z · LW(p) · GW(p)
[cross-posted replies from EA Forum [EA(p) · GW(p)]]
Ben, it is very questionable that 80k is promoting non-safety roles at AGI labs as 'career steps'.
Consider that your model of this situation may be wrong (account for model error).
- The upside is that you enabled some people to skill up and gain connections.
- The downside is that you are literally helping AGI labs to scale commercially (as well as indirectly supporting capability research).
I did read that compilation of advice, and responded to that in an email (16 May 2023):
"Dear [a],
People will drop in and look at job profiles without reading your other materials on the website. I'd suggest just writing a do-your-research cautionary line about OpenAI and Anthropic in the job descriptions itself.
Also suggest reviewing whether to trust advice on whether to take jobs that contribute to capability research.
- Particularly advice by nerdy researchers paid/funded by corporate tech.
- Particularly by computer-minded researchers who might not be aware of the limitations of developing complicated control mechanisms to contain complex machine-environment feedback loops.
Totally up to you of course.
Warm regards,
Remmelt"
We argue for this position extensively in my article on the topic
This is what the article says:
"All that said, we think it’s crucial to take an enormous amount of care before working at an organisation that might be a huge force for harm. Overall, it’s complicated to assess whether it’s good to work at a leading AI lab — and it’ll vary from person to person, and role to role."
So you are saying that people are making a decision about working for an AGI lab that might be (or actually is) a huge force for harm. And that whether it's good (or bad) to work at an AGI lab depends on the person – ie. people need to figure this out for themselves.
Yet you are openly advertising various jobs at AGI labs on the job board. People are clicking through and applying. Do you know how many read your article beforehand?
~ ~ ~
Even if they did read through the article, both the content and framing of the advice seem misguided. Notice what is emphasised in your considerations.
Here are the first sentences of each consideration section:
(ie. what readers are most likely to read, and what you might most want to convey).
- "We think that a leading — but careful — AI project could be a huge force for good, and crucial to preventing an AI-related catastrophe."
- Is this your opinion about DeepMind, OpenAI and Anthropic?
- "Top AI labs are high-performing, rapidly growing organisations. In general, one of the best ways to gain career capital is to go and work with any high-performing team — you can just learn a huge amount about getting stuff done. They also have excellent reputations more widely. So you get the credential of saying you’ve worked in a leading lab, and you’ll also gain lots of dynamic, impressive connections."
- Is this focussing on gaining prestige and (nepotistic) connections as an instrumental power move, with the hope of improving things later...?
- Instead of on actually improving safety?
- "We’d guess that, all else equal, we’d prefer that progress on AI capabilities was slower."
- Why is only this part stated as a guess?
- I did not read "we'd guess that a leading but careful AI project, all else equal, could be a force of good".
- Or inversely: "we think that continued scaling of AI capabilities could be a huge force of harm."
- Notice how those framings come across very differently.
- Wait, reading this section further is blowing my mind.
- "But that’s not necessarily the case. There are reasons to think that advancing at least some kinds of AI capabilities could be beneficial. Here are a few"
- "This distinction between ‘capabilities’ research and ‘safety’ research is extremely fuzzy, and we have a somewhat poor track record of predicting which areas of research will be beneficial for safety work in the future. This suggests that work that advances some (and perhaps many) kinds of capabilities faster may be useful for reducing risks."
- Did you just argue for working on some capabilities because it might improve safety? This is blowing my mind.
- "Moving faster could reduce the risk that AI projects that are less cautious than the existing ones can enter the field."
- Are you saying we should consider moving faster because there are people less cautious than us?
- Do you notice how a similarly flavoured argument can be used by and is probably being used by staff at three leading AGI labs that are all competing with each other?
- Did OpenAI moving fast with ChatGPT prevent Google from starting new AI projects?
- "It’s possible that the later we develop transformative AI, the faster (and therefore more dangerously) everything will play out, because other currently-constraining factors (like the amount of compute available in the world) could continue to grow independently of technical progress."
- How would compute grow independently of AI corporations deciding to scale up capability?
- The AGI labs were buying up GPUs to the point of shortage. Nvidia was not able to supply them fast enough. How is that not getting Nvidia and other producers to increase production of GPUs?
- More comments on the hardware overhang argument here [EA(p) · GW(p)].
- "Lots of work that makes models more useful — and so could be classified as capabilities (for example, work to align existing large language models) — probably does so without increasing the risk of danger"
- What is this claim based on?
- "As far as we can tell, there are many roles at leading AI labs where the primary effects of the roles could be to reduce risks."
- As far as I can tell, this is not the case.
- For technical research roles, you can go by what I just posted [EA · GW].
- For policy, I note that you wrote the following:
"Labs also often don’t have enough staff... to figure out what they should be lobbying governments for (we’d guess that many of the top labs would lobby for things that reduce existential risks)."- I guess that AI corporations use lobbyists for lobbying to open up markets for profit, and to not get actually restricted by regulations (maybe to move focus to somewhere hypothetically in the future, maybe to remove upstart competitors who can't deal with the extra compliance overhead, but don't restrict us now!).
- On prior, that is what you should expect, because that is what tech corporations do everywhere. We shouldn't expect on prior that AI corporations are benevolent entities that are not shaped by the forces of competition. That would be naive.
~ ~ ~
After that, there is a new section titled "How can you mitigate the downsides of this option?"
- That section reads as thoughtful and reasonable.
- How about on the job board, you link to that section in each AGI lab job description listed, just above the 'VIEW JOB DETAILS' button?
- For example, you could append and hyperlink 'Suggestions for mitigating downsides' to the organisational descriptions of Google DeepMind, OpenAI and Anthropic.
- That would help guide potential applicants to AGI lab positions to think through their decision.
comment by yanni kyriacos (yanni) · 2024-02-08T00:40:40.706Z · LW(p) · GW(p)
This makes me wonder: does the benefit of having safety-minded folk inside of big labs outweigh the cost of large orgs like 80k signalling that the work of big labs isn't evil (I believe it is)?
comment by Chris_Leong · 2024-02-07T05:51:09.425Z · LW(p) · GW(p)
I heard via via
How did you hear this?
Replies from: remmelt-ellen
↑ comment by Remmelt (remmelt-ellen) · 2024-02-07T07:09:11.096Z · LW(p) · GW(p)
Good question, but I want to keep this anonymous.
I can only say I heard it from one person who said they heard it from another person connected to people at DeepMind.
If anyone else has connections with safety researchers at DeepMind, please do ask them to check.
And post here if you can! Good to verify whether or not this claim is true.
comment by Seth Herd · 2024-02-08T02:40:36.684Z · LW(p) · GW(p)
It seems quite possible that current AI orgs are going to develop superintelligence, with or without the participation of AI safety people. It seems to me much better if AI safety people participate in that process.
What's the marginal impact? If you don't take that job, someone less qualified will. They'll be on average both less skilled and less motivated toward safety. The safety-washing will still be done, just marginally less effectively.
The other marginal change is that those orgs are now made up of people with less concern for safety.
Organizations (and any group of humans) have a sort of composite psychology. They have shared beliefs that change and tend to converge over time. If some of the org is persuasively saying "this will kill us all if we're not really careful", the end result is an org that believes that, much more than in the alternative where no one involved is making that argument persuasively.
Therefore I think it's highly net-positive to work at an AGI org (but probably better yet to get funding for safety research elsewhere).
I'd go further, and say probably net-positive to take capabilities jobs at major orgs. Again, you're doing work that someone else would do just about as well, but without your beliefs on safety. You are filling a slot with safety-minded beliefs that would otherwise be taken by someone without them.
There are two reasons that working in capabilities might be more important than working in the safety department of that same org. One is that safety people may not be privy to all of the capabilities development that org is doing. Capabilities people may have more opportunities to call out risks, both internally and externally (whistleblowing). Second, your opinions may be taken quite differently if you're not in the safety department, whose whole job and mindset is safety, but you're still concerned with safety. It's easy to dismiss an "AI safety person" talking about AI x-risks, and less easy to dismiss an AI engineer who's quite worried about safety.
As an aside, I think it's really important to distinguish "AI safety" from AGI x-risk. They overlap, but the AGI x-risk is the thing I think we should all be more worried about. Working on ways to make a deep network AI less likely to be racist is marginally helpful for x-risk, but not the same thing. So working on that sort of AI safety is already less impactful than working directly on AGI alignment, in my view.
Replies from: remmelt-ellen
↑ comment by Remmelt (remmelt-ellen) · 2024-02-08T12:52:29.973Z · LW(p) · GW(p)
Capabilities people may have more opportunities to call out risks, both internally and externally (whistleblowing).
I would like to see this. I am not yet aware of a researcher deciding to whistleblow on the AGI lab they work at.
If you are, please meet with an attorney in person first, and preferably get advice from an experienced whistleblower on preserving anonymity – I can put you in touch: remmelt.ellen[a|}protonmail{d07]com
There’s so much that could be disclosed that would help bring about injunctions against AGI labs.
Even knowing what copyrighted data is in the datasets would be a boon for lawsuits.
Replies from: Seth Herd
↑ comment by Seth Herd · 2024-02-08T19:09:13.143Z · LW(p) · GW(p)
No one has done any whistleblowing yet because we are not in danger yet. Current gen networks simply are not existentially risky. When someone is risking the future of the entire human race, we'll see whistleblowers give up their jobs and risk their freedom and fortune to take action.
I'm not saying that's enough, but it's better than an org where people are carefully self-selected to not give a shit about safety.
Replies from: remmelt-ellen
↑ comment by Remmelt (remmelt-ellen) · 2024-02-11T08:31:12.571Z · LW(p) · GW(p)
When someone is risking the future of the entire human race, we'll see whistleblowers give up their jobs and risk their freedom and fortune to take action.
There are already AGI lab leaders that are risking the future of the entire human race.
Plenty of consensus to be found on that.
So why no whistleblowing?
Replies from: Seth Herd
↑ comment by Seth Herd · 2024-02-26T15:24:11.245Z · LW(p) · GW(p)
There's nothing to blow the whistle on. Everyone knows that those labs are pursuing AGI.
We are not in direct danger yet, in all likelihood. I have short timelines, but there's almost no chance that any current work is at risk of growing smart enough to disempower humans. There's a difference between hitting the accelerator in the direction of a cliff, and holding it down as it gets close. Developing AGI internally is when we'll need and hopefully get whistleblowers.
Are you thinking of blowing the whistle on something in between work on AGI and getting close to actually achieving it?
Replies from: remmelt-ellen
↑ comment by Remmelt (remmelt-ellen) · 2024-02-27T08:49:12.531Z · LW(p) · GW(p)
Are you thinking of blowing the whistle on something in between work on AGI and getting close to actually achieving it?
Good question.
Yes, this is how I am thinking about it.
I don't want to wait until competing AI corporations become really good at automating work in profitable ways, also because by then their market and political power would be entrenched. I want society to be well-aware way before then that the AI corporations are acting recklessly, and should be restricted.
We need a bigger safety margin. Waiting until corporate machinery is able to operate autonomously would leave us almost no remaining safety margin.
There are already increasing harms, and a whistleblower can bring those harms to the surface. That in turn supports civil lawsuits, criminal investigations, and/or regulator actions.
Harms that fall roughly in these categories – from most directly traceable to least directly traceable:
- Data laundering (what personal, copyrighted and illegal data is being copied and collected en masse without our consent).
- Worker dehumanisation (the algorithmic exploitation of gig workers; the shoddy automation of people's jobs; the criminal conduct of lab CEOs)
- Unsafe uses (everything from untested uses in hospitals and schools, to mass disinformation and deepfakes, to hackability and covered-up adversarial attacks, to automating crime and the kill cloud, to knowingly building dangerous designs).
- Environmental pollution (research investigations of data centers, fab labs, and so on)
For example:
- If an engineer revealed the authors' works contained in the datasets of ChatGPT, Claude, Gemini or Llama, that would give publishers and creative guilds the evidence they need to ramp up lawsuits against the respective corporations (into the tens or hundreds).
- Or if it turned out that the companies collected known child sexual abuse materials (as OpenAI probably did, and a collaborator of mine revealed for StabilityAI and MidJourney).
- If the criminal conduct of the CEO of an AI corporation was revealed
- Eg. it turned out that there is a string of sexual predation/assault in leadership circles of OpenAI/CodePilot/Microsoft.
- Or it turned out that Satya Nadella managed a refund scam company in his spare time.
- If managers were aware of the misuses of their technology, eg. in healthcare, at schools, or in warfare, but chose to keep quiet about it.
Revealing illegal data laundering is actually the most direct, and would cause immediate uproar.
The rest is harder and more context-dependent. I don't think we're at the stage where environmental pollution is that notable (vs. the fossil fuel industry at large), and investigating it across AI hardware operation and production chains would take a lot of diligent research as an inside staff member.
↑ comment by Remmelt (remmelt-ellen) · 2024-02-27T08:51:16.114Z · LW(p) · GW(p)
Note:
Even if you are focussed on long-term risks, you can still whistleblow on egregious harms being caused by these AI labs right now. Providing this evidence enables legal efforts to restrict these labs.
Whistleblowing is not going to solve the entire societal governance problem, but it will enable others to act on the information you provided.
It is much better than following along until we reach the edge of the cliff.
comment by mesaoptimizer · 2024-02-08T22:04:08.665Z · LW(p) · GW(p)
While I share your sentiment, I expect that the problem is far more complex than we think. Sure, corporations are made of people, and people believe (explicitly or implicitly) that their actions are not going to lead to the end of humanity. The next question, then, is why do they believe this is the case? There are various attempts to answer this question, and different people have different approaches to attempting to reduce x-risk given their answer to it -- see how MIRI's and Conjecture's approaches differ, for example.
This is, in my opinion, a viable line of attack, and is far more productive than pure truth-seeking comms (which is what I believe MIRI is trying) or an aggressive narrative shifting and policy influencing strategy (which is what I believe Conjecture is trying).
Replies from: remmelt-ellen
↑ comment by Remmelt (remmelt-ellen) · 2024-02-11T08:35:28.924Z · LW(p) · GW(p)
I appreciate this comment.
Be careful though that we’re not just dealing with a group of people here.
We’re dealing with artificial structures (ie. corporations) that take in and fire human workers as they compete for profit. With the most power-hungry workers tending to find their way to the top of those hierarchical structures.
Replies from: mesaoptimizer, remmelt-ellen
↑ comment by mesaoptimizer · 2024-02-11T15:37:49.405Z · LW(p) · GW(p)
Be careful though that we’re not just dealing with a group of people here.
Yes, I am proposing a form of systemic analysis such that one is willing to look at multiple levels of the stack of abstractions that make up the world-ending machine. This can involve aggressive reductionism, such that you can end up modeling sub-systems and their motivations within individuals (either archetypal ones or specific ones), and can involve game-theoretic and co-ordination-focused models of the teams that make up individual frontier labs -- their incentives, their resource requirements, et cetera.
Most people focus on the latter, far fewer focus on the former, and I don't think anyone is even trying to do a full stack analysis of what is going on.
Replies from: remmelt-ellen
↑ comment by Remmelt (remmelt-ellen) · 2024-02-12T03:39:36.245Z · LW(p) · GW(p)
Good to hear!
↑ comment by Remmelt (remmelt-ellen) · 2024-02-11T09:15:15.743Z · LW(p) · GW(p)
You can literally have a bunch of engineers and researchers believe that their company is contributing to AI extinction risk, yet still go with the flow.
They might even think they’re improving things at the margin. Or they have doubts, but all their colleagues seem to be going on as usual.
In this sense, we’re dealing with the problems of having that corporate command structure in place that takes in the loyal, and persuades them to do useful work (useful in the eyes of power-and-social-recognition-obsessed leadership).
Replies from: remmelt-ellen
↑ comment by Remmelt (remmelt-ellen) · 2024-02-12T03:41:17.955Z · LW(p) · GW(p)
Someone shared the joke: "Remember the Milgram experiment, where they found out that everybody but us would press the button?"
My response: Right! Expect AGI lab employees to follow instructions, because of…
- deference to authority
- incremental worsening (boiling frog problem)
- peer proof (“everyone else is doing it”)
- escalation of commitment
comment by O O (o-o) · 2024-02-07T21:07:35.682Z · LW(p) · GW(p)
Infohazards. Researching capability risks within an AI lab can inspire researchers hearing about your findings to build new capabilities.
Does research really work like this? That is, only 1 person is capable of coming across an idea? It seems usually any discovery has a lot of competitors who are fairly close. I doubt the small number of EA people choosing to or not to work in safety will have any significant impact on capabilities.
Replies from: remmelt-ellen
↑ comment by Remmelt (remmelt-ellen) · 2024-02-08T12:56:06.947Z · LW(p) · GW(p)
If you’re smart and specialised in researching capability risks, it would not be that surprising if you come up with new feasible mechanisms that others were not aware of.
That’s my opinion on this.