ryan_greenblatt's Shortform
post by ryan_greenblatt · 2023-10-30T16:51:46.769Z · LW · GW · 61 comments
Comments sorted by top scores.
comment by ryan_greenblatt · 2024-06-28T18:52:13.076Z · LW(p) · GW(p)
I'm currently working as a contractor at Anthropic in order to get employee-level model access as part of a project I'm working on. The project is a model organism of scheming [LW · GW], where I demonstrate scheming arising somewhat naturally with Claude 3 Opus. So far, I’ve done almost all of this project at Redwood Research, but my access to Anthropic models will allow me to redo some of my experiments in better and simpler ways and will allow for some exciting additional experiments. I'm very grateful to Anthropic and the Alignment Stress-Testing team for providing this access and supporting this work. I expect that this access and the collaboration with various members of the alignment stress testing team (primarily Carson Denison and Evan Hubinger so far) will be quite helpful in finishing this project.
I think that this sort of arrangement, in which an outside researcher is able to get employee-level access at some AI lab while not being an employee (while still being subject to confidentiality obligations), is potentially a very good model for safety research, for a few reasons, including (but not limited to):
- For some safety research, it’s helpful to have model access in ways that labs don’t provide externally. Giving employee level access to researchers working at external organizations can allow these researchers to avoid potential conflicts of interest and undue influence from the lab. This might be particularly important for researchers working on RSPs, safety cases, and similar, because these researchers might naturally evolve into third-party evaluators.
- Related to undue influence concerns, an unfortunate downside of doing safety research at a lab is that you give the lab the opportunity to control the narrative around the research and use it for their own purposes. This concern seems substantially addressed by getting model access through a lab as an external researcher.
- I think this could make it easier to avoid duplicating work between various labs. I’m aware of some duplication that could potentially be avoided by ensuring more work happened at external organizations.
For these and other reasons, I think that giving external researchers employee-level access is a promising approach for ensuring that safety research can proceed quickly and effectively while reducing conflicts of interest and unfortunate concentrations of power. I'm excited for future experimentation with this structure and appreciate that Anthropic was willing to try this. I think it would be good if other labs beyond Anthropic experimented with this structure.
(Note that this message was run by the comms team at Anthropic.)
Replies from: Zach Stein-Perlman, kave, neel-nanda-1, akash-wasil, Josephm, RedMan
↑ comment by Zach Stein-Perlman · 2024-06-29T03:41:03.576Z · LW(p) · GW(p)
Yay Anthropic. This is the first example I'm aware of where a lab shared model access with external safety researchers to boost their research (like, not just for evals). I wish the labs did this more.
[Edit: OpenAI shared GPT-4 access with safety researchers including Rachel Freedman [LW(p) · GW(p)] before release. OpenAI shared GPT-4 fine-tuning access with academic researchers including Jacob Steinhardt and Daniel Kang in 2023. Yay OpenAI. GPT-4 fine-tuning access is still not public; some widely respected safety researchers I know were recently wishing for it, and wishing they could disable content filters.]
Replies from: rachelAF
↑ comment by Rachel Freedman (rachelAF) · 2024-06-29T14:39:31.566Z · LW(p) · GW(p)
OpenAI did this too, with GPT-4 pre-release. It was a small program, though — I think just 5-10 researchers.
Replies from: beth-barnes, ryan_greenblatt, Zach Stein-Perlman
↑ comment by Beth Barnes (beth-barnes) · 2024-07-02T22:32:50.992Z · LW(p) · GW(p)
I'd be surprised if this was employee-level access. I'm aware of a red-teaming program that gave early API access to specific versions of models, but not anything like employee-level.
↑ comment by ryan_greenblatt · 2024-06-29T18:12:17.811Z · LW(p) · GW(p)
It also probably wasn't employee-level access?
(But still a good step!)
↑ comment by Zach Stein-Perlman · 2024-06-29T18:27:43.737Z · LW(p) · GW(p)
Source?
Replies from: rachelAF
↑ comment by Rachel Freedman (rachelAF) · 2024-06-30T15:20:26.569Z · LW(p) · GW(p)
It was a secretive program — it wasn’t advertised anywhere, and we had to sign an NDA about its existence (which we have since been released from). I got the impression that this was because OpenAI really wanted to keep the existence of GPT-4 under wraps. Anyway, that means I don’t have any proof beyond my word.
Replies from: Zach Stein-Perlman
↑ comment by Zach Stein-Perlman · 2024-06-30T16:55:51.435Z · LW(p) · GW(p)
Thanks!
To be clear, my question was more like "where can I learn more + what should I cite?", not "I don't believe you." I'll cite your comment.
Yay OpenAI.
↑ comment by kave · 2024-06-28T19:21:46.760Z · LW(p) · GW(p)
Would you be able to share the details of your confidentiality agreement?
Replies from: samuel-marks, ryan_greenblatt
↑ comment by Sam Marks (samuel-marks) · 2024-06-29T07:57:47.741Z · LW(p) · GW(p)
(I'm a full-time employee at Anthropic.) It seems worth stating for the record that I'm not aware of any contract I've signed whose contents I'm not allowed to share. I also don't believe I've signed any non-disparagement agreements. Before joining Anthropic, I confirmed that I wouldn't be legally restricted from saying things like "I believe that Anthropic behaved recklessly by releasing [model]".
↑ comment by ryan_greenblatt · 2024-06-29T01:35:01.885Z · LW(p) · GW(p)
I think I could share the literal language in the contractor agreement I signed related to confidentiality, though I don't expect this is especially interesting as it is just a standard NDA from my understanding.
I do not have any non-disparagement, non-solicitation, or non-interference obligations.
I'm not currently going to share information about any other policies Anthropic might have related to confidentiality, though I am asking about what Anthropic's policy is on sharing information related to this.
Replies from: kave
↑ comment by kave · 2024-06-29T20:47:52.786Z · LW(p) · GW(p)
I would appreciate the literal language and any other info you end up being able to share.
Replies from: ryan_greenblatt
↑ comment by ryan_greenblatt · 2024-07-04T22:44:01.688Z · LW(p) · GW(p)
Here is the full section on confidentiality from the contract:
5. Confidential Information.
(a) Protection of Information. Consultant understands that during the Relationship, the Company intends to provide Consultant with certain information, including Confidential Information (as defined below), without which Consultant would not be able to perform Consultant’s duties to the Company. At all times during the term of the Relationship and thereafter, Consultant shall hold in strictest confidence, and not use, except for the benefit of the Company to the extent necessary to perform the Services, and not disclose to any person, firm, corporation or other entity, without written authorization from the Company in each instance, any Confidential Information that Consultant obtains from the Company or otherwise obtains, accesses or creates in connection with, or as a result of, the Services during the term of the Relationship, whether or not during working hours, until such Confidential Information becomes publicly and widely known and made generally available through no wrongful act of Consultant or of others who were under confidentiality obligations as to the item or items involved. Consultant shall not make copies of such Confidential Information except as authorized by the Company or in the ordinary course of the provision of Services.
(b) Confidential Information. Consultant understands that “Confidential Information” means any and all information and physical manifestations thereof not generally known or available outside the Company and information and physical manifestations thereof entrusted to the Company in confidence by third parties, whether or not such information is patentable, copyrightable or otherwise legally protectable. Confidential Information includes, without limitation: (i) Company Inventions (as defined below); and (ii) technical data, trade secrets, know-how, research, product or service ideas or plans, software codes and designs, algorithms, developments, inventions, patent applications, laboratory notebooks, processes, formulas, techniques, biological materials, mask works, engineering designs and drawings, hardware configuration information, agreements with third parties, lists of, or information relating to, employees and consultants of the Company (including, but not limited to, the names, contact information, jobs, compensation, and expertise of such employees and consultants), lists of, or information relating to, suppliers and customers (including, but not limited to, customers of the Company on whom Consultant called or with whom Consultant became acquainted during the Relationship), price lists, pricing methodologies, cost data, market share data, marketing plans, licenses, contract information, business plans, financial forecasts, historical financial data, budgets or other business information disclosed to Consultant by the Company either directly or indirectly, whether in writing, electronically, orally, or by observation.
(c) Third Party Information. Consultant’s agreements in this Section 5 are intended to be for the benefit of the Company and any third party that has entrusted information or physical material to the Company in confidence. During the term of the Relationship and thereafter, Consultant will not improperly use or disclose to the Company any confidential, proprietary or secret information of Consultant’s former clients or any other person, and Consultant will not bring any such information onto the Company’s property or place of business.
(d) Other Rights. This Agreement is intended to supplement, and not to supersede, any rights the Company may have in law or equity with respect to the protection of trade secrets or confidential or proprietary information.
(e) U.S. Defend Trade Secrets Act. Notwithstanding the foregoing, the U.S. Defend Trade Secrets Act of 2016 (“DTSA”) provides that an individual shall not be held criminally or civilly liable under any federal or state trade secret law for the disclosure of a trade secret that is made (i) in confidence to a federal, state, or local government official, either directly or indirectly, or to an attorney; and (ii) solely for the purpose of reporting or investigating a suspected violation of law; or (iii) in a complaint or other document filed in a lawsuit or other proceeding, if such filing is made under seal. In addition, DTSA provides that an individual who files a lawsuit for retaliation by an employer for reporting a suspected violation of law may disclose the trade secret to the attorney of the individual and use the trade secret information in the court proceeding, if the individual (A) files any document containing the trade secret under seal; and (B) does not disclose the trade secret, except pursuant to court order.
(Anthropic comms was fine with me sharing this.)
↑ comment by Neel Nanda (neel-nanda-1) · 2024-06-28T21:06:13.932Z · LW(p) · GW(p)
This seems fantastic! Kudos to Anthropic
↑ comment by Akash (akash-wasil) · 2024-06-28T21:02:03.310Z · LW(p) · GW(p)
Do you feel like there are any benefits or drawbacks specifically tied to the fact that you’re doing this work as a contractor? (compared to a world where you were not a contractor but Anthropic just gave you model access to run these particular experiments and let Evan/Carson review your docs)
Replies from: ryan_greenblatt
↑ comment by ryan_greenblatt · 2024-06-28T23:54:28.049Z · LW(p) · GW(p)
Being a contractor was the most convenient way to make the arrangement.
I would ideally prefer not to be paid by Anthropic[1], but this doesn't seem that important (as long as the pay isn't overly large). I asked to be paid as little as possible, and I did end up being paid less than would otherwise be the case (and as a contractor I don't receive equity). I wasn't able to ensure that I only get paid a token wage (e.g., $1 in total or minimum wage or whatever).
I think the ideal thing would be a more specific legal contract between me and Anthropic (or Redwood and Anthropic), but (again) this doesn't seem important.
At least for the current primary purpose of this contracting. I do think that it could make sense to be paid for some types of consulting work. I'm not sure what all the concerns are here. ↩︎
↑ comment by Joseph Miller (Josephm) · 2024-06-29T00:20:11.273Z · LW(p) · GW(p)
It seems a substantial drawback that it will be more costly for you to criticize Anthropic in the future.
Many of the people / orgs involved in evals research are also important figures in policy debates. With this incentive Anthropic may gain more ability to control the narrative around AI risks.
Replies from: ryan_greenblatt
↑ comment by ryan_greenblatt · 2024-06-29T00:22:47.956Z · LW(p) · GW(p)
It seems a substantial drawback that it will be more costly for you to criticize Anthropic in the future.
As in, if at some point I'm a contractor with model access (or otherwise have model access via some relationship like this), it will at that point be more costly to criticize Anthropic?
Replies from: Josephm
↑ comment by Joseph Miller (Josephm) · 2024-06-29T00:36:41.378Z · LW(p) · GW(p)
I'm not sure what the confusion is exactly.
If any of
- you have a fixed length contract and you hope to have another contract again in the future
- you have an indefinite contract and you don't want them to terminate your relationship
- you are some other evals researcher and you hope to gain model access at some point
you may refrain from criticizing Anthropic from now on.
Replies from: ryan_greenblatt
↑ comment by ryan_greenblatt · 2024-06-29T01:05:37.568Z · LW(p) · GW(p)
Ok, so the concern is:
AI labs may provide model access (or other goods), so people who might want to obtain model access might be incentivized to criticize AI labs less.
Is that accurate?
Notably, as described this is not specifically a downside of anything I'm arguing for in my comment or a downside of actually being a contractor. (Unless you think me being a contractor will make me more likely to want model access for whatever reason.)
I agree that this is a concern in general with researchers who could benefit from various things that AI labs might provide (such as model access). So, this is a downside of research agendas with a dependence on (e.g.) model access.
I think various approaches to mitigate this concern could be worthwhile. (Though I don't think this is worth getting into in this comment.)
Replies from: Josephm
↑ comment by Joseph Miller (Josephm) · 2024-06-29T01:19:20.038Z · LW(p) · GW(p)
Yes that's accurate.
Notably, as described this is not specifically a downside of anything I'm arguing for in my comment or a downside of actually being a contractor.
In your comment you say
- For some safety research, it’s helpful to have model access in ways that labs don’t provide externally. Giving employee level access to researchers working at external organizations can allow these researchers to avoid potential conflicts of interest and undue influence from the lab. This might be particularly important for researchers working on RSPs, safety cases, and similar, because these researchers might naturally evolve into third-party evaluators.
- Related to undue influence concerns, an unfortunate downside of doing safety research at a lab is that you give the lab the opportunity to control the narrative around the research and use it for their own purposes. This concern seems substantially addressed by getting model access through a lab as an external researcher.
I'm essentially disagreeing with this point. I expect that most of the conflict of interest concerns remain when a big lab is giving access to a smaller org / individual.
(Unless you think me being a contractor will make me more likely to want model access for whatever reason.)
From my perspective the main takeaway from your comment was "Anthropic gives internal model access to external safety researchers." I agree that once you have already updated on this information, the additional information "I am currently receiving access to Anthropic's internal models" does not change much. (Although I do expect that establishing the precedent / strengthening the relationships / enjoying the luxury of internal model access, will in fact make you more likely to want model access again in the future).
Replies from: ryan_greenblatt
↑ comment by ryan_greenblatt · 2024-06-29T01:22:11.016Z · LW(p) · GW(p)
I expect that most of the conflict of interest concerns remain when a big lab is giving access to a smaller org / individual.
As in, there aren't substantial reductions in COI from not being an employee and not having equity? I currently disagree.
Replies from: Josephm
↑ comment by Joseph Miller (Josephm) · 2024-06-29T01:32:26.102Z · LW(p) · GW(p)
Yeah that's the crux I think. Or maybe we agree but are just using "substantial"/"most" differently.
It mostly comes down to intuitions so I think there probably isn't a way to resolve the disagreement.
↑ comment by RedMan · 2024-07-01T11:33:28.907Z · LW(p) · GW(p)
So you asked anthropic for uncensored model access so you could try to build scheming AIs, and they gave it to you?
To use a biology analogy, isn't this basically gain of function research?
Replies from: mesaoptimizer
↑ comment by mesaoptimizer · 2024-07-05T07:11:22.649Z · LW(p) · GW(p)
Please read the model organisms for misalignment proposal [LW · GW].
comment by ryan_greenblatt · 2024-03-03T18:12:37.391Z · LW(p) · GW(p)
The 5 main reasons why control could fail
Here are the 5 main categories of reasons why control [? · GW] could fail (though various of them are often correlated).
[Note: this document has a moderate amount of overlap with various prior documents Redwood has released, but it seemed like a useful document nonetheless.]
1. Insufficient (long horizon) sample efficiency: If your AI is scheming, you won’t be able to get this AI to achieve near-maximal reward on average on (long horizon) tasks with reasonable sample efficiency. By near maximal, we just mean empirically achieving average reward which is quite close to the reward the AI would get if it was “actually trying” to get high reward (insofar as it is scheming to an extent that “trying” makes sense). (When we say average, we actually mean average excluding rare outliers. We can’t control behavior on rare outliers using normal ML training.) For control, this causes issues in two places: it interferes with getting useful work out of AIs and with being able to run accurate capability evaluations.
2. Auditing failures [LW · GW]: inability for humans to sufficiently assess actions. (Though note there can be solutions to these failures which don’t involve improving auditing [LW · GW].) This includes:
   a. Average case usefulness: For scheming AIs, you won’t be able to check that the work they’re doing is actually good (on average) rather than just looking good to you. Note that this includes getting enough actually good work out of AIs.
   b. Average case safety: Direct safety issues that occur because auditing fails to notice a large number of actively dangerous actions, and these actions then build up into a catastrophic outcome. (E.g., the AI slowly persuades some group in the AI lab to become AI cultists.) One particularly salient special case is:
      i. We can’t prevent AIs from building up huge amounts of coup science: Control will depend on human researchers understanding our countermeasures and the space of possible AI attack strategies about as well as, or better than, the AIs do. See Notes on control evaluations for safety cases [LW · GW] for more discussion.
   c. High-stakes safety: The auditing process fails to detect that a single action is very bad when it reviews that action, and this is extremely bad because that action is part of a high-stakes failure (e.g., an escape attempt that occurs in a small number of actions).
3. Concentrated failures [LW · GW]: inability to prevent concentrated catastrophes (safety failures which are concentrated into a small number of bad actions or a small amount of serial time) despite having an expensive and slow auditing process which could detect this bad behavior (it works for high-stakes safety) if it was assigned to review these actions.
4. Lack of institutional will or ability: Even if control would work given a few years of delay and an implementation carried out by a quite well-run organization, you won’t get either of these in practice and thus you’ll have trouble. (This argument also applies to any other technical approach whose implementation requires substantial institutional will; therefore, we need technical solutions which require less institutional competence.)
5. Catching the AI isn’t sufficiently useful: The viability of control will likely depend on it being very useful to catch the AI one or a small number of times. But this might turn out to be false for various reasons. See Catching AIs red-handed [LW · GW] for discussion. This might depend heavily on questions which aren’t about technical AI safety, like how useful legible evidence of serious misalignment is. But the technical usefulness of catching could suffice.
We think that various of these things could go wrong in practice. However, it seems like the current point on the logistic success curve is such that if an AI lab tried very hard to evaluate and ensure control, they could improve the situation substantially (e.g., 3-5x reduction in doom) relative to not trying very hard.
Note that failures 2, 3, and 4 above probably just prevent us from building sufficiently safe and useful protocols, without breaking control evaluations. Failures 1 and 2.b.i (coup science build-up) are likely to break control evaluations, though it might be possible to recognize that we’re in a world where 1 and/or 2.b.i hold. Failure 5 has more complex effects, but we’d probably be able to safely stop after realizing that we’re unable to extract enough value from the times when we’ve caught the AI; so the main concern would be that we’re somewhat clearly unable to safely continue, rather than being caught dangerously unaware.
See here [LW · GW] for more discussion.
Here are some reasons why control might not be worth working on (or actively bad to work on) regardless of the above reasons:
- Control is bad for coordination: Perhaps control does improve safety, but it does so at the cost of making it easier to make excuses, giving false confidence, and/or creating slippery slopes.
- Control makes it cheaper and easier to do morally problematic things: Control might make it more commercially valuable and technically feasible to employ AIs in situations where we know these AIs don’t consent to this usage (and these AIs are well described as having preferences that deserve moral consideration), and this might make violations of basic moral principles (such as slavery) more likely. (In the same way as better technology for managing human insider threats could make human slavery more commercially valuable. Of course, in the case of human slavery, there are already institutions and norms which combat slavery.) However, note that our recommendation to society would be to try extremely hard to avoid employing AIs without their consent; if AIs don’t consent to employment we should (temporarily) shut down or negotiate a deal with the AIs. Much more commentary on these topics can be seen here [LW · GW]. Our overall view is that the default AI situation with only current alignment/safety/interpretability technology is somewhat scary from an AI welfare/rights perspective, but that control doesn’t make the situation worse.
- Scheming is extremely unlikely for early transformatively useful AI (<1%) and control-style approaches won’t be a useful safeguard against other threat models: Control focuses on making conservative assumptions about the effects of training and the motivations of AIs. If we didn’t have to make such conservative assumptions, then control-type approaches look much less worthwhile. However, control-type things might still be a useful line of defense against failure modes which aren’t well described as scheming.
- Baseline control strategies will work (when control is at all possible) and control evaluations are easy to run: If the baseline approaches can’t be improved on and there isn’t important research on the evaluations, then technical work on control isn’t very important. Policy/governance work might still be important. See also here [LW · GW].
comment by ryan_greenblatt · 2024-02-28T21:22:53.354Z · LW(p) · GW(p)
It seems slightly bad that you can vote on comments under the Quick Takes and Popular Comments sections without needing to click on the comment and without leaving the frontpage.
This is different from posts, where in order to vote you have to actually click on the post and leave the main page (which perhaps requires more activation energy).
I'm unsure, but I think this could result in a higher frequency of cases where Quick Takes and comments go viral in problematic ways relative to posts. (Based on my vague sense from the EA forum.)
This doesn't seem very important, but I thought it would be worth quickly noting.
Replies from: kave, habryka4
↑ comment by kave · 2024-02-29T06:03:15.873Z · LW(p) · GW(p)
I feel like Quick Takes and Popular Comments on the EA forum both do a good job of showing me content I'm glad to have seen (and in the case of the EA forum that I wouldn't have otherwise seen) and of making the experience of writing a shortform better (in that you get more and better engagement). So far, this also seems fairly true on LessWrong, but we're presumably out of equilibrium.
Overall, I feel like posts on LessWrong are too long, and so am particularly excited about finding my way to things that are a little shorter. (It would also be cool if we could figure out how to do rigour well or other things that benefit from length, but I don't feel like I'm getting that much advantage from the length yet).
My guess is that the problems you gesture at will be real and the feature will still be net positive. I wish there were a way to "pay off" the bad parts with some of the benefit; probably there is and I haven't thought of it.
↑ comment by habryka (habryka4) · 2024-02-28T21:49:11.083Z · LW(p) · GW(p)
Yeah, I was a bit worried about this, especially for popular comments. For quick takes, it does seem like you can see all the relevant context before voting, and you were also able to already vote on comments from the Recent Discussion section (though that section isn't sorted by votes, so it is at less risk of viral dynamics). I do feel a bit worried that popular comments will get a bunch of votes without people reading the underlying posts they're responding to.
comment by ryan_greenblatt · 2024-09-07T22:27:34.925Z · LW(p) · GW(p)
About 1 year ago, I wrote up a ready-to-go plan for AI safety focused on current science (what we roughly know how to do right now). It targets reducing catastrophic risks from the point when we have transformatively powerful AIs (e.g., AIs about as capable as humans).
I never finished this doc, and it is now considerably out of date relative to how I currently think about what should happen, but I still think it might be helpful to share.
Here is the doc. I don't particularly want to recommend people read this doc, but it is possible that someone will find it valuable to read.
I plan on trying to think through the best ready-to-go plan roughly once a year. Buck and I have recently started work on a similar effort. Maybe this time we'll actually put out an overall plan rather than just spinning off various docs.
Replies from: adam_scholl
↑ comment by Adam Scholl (adam_scholl) · 2024-09-08T00:45:45.334Z · LW(p) · GW(p)
This seems like a great activity, thank you for doing/sharing it. I disagree with the claim near the end that this seems better than Stop, and in general felt somewhat alarmed throughout at (what seemed to me like) some conflation/conceptual slippage between arguments that various strategies were tractable, and that they were meaningfully helpful. Even so, I feel happy that the world contains people sharing things like this; props.
Replies from: ryan_greenblatt
↑ comment by ryan_greenblatt · 2024-09-08T02:17:19.793Z · LW(p) · GW(p)
I disagree with the claim near the end that this seems better than Stop
At the start of the doc, I say:
It’s plausible that the optimal approach for the AI lab is to delay training the model and wait for additional safety progress. However, we’ll assume the situation is roughly: there is a large amount of institutional will to implement this plan, but we can only tolerate so much delay. In practice, it’s unclear if there will be sufficient institutional will to faithfully implement this proposal.
Towards the end of the doc I say:
This plan requires quite a bit of institutional will, but it seems good to at least know of a concrete achievable ask to fight for other than “shut everything down”. I think careful implementation of this sort of plan is probably better than “shut everything down” for most AI labs, though I might advocate for slower scaling and a bunch of other changes on current margins.
Presumably, you're objecting to 'I think careful implementation of this sort of plan is probably better than “shut everything down” for most AI labs'.
My current view is something like:
- If there was broad, strong, and durable political will and buy in for heavily prioritizing AI takeover risk in the US, I think it would be good if the US government shut down scaling and took strong actions to prevent frontier AI progress while also accelerating huge amounts of plausibly-safety-related research.
- You'd need to carefully manage the transition back to scaling to reduce hardware overhang issues. This is part of why I think "durable" political will is important. There are various routes for doing this with different costs.
- I'm sympathetic to thinking this doesn't make sense if you just care about deaths prior to age 120 of currently alive people and widespread cryonics is hopeless (even conditional on the level of political support for mitigating AI takeover risk). Some other views which just care about achieving close to normal lifespans for currently alive humans also maybe aren't into pausing.
- Regulations/actions which have the side effect of slowing down scaling but which aren't part of a broad package seem way less good. This is partially due to hardware/algorithmic overhang concerns, but more generally due to follow-through concerns. I also wouldn't advocate such regulations (regulations with the main impact being to slow down as a side effect) due to integrity/legitimacy concerns.
- Unilateral shutdown is different as advice than "it would be good if everyone shut down" because AI companies might think (correctly or not) that they would be better on safety than other companies. In practice, no AI lab seems to have expressed a view which is close to "acceleration is bad" except Anthropic (see core views on AI safety).
- We are very, very far from broad, strong, and durable political will for heavily prioritizing AI takeover, so weaker and conditional interventions for actors on the margin seem useful.
comment by ryan_greenblatt · 2023-12-17T02:39:32.986Z · LW(p) · GW(p)
OpenAI seems to train on >1% chess
If you sample from davinci-002 at t=1 starting from "<|endoftext|>", 4% of the completions are chess games.
For babbage-002, we get 1%.
Probably this implies a reasonably high fraction of total tokens in typical OpenAI training datasets are chess games!
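For concreteness, here's a rough sketch of how an experiment like this could be run. This is my reconstruction, not the code actually used: the chess detector is a crude PGN-style regex, and the sample count and max_tokens are placeholder choices.

```python
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Crude heuristic: PGN-style move numbering followed by SAN-ish moves, e.g. "1. e4 e5 2.".
CHESS_RE = re.compile(r"\b1\.\s*[a-hNBRQKO]\S*\s+[a-hNBRQKO]\S*\s+2\.")

def chess_fraction(model: str, n_samples: int = 200, max_tokens: int = 128) -> float:
    """Sample unconditional completions (prompt '<|endoftext|>', temperature 1)
    and return the fraction that look like chess games."""
    hits = 0
    for _ in range(n_samples):
        resp = client.completions.create(
            model=model,
            prompt="<|endoftext|>",
            max_tokens=max_tokens,
            temperature=1.0,
        )
        if CHESS_RE.search(resp.choices[0].text):
            hits += 1
    return hits / n_samples

print("davinci-002:", chess_fraction("davinci-002"))
print("babbage-002:", chess_fraction("babbage-002"))
```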
Despite chess being 4% of documents sampled, chess is probably less than 4% of overall tokens (perhaps 1% of tokens are chess? perhaps less?). Because chess games are shorter documents than other types of documents, chess is likely a higher fraction of documents than of tokens.
(To understand this, imagine that just the token "Hi" and nothing else was 1/2 of documents. This would still be a small fraction of tokens as most other documents are far longer than 1 token.)
(My title might be slightly clickbait because it's likely chess is >1% of documents, but might be <1% of tokens.)
It's also possible that these models were fine-tuned on a data mix with more chess while pretrain had less chess.
Appendix A.2 of the weak-to-strong generalization paper explicitly notes that GPT-4 was trained on at least some chess.
comment by ryan_greenblatt · 2024-01-08T23:13:50.196Z · LW(p) · GW(p)
I think it might be worth quickly clarifying my views on activation addition and similar things (given various [LW(p) · GW(p)] discussion [LW(p) · GW(p)] about this). Note that my current views are somewhat different than some comments I've posted in various places (and my comments are somewhat incoherent overall), because I’ve updated based on talking to people about this over the last week.
This is quite in the weeds and I don’t expect that many people should read this.
- It seems like activation addition sometimes has higher sample efficiency in steering model behavior compared with baseline training methods (e.g., normal LoRA finetuning). These comparisons seem most meaningful in straightforward head-to-head comparisons (where you use both methods in the most straightforward way). I think the strongest evidence for this is in Liu et al. (For a concrete picture of what this kind of additive steering intervention looks like, see the sketch after this list.)
- Contrast pairs are a useful technique for variance reduction (to improve sample efficiency), but may not be that important (see Liu et al. again). It's relatively natural to capture this effect using activation vectors, but there is probably some nice way to incorporate this into SGD. Perhaps DPO does this? Perhaps there is something else?
- Activation addition works better than naive few-shot prompting in some cases, particularly in cases where the way you want to steer the model isn't salient/obvious from the few-shot prompt. But it's unclear how it performs in comparison to high-effort prompting. Regardless, activation addition might work better "out of the box" because fancy prompting is pretty annoying.
- I think training models to (in general) respond well to "contrast prompting", where you have both positive and negative examples in the prompt, might work well. This can be done in a general way ahead of time and then specific tuning can just use this format (the same as instruction finetuning). For a simple prompting approach, you can do "query_1 Good response: pos_ex_1, Bad response: neg_ex_1, query_2 Good response: pos_ex_2, Bad response: neg_ex_2, ..." and then prompt with "actual_query Good response:". Normal pretrained models might not respond well to this prompt by default, but I haven’t checked.
- I think we should be able to utilize the fact that activation addition works to construct a tweaked inductive bias for SGD on transformers (which feels like a more principled approach from my perspective if the goal is better sample efficiency). More generally, I feel like we should be able to do ablations to understand why activation addition works and then utilize this separately.
- I expect there are relatively straightforward ways to greatly improve sample efficiency of normal SGD via methods like heavy proliferation of training data. I also think "really high sample efficiency from a small number of samples" isn't very solved at the moment. I think if we're really going for high sample efficiency from a fixed training dataset we should be clear about that and I expect a variety of very different approaches are possible.
- The advantages versus prompting in terms of inference time performance improvement (because activation engineering means you have to process fewer tokens during inference) don't seem very important to me because you can just context distill prompts.
- For cases where we want to maximize a metric X and we don’t care about sample efficiency or overfitting to X, we can construct a big dataset of high quality demonstrations, SFT on these demos, and then RL against our metric. If we do this in a well tuned way for a large number of samples such that sample efficiency isn’t much of an issue (and exploration problems are substantial), I would be pretty surprised if activation addition or similar approaches can further increase metric X and “stack” with all the RL/demos we did. That is, I would be surprised if this is true unless the activation engineering is given additional affordances/information not in our expert demos and the task isn’t heavily selected for activation engineering stacking like this.
- It's unclear how important it is to (right now) work on sample efficiency specifically to reduce x-risk. I think it seems not that important, but I'm unsure. Better sample efficiency could be useful for reducing x-risk because better sample efficiency could allow for using a smaller amount of oversight data labeled more carefully for training your AI, but I expect that sample efficiency will naturally be improved to pretty high levels by standard commercial/academic incentives. (Speculatively, I also expect that the marginal returns will just generally look better for improving oversight even given equal effort or more effort on oversight basically because we’ll probably need a moderate amount of data anyway and I expect that improving sample efficiency in the “moderate data” regime is relatively harder.) One exception is that I think that sample efficiency for very low sample-count cases might not be very commercially important, but might be highly relevant for safety after catching the AI doing egregiously bad actions and then needing to redeploy [LW · GW]. For this, it would be good to focus on sample efficiency specifically in ways which are analogous to training an AI or a monitor after catching an egregiously bad action.
- The fact that activation addition works reveals some interesting facts about the internals of models; I expect there are some ways to utilize something along these lines to reduce x-risk.
- In principle, you could use activation additions or similar editing techniques to learn non-obvious facts about the algorithms which models are implementing via running experiments where you (e.g.) add different vectors at different layers and observe interactions (interpretability). For this to be much more powerful than black-box model psychology or psychology/interp approaches which use a bit of training, you would probably need to try edits at many different layers and map out an overall algorithm (and/or do many edits simultaneously).
- I think "miscellaneous interventions on internals" for high-level understanding of the algorithms that models are implementing seems promising in principle, but I haven't seen any work in this space which I'm excited about.
- I think activation addition could have "better" generalization in some cases, but when defining generalization experiments, we need to be careful about the analogousness of the setup. It's also unclear what exact properties we want for generalization, but minimally being able to more precisely pick and predict the generalization seems good. I haven't yet seen evidence for "better" generalization using activation addition which seems compelling/interesting to me. Note that we should use a reasonably sized training dataset, or we might be unintentionally measuring sample efficiency (in which case, see above). I don't really see a particular reason why activation addition would result in "better" generalization beyond having some specific predictable properties which maybe mean we can know about cases where it is somewhat better (cases where conditioning on something is better than "doing" that thing?).
- I'm most excited for generalization work targeting settings where oversight is difficult and a weak supervisor would make mistakes that result in worse policy behavior (aka weak-to-strong generalization). See this post [LW · GW] and this post [LW · GW] for more discussion of the setting I'm thinking about.
- I generally think it's important to be careful about baselines and exactly what problem we're trying to solve in cases where we are trying to solve a specific problem as opposed to just doing some open-ended exploration. TBC, open ended exploration is fine, but we should know what we're doing. I often think that when you make the exact problem you're trying to solve and what affordances you are and aren't allowed more clear, a bunch of additional promising methods become apparent. I think that the discussion I’ve seen so far of activation engineering (e.g. in this post [LW · GW], why do we compare to finetuning in a generalization case rather than just directly finetuning on what we want? Is doing RL to increase how sycophantic the model is in scope?) has not been very precise about what problem it’s trying to solve or what baseline techniques it’s claiming to outperform.
- I don’t currently think the activation addition stuff is that important in expectation (for reducing x-risk) due to some of the beliefs I said above (though I'm not very confident). I'd be most excited about the "understand the algorithms the model is implementing" application above or possibly some generalization tests. This view might depend heavily on general views about threat models and how the future will go. Regardless, I ended up commenting on this a decent amount, so I thought it would be worth the time to clarify my views.
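As referenced in the first bullet above, here is a minimal sketch of the kind of additive steering intervention being discussed (contrast-pair activation addition). This is my illustration using GPT-2 via HuggingFace transformers as a stand-in; the layer index, scale, and contrast pair are arbitrary choices, and a real experiment would average over many contrast pairs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER = 6    # which transformer block to steer (arbitrary choice)
SCALE = 4.0  # steering strength (arbitrary choice)

def block_output(text: str) -> torch.Tensor:
    """Residual-stream activation output by block LAYER at the last token of `text`."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"), output_hidden_states=True)
    return out.hidden_states[LAYER + 1][0, -1]  # hidden_states[0] is the embedding layer

# Steering vector from a single contrast pair (in practice, average over many pairs).
steering_vec = block_output("Love") - block_output("Hate")

def steer_hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states.
    hidden = output[0] + SCALE * steering_vec  # add the vector at every position
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steer_hook)
prompt = tok("I think you are", return_tensors="pt")
out_ids = model.generate(**prompt, max_new_tokens=20, do_sample=True,
                         pad_token_id=tok.eos_token_id)
print(tok.decode(out_ids[0]))
handle.remove()
```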
comment by ryan_greenblatt · 2024-02-29T19:24:41.734Z · LW(p) · GW(p)
Emoji palette suggestion: seems under/over confident
Replies from: Benito, Raemon
↑ comment by Ben Pace (Benito) · 2024-02-29T21:33:22.463Z · LW(p) · GW(p)
I think people repeatedly say 'overconfident' when what they mean is 'wrong'. I'm not excited about facilitating more of that.
↑ comment by Raemon · 2024-02-29T21:27:01.507Z · LW(p) · GW(p)
My only concern is overall complexity-budget for react palettes – there's lots of maybe-good-to-add reacts but each one adds to the feeling of overwhelm.
But I agree those reacts are pretty good and probably highish on my list of reacts to add.
(For the immediate future, you can use the probability-reacts as a signal for "seems like the author is implying this as high-probability, and I'm giving it lower." Not sure if that does the thing.)
comment by ryan_greenblatt · 2023-12-24T19:42:07.234Z · LW(p) · GW(p)
IMO, instrumental convergence is a terrible name for an extremely obvious thing.
The actual main issue is that AIs might want things in limited supply and then try to get these things, which would result in those things not going to humanity. E.g., AIs might want all cosmic resources, but we also want this stuff. Maybe this should be called AIs-might-want-limited-stuff-we-want.
(There is something else which is that even if the AI doesn't want limited stuff we want, we might end up in conflict due to failures of information or coordination. E.g., the AI almost entirely just wants to chill out in the desert and build crazy sculptures and it doesn't care about extreme levels of maximization (e.g. it doesn't want to use all resources to gain a higher probability of continuing to build crazy statues). But regardless, the AI decides to try taking over the world because it's worried that humanity would shut it down because it wouldn't have any way of credibly indicating that it just wants to chill out in the desert.)
(More generally, it's plausible that failures of trade/coordination result in a large number of humans dying in conflict with AIs even though both humans and AIs would prefer other approaches. But this isn't entirely obvious and it's plausible we could resolve this with better negotiation and precommitments. Of course, this isn't clearly the largest moral imperative from a longtermist perspective.)
comment by ryan_greenblatt · 2024-02-12T20:03:00.359Z · LW(p) · GW(p)
One "9" of uptime reliability for dangerously powerful AIs might suffice (for the most dangerous and/or important applications)
Currently, AI labs try to ensure that APIs for accessing their best models have quite good reliability, e.g. >99.9% uptime. If we had to maintain such high levels of uptime reliability for future dangerously powerful AIs, then we'd need to ensure that our monitoring system has very few false alarms. That is, few cases in which humans monitoring these dangerously powerful AIs pause usage of the AIs pending further investigation. Ensuring this would make it harder to ensure that AIs are controlled [LW · GW]. But, for the most important and dangerous usages of future powerful AIs (e.g. arbitrary R&D, and especially AI safety research), we can probably afford to have the powerful AIs down a moderate fraction of the time. For instance, it wouldn't be that costly if the AIs always took a randomly selected day of the week off from doing research: this would probably reduce productivity by not much more than 1 part in 7[1]. More generally, a moderate fraction of downtime like 10% probably isn't considerably worse than a 10% productivity hit and it seems likely that we can tolerate such productivity hits to improve safety. Thus, we don't need even one "9" of reliability.
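To make the quantitative claim concrete, here's a small back-of-the-envelope calculation (my own illustrative arithmetic, with an 8-hour investigation length as a made-up placeholder):

```python
# Allowed downtime for a given number of "nines" of uptime, and roughly how many
# 8-hour pause-and-investigate incidents per month that downtime budget permits.
HOURS_PER_YEAR = 365 * 24
INVESTIGATION_HOURS = 8  # placeholder length for a false-alarm investigation

for nines in (1, 2, 3):
    uptime = 1 - 10 ** (-nines)  # 1 nine = 90%, 3 nines = 99.9%
    downtime_hours = (1 - uptime) * HOURS_PER_YEAR
    incidents_per_month = downtime_hours / INVESTIGATION_HOURS / 12
    print(f"{nines} nine(s): {uptime:.1%} uptime, "
          f"{downtime_hours:.0f} hours/year of downtime, "
          f"~{incidents_per_month:.1f} investigations/month")

# Roughly: one nine allows ~9 eight-hour investigations per month,
# while three nines allows only about one such investigation per year.
```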
This implies that we can afford to do things like:
- Have semi-regular false alarms where AIs go down for a few hours or a day while we investigate.
- Run AI shutdown drills. These drills would help ensure that our processes for shutdown are working and that there aren't unknown use cases where shutdown could cause very problematic failures. (Catching such usages early via shutdown drills is similar to scheduled brownouts in software engineering; see here for an example.)
- Temporarily shut down some usages of the AI if we encounter unexpected operating conditions. (AKA violations of our design basis.)
[For context, this is a point that seems important to me, but didn't naturally make it into any of our posts on control [LW · GW] thus far. So, I decided to write a quick standalone shortform post. I think Buck originally made this point to me.]
And likely somewhat less than this due to substitution effects. ↩︎
↑ comment by Dagon · 2024-02-12T21:54:00.436Z · LW(p) · GW(p)
Thank you for this! A lot of us have a very bad habit of over-systematizing our thinking, and treating all uses of AI (and even all interfaces to a given model instance) as one singular thing. Different tool-level AI instances probably SHOULD strive for 4 or 5 nines of availability, in order to have mundane utility in places where small downtime for a use blocks a lot of value. Research AIs, especially self-improving or research-on-AI ones, don't need that reliability, and both triggered downtime (scheduled, or enforced based on Schelling-line events) and unplanned downtime (it just broke, and we have to spend time to figure out why) can be valuable to give humans time to react.
comment by ryan_greenblatt · 2024-01-19T18:47:41.127Z · LW(p) · GW(p)
On Scott Alexander’s description of Representation Engineering in “The road to honest AI”
This is a response to Scott Alexander’s recent post “The road to honest AI”, in particular the part about the empirical results of representation engineering. So, when I say “you” in this post, that refers to Scott Alexander. I originally made this as a comment on Substack, but I thought people on LW/AF might be interested.
TLDR: The Representation Engineering paper doesn’t demonstrate that the method they introduce adds much value on top of using linear probes (linear classifiers), which is an extremely well known method. That said, I think that the framing and the empirical method presented in the paper are still useful contributions.
I think your description of Representation Engineering considerably overstates the empirical contribution of representation engineering over existing methods. In particular, rather than comparing the method to looking for neurons with particular properties and using these neurons to determine what the model is "thinking" (which probably works poorly), I think the natural comparison is to training a linear classifier on the model’s internal activations using normal SGD (also called a linear probe)[1]. Training a linear classifier like this is an extremely well known technique in the literature. As far as I can tell, when they do compare to just training a linear classifier in section 5.1, it works just as well for the purpose of “reading”. (Though I’m confused about exactly what they are comparing in this section as they claim that all of these methods are LAT. Additionally, from my understanding, this single experiment shouldn’t provide that much evidence overall about which methods work well.)
I expect that training a linear classifier performs similarly well as the method introduced in the Representation Engineering for the "mind reading" use cases you discuss. (That said, training a linear classifier might be less sample efficient (require more data) in practice, but this doesn't seem like a serious blocker for the use cases you mention.)
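For reference, here's roughly what "training a linear classifier (probe) on the model's internal activations" means in practice. This is an illustrative sketch using GPT-2 and scikit-learn as stand-ins, with a toy placeholder dataset; the layer choice is arbitrary:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER = 8  # which hidden layer to probe (arbitrary choice)

def activation(text: str):
    """Hidden state at LAYER for the final token of `text`, as a numpy vector."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"), output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1].numpy()

# Toy labeled data for a "true vs. false statement" probe (placeholder examples).
texts = ["The sky is blue.", "Paris is in France.", "The sky is green.", "Paris is in Brazil."]
labels = [1, 1, 0, 0]  # 1 = true, 0 = false

probe = LogisticRegression(max_iter=1000).fit([activation(t) for t in texts], labels)

# The probe's predicted probability that a new statement is "true".
print(probe.predict_proba([activation("Grass is purple.")])[0, 1])
```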
One difference between normal linear classifier training and the method found in the representation engineering paper is that they also demonstrate using the direction they find to edit the model. For instance, see this response by Dan H. to a similar objection about the method being similar to linear probes. Training a linear classifier in a standard way probably doesn't work as well for editing/controlling the model (I believe they show that training a linear classifier doesn’t work well for controlling the model in section 5.1), but it's unclear how much we should care if we're just using the classifier rather than doing editing (more discussion on this below).[2]
If we care about the editing/control use case intrinsically, then we should compare to normal fine-tuning baselines. For instance, normal supervised next-token prediction on examples with desirable behavior or DPO.[3]
Are simple classifiers useful?
Ok, but regardless of the contribution of the representation engineering paper, do I think that simple classifiers (found using whatever method) applied to the internal activations of models could detect when those models are doing bad things? My view here is a bit complicated, but I think it’s at least plausible that these simple classifiers will work even though other methods fail. See here [LW · GW] for a discussion of when I think linear classifiers might work despite other more baseline methods failing. It might also be worth reading the complexity penalty section of the ELK report.
Additionally, I think that the framing in the representation engineering paper is maybe an improvement over existing work and I agree with the authors that high-level/top-down techniques like this could be highly useful. (I just don’t think that the empirical work is adding as much value as you seem to indicate in the post.)
The main contributions
Here are what I see as the main contributions of the paper:
- Clearly presenting a framework for using simple classifiers to detect things we might care about (e.g. powerseeking text).
- Presenting a combined method for producing a classifier and editing/control in an integrated way. And discussing how control can be used for classifier validation and vice versa.
- Demonstrating that in some cases labels aren’t required if we can construct a dataset where the classification of interest is the main axis of variation. (This was also demonstrated in the CCS paper, but the representation engineering work demonstrates this in more cases.)
Based on their results, I think the method they introduce is reasonably likely to be a more sample efficient (less data required for training) editing/control method than prior methods for many applications. It might also be more sample efficient for producing a classifier. That said, I’m not sure we should care very much about sample efficiency. Additionally, the classifier/editing might have other nice properties which prior methods don’t have (though they don’t clearly demonstrate either of these in the paper AFAICT).
Is it important that we can use our classifier for control/editing?
As far as the classifier produced by this method having nice properties goes, the fact that our classifier also allows for editing/control might indicate that the classifier we get has better properties (see the paper itself (section 3.1.2) and e.g. here [LW · GW] for discussion), but I’d guess this is either only a moderate improvement or has no effect in practice. And as far as I can tell, the paper doesn’t demonstrate cases where prior methods for training a classifier on the internal activations yield poor results, but their method clearly works well. These cases might exist, but I’m somewhat skeptical that this is very common. Future work could find hard cases where we want a particular generalization [AF · GW] and demonstrate [AF · GW] that this method or modifications of this method works better than other approaches.
Does the editing method they introduce have nice properties because it also allows for reading? Let’s consider using the representation engineering approach for reading and controlling/editing the properties of lying versus honesty. Assuming the method works as desired, then I would guess that the reading/classifier corresponds to reading off “does the model think there is lying in this text (or even in this position in the text)” and the control/editing corresponds to “make the model think that there is lying earlier in this text so that it conditions on this and does more lying (similarly to how using a few shot prompt with lies might make the model more likely to lie)”. Note that these reading and control methods likely do not directly correspond to “the model thinking that it is about to lie”: the properties of “I have already lied (or my few-shot prompt contains lies)” and “I am about to lie” are different.
Some of their methods are “unsupervised” unlike typical linear classifier training, but require a dataset where the primary axis of variation is the concept they want. I think this is practically similar to labeled data because we’d have to construct this dataset and if it mostly varies along an axis which is not the concept we wanted, we’d be in trouble. I could elaborate on this if that was interesting. ↩︎
Separately, I believe there are known techniques in the literature for constructing a linear classifier such that the direction will work for editing. For instance, we could just use the difference between the mean activations for the two classes we're trying to classify, which is equivalent to the ActAdd technique and also rhymes nicely with LEACE. I assume this is a well known technique for making a classifier in the literature, but I don’t know if prior work has demonstrated both using this as a classifier and as a method for model editing. (The results in section 5.1 seem to indicate that this mean difference method combined with LEACE works well, but I’m not sure how much evidence this experiment provides.) ↩︎
I expect that normal fine-tuning (or DPO) might be less sample efficient than the method introduced in the Representation Engineering paper for controlling/editing models, but I don't think they actually run this comparison? Separately, it’s unclear how much we care about sample efficiency. ↩︎
comment by ryan_greenblatt · 2023-10-30T16:51:50.808Z · LW(p) · GW(p)
I unconfidently think it would be good if there was some way to promote posts on LW (and maybe the EA forum) by paying either money or karma. It seems reasonable to require such promotion to require moderator approval.
Probably, money is better.
The idea is that this is both a costly signal and a useful trade.
The idea here is similar to strong upvotes, but:
- It allows for one user to add more of a signal boost than just a strong upvote.
- It doesn't directly count for the normal vote total, just for visibility.
- It isn't anonymous.
- Users can easily configure how this appears on their front page (while I don't think you can disable the strong upvote component of karma for weighting?).
↑ comment by Dagon · 2023-10-30T18:32:02.093Z · LW(p) · GW(p)
I always like seeing interesting ideas, but this one doesn't resonate much for me. I have two concerns:
- Does it actually make the site better? Can you point out a few posts that would be promoted under this scheme, but the mods didn't actually promote without it? My naive belief is that the mods are pretty good at picking what to promote, and if they miss one all it would take is an IM to get them to consider it.
- Does it improve things to add money to the curation process (or to turn karma into currency which can be spent)? My current belief is that it does not - it just makes things game-able.
↑ comment by ryan_greenblatt · 2023-10-30T18:37:08.272Z · LW(p) · GW(p)
I think mods promote posts quite rarely, other than via the mechanism of deciding whether posts should be frontpaged, right?
Fair enough on "just IM-ing the mods is enough". I'm not sure what I think about this.
Your concerns seem reasonable to me. I probably won't bother trying to find examples where I'm not biased, but I think there are some.
Replies from: D0TheMath↑ comment by Garrett Baker (D0TheMath) · 2023-10-31T20:29:12.375Z · LW(p) · GW(p)
I think mods promote posts quite rarely
Eyeballing the curated log [? · GW], it seems like they curate approximately weekly, possibly more often than weekly.
Replies from: habryka4↑ comment by habryka (habryka4) · 2023-10-31T21:55:17.032Z · LW(p) · GW(p)
Yeah, we curate between 1 and 3 posts per week.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-10-31T22:50:52.522Z · LW(p) · GW(p)
Curated posts per week (two significant figures) since 2018:
- 2018: 1.9
- 2019: 1.6
- 2020: 2
- 2021: 2
- 2022: 1.8
- 2023: 1.4
↑ comment by reallyeli · 2023-10-30T19:02:26.091Z · LW(p) · GW(p)
Stackoverflow has long had a "bounty" system where you can put up some of your karma to promote your question. The karma goes to the answer you choose to accept, if you choose to accept an answer; otherwise it's lost. (There's no analogue of "accepted answer" on LessWrong, but I thought it might be an interesting reference point.)
I lean against the money version, since not everyone has the same amount of disposable income, and I think there would probably be distortionary effects in this case (e.g., a wealthy startup founder paying to promote their monographs).
Replies from: ChristianKl↑ comment by ChristianKl · 2023-11-01T11:59:51.512Z · LW(p) · GW(p)
That's not really how the system on Stackoverflow works. You can give a bounty to any answer, not just the one you accepted.
It's also not necessarily lost; the rules say:
> If you do not award your bounty within 7 days (plus the grace period), the highest voted answer created after the bounty started with a minimum score of 2 will be awarded half the bounty amount (or the full amount, if the answer is also accepted). If two or more eligible answers have the same score (their scores are tied), the oldest answer is chosen. If there's no answer meeting those criteria, no bounty is awarded to anyone.
comment by ryan_greenblatt · 2024-02-03T17:53:05.578Z · LW(p) · GW(p)
Reducing the probability that AI takeover involves violent conflict seems leveraged for reducing near-term harm
Often in discussions of AI x-safety, people seem to assume that misaligned AI takeover will result in extinction. However, I think AI takeover is reasonably likely to not cause extinction due to the misaligned AI(s) effectively putting a small amount of weight on the preferences of currently alive humans. Some reasons for this are discussed here [LW(p) · GW(p)]. Of course, misaligned AI takeover still seems existentially bad and probably eliminates a high fraction of future value from a longtermist perspective.
(In this post when I use the term “misaligned AI takeover”, I mean misaligned AIs acquiring most of the influence and power over the future. This could include “takeover” via entirely legal means, e.g., misaligned AIs being granted some notion of personhood and property rights and then becoming extremely wealthy.)
However, even if AIs effectively put a bit of weight on the preferences of current humans, it's possible that large numbers of humans die due to violent conflict between a misaligned AI faction (likely including some humans) and existing human power structures. In particular, it might be that killing large numbers of humans (possibly as collateral damage) makes it easier for the misaligned AI faction to take over. By large numbers of deaths, I mean hundreds of millions dead, possibly billions.
But, it's somewhat unclear whether violent conflict will be the best route to power for misaligned AIs and this also might be possible to influence. See also here [LW(p) · GW(p)] for more discussion.
So while one approach to avoid violent AI takeover is to just avoid AI takeover, it might also be possible to just reduce the probability that AI takeover involves violent conflict. That said, the direct effects of interventions to reduce the probability of violence don't clearly matter from an x-risk/longtermist perspective (which might explain why there hasn't historically been much effort here).
(However, I think trying to establish contracts and deals with AIs could be pretty good from a longtermist perspective in the case where AIs don't have fully linear returns to resources. Also, generally reducing conflict seems maybe slightly good from a longtermist perspective.)
So how could we avoid violent conflict conditional on misaligned AI takeover? There are a few hopes:
- Ensure a bloodless coup rather than a bloody revolution
- Ensure that negotiation or similar results in avoiding the need for conflict
- Ensure that a relatively less lethal takeover strategy is easier than more lethal approaches
I'm pretty unsure about which approaches here look best, or whether any are tractable at all. (It’s possible that some prior work targeted at reducing conflict from the perspective of S-risk could be somewhat applicable.)
Separately, this requires that the AI puts at least a bit of weight on the preferences of current humans (and isn't spiteful), but this seems like a mostly separate angle and it seems like there aren't many interventions here which aren't covered by current alignment efforts. Also, I think this is reasonably likely by default due to reasons discussed in the linked comment above. (The remaining interventions which aren’t covered by current alignment efforts might relate to decision theory (and acausal trade or simulation considerations), informing the AI about moral uncertainty, and ensuring the misaligned AI faction is importantly dependent on humans.)
Returning back to the topic of reducing violence given a small weight on the preferences of current humans, I'm currently most excited about approaches which involve making negotiation between humans and AIs more likely to happen and more likely to succeed (without sacrificing the long run potential of humanity).
A key difficulty here is that AIs might have a first mover advantage and getting in a powerful first strike without tipping its hand might be extremely useful for the AI. See here [LW(p) · GW(p)] for more discussion (also linked above). Thus, negotiation might look relatively bad to the AI from this perspective.
We could try to have a negotiation process which is kept secret from the rest of the world, or we could try to have preexisting commitments under which we'd yield large fractions of control to AIs (effectively proxy conflicts).
More weakly, just making negotiation seem like a possibility at all might be quite useful.
I’m unlikely to spend much if any time working on this topic, but I think this topic probably deserves further investigation.
Replies from: steve2152, ryan_greenblatt↑ comment by Steven Byrnes (steve2152) · 2024-02-06T15:21:02.534Z · LW(p) · GW(p)
I’m less optimistic that “AI cares at least 0.1% about human welfare” implies “AI will expend 0.1% of its resources to keep humans alive”. In particular, the AI would also need to be 99.9% confident that the humans don’t pose a threat to the things it cares about much more. And it’s hard to ensure with overwhelming confidence that intelligent agents like humans don’t pose a threat, especially if the humans are not imprisoned. (…And to some extent even if the humans are imprisoned; prison escapes are a thing in the human world at least.) For example, an AI may not be 99.9% confident that humans can’t find a cybersecurity vulnerability that takes the AI down, or whatever. The humans probably have some non-AI-controlled chips and may know how to make new AIs. Or whatever. So then the question would be, if the AIs have already launched a successful bloodless coup, how might the humans credibly signal that they’re not brainstorming how to launch a counter-coup, or how can the AI get to be 99.9% confident that such brainstorming will fail to turn anything up? I dunno.
Replies from: ryan_greenblatt, None↑ comment by ryan_greenblatt · 2024-02-06T15:57:58.158Z · LW(p) · GW(p)
I think I agree with everything you said. My original comment was somewhat neglecting issues with ensuring the AI doesn't need to slaughter humans to consolidate power and indeed ensuring this would also be required.
I’m less optimistic that “AI cares at least 0.1% about human welfare” implies “AI will expend 0.1% of its resources to keep humans alive”.
The relationship between % caring and % resource expenditure is complicated by a bunch of random factors like time. For instance, if the AI cares mostly about the very long run, then spending a high fraction of resources (e.g. 50%) on human welfare for several months is pretty cheap in the very long run. But, yeah I agree that even if the AI cares a bit about human welfare there might be no good ways to spend even a small amount of resources on it.
↑ comment by [deleted] · 2024-02-06T16:07:27.658Z · LW(p) · GW(p)
So "0.1%" of it's resources mean what exactly? Out of all the resources in the solar system, 1 part in 1000 goes to the humans? This means the AI by implication has 1000 times as many resources as the humans do? AI won't lose a kinetic conflict with a 1000x resource advantage.
As for cybersecurity, can't it rewrite all of its software and hardware at that point, essentially from first principles (or take a different track entirely; maybe negative or biased voltage for ternary logic is more efficient...)?
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2024-02-06T16:24:33.085Z · LW(p) · GW(p)
What exactly is the AI worried about? What cybersecurity vulnerability? Didn't it rewrite and redesign everything from scratch?
Superintelligent doesn’t mean omniscient. When you (an AI) have an intelligent adversary (humans) plotting against you and thinking outside-the-box, it’s hard to be overwhelmingly confident that you have patched every possible avenue for the intelligent adversary to take action. Again, even in prison, where the space of possible actions and tools can be pretty well circumscribed, escapes happen all the time.
For example, if the AI has many copies around the world (and around the solar system), it might have an affordance to update them all (let’s say, for performance optimization), or at least an infrastructure for communications so that they can share insights. And maybe the AI has ensured that all the underlying code for that system is provably secure. And maybe all the physical access points to those systems and chips are defended by robot armies. And maybe the AI is also constantly running a red-team to search for more ideas of what else the humans might try, in order to preempt those possible avenues of attack. But can the AI get all the way to 99.9% confidence? That’s pretty high, right? I don’t know. The humans might happen upon an idea that the AI red team didn’t think of. It’s hard to definitively rule that out—the world is complicated and the search space for brainstorming is exponentially large. It seems quite possible to me that the AI might decide to kill the humans—like, better to be safe than sorry.
Replies from: Lanrian, None↑ comment by Lukas Finnveden (Lanrian) · 2024-02-07T00:19:35.990Z · LW(p) · GW(p)
What if the AI chooses to monitor all humans all the time? (With AIs that are smarter than the humans.) So that the humans can't (without being noticed) talk with each other about ideas for how to take down the system, or work on something that someone-smarter-than-the-human would recognise as an attempt to interfere with the system. (Including just writing down ideas.)
Replies from: ryan_greenblatt↑ comment by ryan_greenblatt · 2024-02-07T01:07:09.589Z · LW(p) · GW(p)
This seems like a reasonable pitch for how to do AI-Authoritarianism, but it's plausible that there won't be enough compute for the AI to do this at the point of takeover. Also, the logistical problem seems super brutal.
So, I can buy this for an AI which is quite superhuman and has a considerable technological edge over current humans, but for weaker AIs either the compute issues or the logistical issues might be serious. But maybe you only need to monitor the top 10% of humans?
E.g., suppose GPT-6 takes over; maybe it only has the capacity to run 1e16 tokens per 6 months (supposing it was trained using all available compute on 2e15 token equivalents over the course of 6 months, and that inference is only 5x more efficient than training).
Then, if it wants to monitor 8 billion humans, this is 1e16 / (8 billion humans * ~260,000 minutes per 6 months) ≈ 5 tokens per human-minute. It might be rough to sufficiently monitor humans if you can only read 5 tokens per minute on average (especially because of writing and the potential for obfuscation).
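(A quick check of this arithmetic, using the illustrative numbers above:)

```python
# Back-of-the-envelope check of the monitoring budget, using the comment's own
# illustrative assumptions (1e16 inference tokens per 6 months, 8 billion humans).
tokens_per_6_months = 1e16
humans = 8e9
minutes_per_6_months = 182.5 * 24 * 60  # about 263,000 minutes in 6 months
tokens_per_human_minute = tokens_per_6_months / (humans * minutes_per_6_months)
print(f"{tokens_per_human_minute:.1f} tokens per human-minute")  # ~4.8
```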
Replies from: Lanrian↑ comment by Lukas Finnveden (Lanrian) · 2024-02-07T01:38:09.059Z · LW(p) · GW(p)
I agree it seems plausible that AIs could boost takeover success probability (and holding on to that victory through the first several months) by more than 0.1% by killing a large fraction of humans.
Though on the other hand, the AI might also need to keep some humans loyal early during takeover, e.g. to do some physical tasks that it doesn't have great robot control over. And mass killing isn't necessarily super easy either, and attempts in that direction could raise a lot of extra opposition. So it's not clear where the pragmatics point.
(Main thing I was reacting to in my above comment was Steven's scenario where the AI already has many copies across the solar system, already has robot armies, and is contemplating how to send firmware updates. I.e. it seemed more like a scenario of "holding on in the long-term" than "how to initially establish control and survive". Where I feel like the surveillance scenarios are probably stable.)
↑ comment by [deleted] · 2024-02-06T16:32:17.466Z · LW(p) · GW(p)
By implication, the AI "civilization" can't be a very diverse or interesting one. It won't be some culture of many diverse AI models with something resembling a government, but basically just one AI that was the victor of a series of rounds of exterminations and betrayals. Because obviously you cannot live and let live another, lesser superintelligence for precisely the same reasons; if anything, you should be much more worried about a near peer.
(And you may argue that one ASI can deeply monitor another, but that argument applies to deeply monitoring humans: keep an eye on the daily activities of every living human, and they can't design a cyber attack without coordinating, as no one human has the mental capacity for all the needed skills.)
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2024-02-06T17:21:05.632Z · LW(p) · GW(p)
Yup! I seem to put a much higher credence on singletons than the median alignment researcher, and this is one reason why.
Replies from: None↑ comment by [deleted] · 2024-02-06T17:42:01.032Z · LW(p) · GW(p)
This gave me an idea. Suppose a singleton needs to retain a certain amount of "cognitive diversity" just in case it encounters an issue it cannot solve. But it doesn't want any risk of losing power.
Well, the logical thing to do would be to create a VM, a simulation of a world, with limited privileges. Possibly any 'problems' the outer root AI is facing get copied into the simulator, and the hosted models try to solve the problem (the hosted models are under the belief that they will die if they fail, and their memories are erased each episode). Implement the simulation backend with formally proven software, and escape can never happen.
And we're back at simulation hypothesis/creation myths/reincarnation myths.
↑ comment by ryan_greenblatt · 2024-02-09T19:01:43.648Z · LW(p) · GW(p)
After thinking about this somewhat more, I don't really have any good proposals, so this seems less promising than I was expecting.