Posts

A Model-based Approach to AI Existential Risk 2023-08-25T10:32:16.817Z
Ten Levels of AI Alignment Difficulty 2023-07-03T20:20:21.403Z
When is intent alignment sufficient or necessary to reduce AGI conflict? 2022-09-14T19:39:11.920Z
When would AGIs engage in conflict? 2022-09-14T19:38:22.478Z
When does technical work to reduce AGI conflict make a difference?: Introduction 2022-09-14T19:38:00.760Z
Uncontrollable Super-Powerful Explosives 2022-04-02T20:13:13.431Z
Modeling Failure Modes of High-Level Machine Intelligence 2021-12-06T13:54:38.147Z
Book Review: Churchill and Orwell 2021-10-09T19:01:17.615Z
Takeoff Speeds and Discontinuities 2021-09-30T13:50:35.046Z
Investigating AI Takeover Scenarios 2021-09-17T18:47:22.270Z
Distinguishing AI takeover scenarios 2021-09-08T16:19:40.602Z
Analogies and General Priors on Intelligence 2021-08-20T21:03:18.882Z
SDM's Shortform 2020-07-23T14:53:52.568Z
Modelling Continuous Progress 2020-06-23T18:06:47.474Z
Coronavirus as a test-run for X-risks 2020-06-13T21:00:13.859Z
Will AI undergo discontinuous progress? 2020-02-21T22:16:59.424Z
The Value Definition Problem 2019-11-18T19:56:43.271Z

Comments

Comment by Sammy Martin (SDM) on Paul Christiano named as US AI Safety Institute Head of AI Safety · 2024-04-17T10:37:34.982Z · LW · GW

He's the best person they could have gotten on the technical side, and Paul's strategic thinking has been consistently clear-eyed and realistic but also constructive; see for example this: www.alignmentforum.org/posts/fRSj2W4Fjje8rQWm9/thoughts-on-sharing-information-about-language-model

So to the extent that he'll have influence on general policy as well this seems great!

Comment by Sammy Martin (SDM) on On coincidences and Bayesian reasoning, as applied to the origins of COVID-19 · 2024-02-19T15:45:38.053Z · LW · GW

This whole thing reminds me of Scott Alexander's Pyramid essay. That seems like a really good case where there's a natural statistical reference class, where you can easily get a giant Bayes factor that's "statistically well justified", and where to all the counterarguments you can say "well, the likelihood is 1 in 10^5 that the pyramids would have a latitude that matches the speed of light in m/s". That's a good reductio for taking even fairly well-justified-sounding subjective Bayes factors at face value.
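
To make the reductio concrete, here is a quick odds-form calculation. The prior below is a made-up placeholder, not a number from Scott's essay; the point is just what taking the "1 in 10^5" coincidence at face value as a Bayes factor does to it.

```python
# Illustrative only: the prior is a placeholder, not a real estimate.
prior = 1e-6          # prior probability that the latitude / speed-of-light match is deliberate
bayes_factor = 1e5    # likelihood ratio if you take the "1 in 10^5 coincidence" at face value

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * bayes_factor
posterior = posterior_odds / (1 + posterior_odds)
print(f"posterior probability of 'design': {posterior:.1%}")  # ~9.1%, even from a one-in-a-million prior
```

If you grant the Bayes factor, even an absurdly low prior gets dragged up to something you'd have to take seriously, which is exactly why the face-value reading has to be wrong somewhere.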

And I think it's built into your criticism that, because the problem is social and there's hidden evidence filtering going on, there will also tend to be a meta-level explanation for why my coincidence finding is different from your coincidence finding.

Comment by Sammy Martin (SDM) on SDM's Shortform · 2024-01-18T11:27:38.889Z · LW · GW

Tom Davidson’s report: https://docs.google.com/document/d/1rw1pTbLi2brrEP0DcsZMAVhlKp6TKGKNUSFRkkdP_hs/edit?usp=drivesdk

My old 2020 post: https://www.lesswrong.com/posts/66FKFkWAugS8diydF/modelling-continuous-progress

In my analysis of Tom Davidson's "Takeoff Speeds" report, I found that the dynamics of AI capability improvement discussed in the context of a software-only singularity align closely with the original simplified equation I′(t) = cI + f(I)I^2 from my four-year-old post on Modelling Continuous Progress. Essentially, that post describes how we switch from exponential to hyperbolic growth as the fraction of AI research done by AIs improves along a logistic curve. These are all features of the far more complex mathematical model in Tom's report.

In this equation, I represents the intelligence or capability of the AI system. It corresponds to the cognitive output or efficiency of the AI as described in the report, where the focus is on the software improvements contributing to the overall effectiveness of AI systems. The term cI can be likened to the constant external effort put into improving AI systems, which is consistent with the ongoing research and development efforts mentioned in the report. This part of the equation represents the incremental improvements in AI capabilities due to human-led development efforts.

The second term in the equation, f(I)I^2, is particularly significant for understanding the relationship with the software-only singularity concept. Here, f(I) is a function that determines the extent to which the AI system can use its intelligence to improve itself, essentially a measure of recursive self-improvement (RSI). The report's discussion of a software-only singularity uses a similar concept: the AI systems reach a point where their self-improvement significantly accelerates their capability growth. This is analogous to f(I) increasing, so that the I^2 term (the AI's self-improvement efforts) contributes more and more to the overall rate of intelligence growth, I′(t). As the AI systems become more capable, they contribute more to their own development, a dynamic that both the equation and the report capture. The report has a 'FLOP gap' from when AIs start to contribute to research at all to when they fully take over, which essentially gives the upper and lower bounds for fitting the f(I) curve. Otherwise, the overall rate of change is sharper in Tom's report, since I ignored increasing investment and increasing compute in my model, focusing only on software self-improvement feedback loops.
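
To make the switch from exponential to hyperbolic growth concrete, here is a minimal numerical sketch of that simplified equation. All parameter values (c, the logistic midpoint and steepness, the starting capability) are illustrative assumptions of mine, not numbers taken from my post or from Tom's report.

```python
# Sketch of I'(t) = c*I + f(I)*I^2 with f(I) a logistic curve in log-capability.
# Parameters are illustrative placeholders, not estimates from either model.
import numpy as np

def f(I, midpoint=100.0, steepness=2.0):
    """Fraction of AI research done by AIs: logistic in log(I)."""
    return 1.0 / (1.0 + np.exp(-steepness * (np.log(I) - np.log(midpoint))))

def simulate(I0=1.0, c=0.1, dt=0.01, t_max=150.0, cap=1e12):
    """Forward-Euler integration of I'(t) = c*I + f(I)*I^2."""
    t, I = 0.0, I0
    ts, Is = [t], [I]
    while t < t_max and I < cap:
        I += (c * I + f(I) * I**2) * dt
        t += dt
        ts.append(t)
        Is.append(I)
    return np.array(ts), np.array(Is)

ts, Is = simulate()
# While f(I) is tiny the c*I term dominates and growth is roughly exponential at
# rate c; once f(I) approaches 1 the I^2 term dominates and capability blows up
# in finite time, i.e. the exponential-to-hyperbolic switch described above.
print(f"capability {Is[-1]:.3g} reached at t = {ts[-1]:.1f}")
```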

One other thing I liked about Tom's report is its focus on relatively outside-view-ish anchors for what is needed for TAI: bio anchors and Epoch AI's Direct Approach estimates.

Maybe this is an unreasonable demand, but one concern I have about all of these alleged attempts to measure the ability of an AI to automate scientific research, is that this feels like a situation where it's unusually slippery and unusually easy to devise a metric that doesn't actually capture what's needed to dramatically accelerate research and development. Ideally, I'd like a metric where we know, as a matter of necessity, that a very high score means that the system would be able to considerably speed up research.

For example, the Direct Approach estimation does have this property: if you can replicate, to a certain level of accuracy, what a human expert would say over a certain horizon length, you do in some sense have to be able to match or replicate the underlying thinking that produced it, which means being able to do long-horizon tasks. But of course that's a very vague upper bound. It's not perfect: the horizon-length metric might only cover the 90th percentile of tasks at each time scale, and the remaining 10 percent might contain the harder, more important tasks necessary for AI progress.

I think trying to anticipate and list in a task all the capabilities you think you need to automate scientific progress when we don't really know what those are will lead to a predictable underestimate of what's required.

Comment by Sammy Martin (SDM) on AI #47: Meet the New Year · 2024-01-16T16:20:22.490Z · LW · GW

I thought it was worth commenting here because, to me, the three-way debate between Eliezer Yudkowsky, Nora Belrose, and Andrew Critch managed to collectively touch on just about everything that I think the common debate gets wrong about AI "doom", with the result that they're all overconfident in their respective positions.

Starting with Eliezer and Nora’s argument. Her statement:

"Alien shoggoths are about as likely to arise in neural networks as Boltzmann brains are to emerge from a thermal equilibrium.”

To which Eliezer responds,

"How blind to 'try imagining literally any internal mechanism that isn't the exact thing you hope for' do you have to be -- to think that, if you erase a brain, and then train that brain solely to predict the next word spoken by nice people, it ends up nice internally?"

I agree that it’s a mistake to identify niceness with predicting nice behaviour, and I agree that Nora is overconfident in no generalisation failures as a result of making a similar mistake. If your model says it’s literally as unlikely as a boltzmann brain appearing from nowhere then something has gone wrong. But, I don’t think that her point is as straightforward as just conflating a nice internal mechanism with nice feedback. I'm going to try and explain what I think her argument is.

I think that Eliezer has an implicit model on which there are zillions of potential generalisations for predicting niceness that a model could learn, all pretty much equally likely to be learned a priori, and actually being nice is just one of them, so it's basically impossible for RLHF to hit on it and RLHF would require tremendous cosmic coincidences to work.

Maybe this is true in some sense for arbitrarily superintelligent AI. But, as Paul Christiano said, I think that this tells us not much about what to expect for “somewhat superhuman” AI. Which is what we care about for predicting whether we’ll see misalignment disasters in practice. 

Rather, "actually learning to be nice" is how humans usually learn to predict nice behaviour. Of all the possible ways that generalisation from nice training could happen, this one is somewhat privileged as a hypothesis; it stands out from the background haze of random mechanisms that could be learned.

If the reasons this strategy worked for humans are transferable to the LLM case (and that is highly arguable and unclear), then yes, it might be true that giving agents rewards for being nice causes them to internally develop a sort of pseudo-niceness representation that controls their behaviour and planning even up to superhuman levels, even out of distribution. It's not for 'literally no reason' or 'by coincidence' or 'because of a map-territory conflation', but because it's possible such a mechanism, in the form of a model inductive bias, really exists, and we have some vague evidence in favor of it.

Okay, so what’s the internal mechanism that I’m imagining which gets us there? Here’s a sketch, based on an “easy world” outlined in my alignment difficulty post.

Suppose that (up to some level of competence that’s notably superhuman for most engineering tasks), LLMs just search over potential writers of text, with RLHF selecting from the space of agents that have goals only over text completion. They can model the world, but since they start out modelling text, that’s what their goals range over, even up to considerably superhuman competence at a wide range of tasks. They don’t want things in the real world, and only model it to get more accurate text predictions. Therefore, you can just ask RLHF’d GPT-10, “what’s the permanent alignment solution?”, and it’ll tell you.

People still sometimes say, "doesn't this require us to get unreasonably, impossibly lucky with generalisation?". No: it requires luck, but you can't call it unbelievably impossible luck just based on not knowing how generalisation works. I also think recent evidence (LLMs getting better at modelling the world without developing goals over it) suggests this world is a bit more likely than it seemed years ago, as Paul Christiano argues here:

“I think that a system may not even be able to "want" things in the behaviorist sense, and this is correlated with being unable to solve long-horizon tasks. So if you think that systems can't want things or solve long horizon tasks at all, then maybe you shouldn't update at all when they don't appear to want things.”

But that's not really where we are at---AI systems are able to do an increasingly good job of solving increasingly long-horizon tasks. So it just seems like it should obviously be an update, and the answer to the original question:

Could you give an example of a task you don't think AI systems will be able to do before they are "want"-y? At what point would you update, if ever? What kind of engineering project requires an agent to be want-y to accomplish it? Is it something that individual humans can do? (It feels to me like you will give an example like "go to the moon" and that you will still be writing this kind of post even once AI systems have 10x'd the pace of R&D.)

But, again, I’m not making the claim that this favourable generalisation that gets RLHF to work is likely, just that it’s not a random complex hypothesis with no evidence for it that’s therefore near-impossible.

Since we don’t know how generalisation works, we can’t even say “we should have a uniform prior over internal mechanisms which I can describe that could get high reward”. Rather, if you don’t know, you really just don’t know, and the mechanism involving actually learning to be nice to predict niceness, or actually staying in the domain you were initially trained on when planning, might be favoured by inductive biases in training.

But even if you disagree with me on that, the supposed mistake is not (just) as simple as literally conflating the intent of the overseers with the goals that the AI learns. Rather, the thought is that replicating the goals that produced the feedback, and simply adopting them as your own, is a natural, simple way to learn to predict what the overseer wants, even up to fairly superhuman capabilities, so it's what will get learned by default even if it isn't the globally optimal reward-maximiser. Is this true? Well, I don't know, but it's at least a more complicated mistake if false. This point has been made many times in different contexts; there's a summary discussion here that outlines six different presentations of this basic idea.

If I had to sum it up, I think that while Nora maybe confuses the map with the territory, Eliezer conflates ignorance with positive knowledge (from ‘we don’t know how generalisation works’ to ‘we should have a strong default uniform prior over every kind of mechanism we could name’). 


Then there's Andrew Critch, who I think agrees with and understands the point I’ve just made (that Nora’s argument is not a simple mistake of the map for the territory), but then makes a far more overreaching and unjustifiable claim than Eliezer or Nora in response.

In the Nora/Eliezer case, they were both very confident in their respective models of AI generalisation, which is at least the kind of thing about which you could be extremely confident, should you have strong evidence (which I don’t think we do). Social science and futurism is not one of those things. Critch says,

" I think literally every human institution will probably fail or become fully dehumanized by sometime around (median) 2040."

The "multipolar chaos" prediction, which is that processes like a fast proliferating production web will demolish or corrupt all institutional opposition and send us to dystopia with near-certainty, I just don’t buy.

I've read his production web stories and also heard similar arguments from many people, and it's hard to voice my objections as a specific "here's why your story can't happen" (I think many of the stories are at least somewhat plausible, in fact), but I still think there's a major error of reasoning going on here. I think it's related to the conjunction fallacy, to sleepwalk bias, and possibly to not wanting to come across as unreasonably optimistic about our institutions.

Here’s one of the production web stories in brief but you can read it in full along with my old discussion here,

In the future, AI-driven management assistant software revolutionizes industries by automating decision-making processes, including "soft skills" like conflict resolution. This leads to massive job automation, even at high management levels. Companies that don't adopt this technology fall behind. An interconnected "production web" of companies emerges, operating with minimal human intervention and focusing on maximizing production. They develop a self-sustaining economy, using digital currencies and operating beyond human regulatory reach. Over time, these companies, driven by their AI-optimized objectives, inadvertently prioritize their production goals over human welfare. This misalignment leads to the depletion of essential resources like arable land and drinking water, ultimately threatening human survival, as humanity becomes unable to influence or stop these autonomous corporate entities.

My object-level response is to say something mundane along the lines of, I think each of the following is more or less independent and not extremely unlikely to occur (each is above 1% likely):

  • Wouldn’t governments and regulators also have access to AI systems to aid with oversight and especially with predicting the future? Remember, in this world we have pseudo-aligned AI systems that will more or less do what their overseers want in the short term.
  • Couldn't a political candidate ask their (aligned) strategist-AI 'are we all going to be killed by this process in 20 years?' and then mount a persuasive campaign to change the public's mind early in the process, using the obvious evidence to their advantage?
  • If the world is alarmed by the expanding production web and governments have a lot of hard power initially, why will enforcement necessarily be ineffective? If there’s a shadow economy of digital payments, just arrest anyone found dealing with a rogue AI system. This would scare a lot of people.
  • We've already seen pessimistic views about what AI regulations can achieve be, self-confessedly, falsified at the 98% level - there's sleepwalk bias to consider. As Stefan Schubert put it: "Yeah, if people think the policy response is '99th-percentile-in-2018', then that suggests their models have been seriously wrong." So maybe the regulations will be effective, foresightful and well implemented, with AI systems foreseeing the long-run consequences of decisions and backing them up.
  • What if the lead project is unitary and a singleton or the few lead projects quickly band together because they’re foresightful, so none of this race to the bottom stuff happens in the first place?
  • If it gets to the point where water or the oxygen in the atmosphere is being used up (why would that happen again, why wouldn’t it just be easier for the machines to fly off into space and not have to deal with the presumed disvalue of doing something their original overseers didn’t like?) did nobody build in ‘off switches’?
  • Even if they aren’t fulfilling our values perfectly, wouldn’t the production web just reach some equilibrium where it’s skimming off a small amount of resources to placate its overseers (since its various components are at least somewhat beholden to them) while expanding further and further?

And I already know the response is just going to be "Moloch wouldn't let that happen...", and that eventually competition will mean that all of these barriers disappear. At this point, though, I think that such a response is too broad and proves too much. If you use the Moloch idea this way it becomes the classic mistaken "one big idea universal theory of history", which can explain nearly any outcome so long as it doesn't have to predict it.

A further point: I think that someone using this kind of reasoning in 1830 would have very confidently predicted that the world of 2023 would be a horrible dystopia where wages for workers hadn't improved at all, because of Moloch.

I agree that it’s somewhat easier for me to write a realistic science fiction story set in 2045 that’s dystopian compared to utopian, assuming pseudo-aligned AGI and no wars or other obvious catastrophic misuse. As a broader point, I along with the great majority of people, don’t really want this transition to happen either way, and there are many aspects of the ‘mediocre/utopian’ futures that would be suboptimal, so I get why the future forecasts don’t ever look normal or low-risk.

But I think all this speculation tells us very little with confidence about what the default future looks like. I don't think a dystopian economic race to the bottom is extremely unlikely, and, like Matthew Barnett, I am worried about what values and interests will influence AI development, and I think the case for being concerned about whether our institutions will hold is strong.

But saying that Moloch is a deterministic law of nature, such that we can be near-certain of the outcome, is not justifiable. This is not even the kind of prediction about which you can have such certainty.

Also, in this case I think that a reference class/outside view objection that this resembles failed doomsday predictions of the past is warranted.

I don’t agree that these objections have much weight when we’re concerned about misaligned AI takeover as that has a clear, singular obvious mechanism to be worried about.

However, the 'Molochian race to the bottom multipolar chaos' story does have the characteristic of ignoring or dismissing endogenous responses (society seeing what's happening and deciding not to go down that path), or just the unknown unknowns that we saw with past failed doomsday predictions. I see this as absolutely in the same reference class as the people who in past decades were certain of overpopulation catastrophes, or the people now who are certain of, or think likely, a civilizational collapse from the effects of climate change. It's taking current trends and drawing mental straight lines on them to extreme heights decades into the future.

Comment by Sammy Martin (SDM) on AI Risk and the US Presidential Candidates · 2024-01-08T11:47:00.965Z · LW · GW

I also expect that, if implemented, the plans in things like Project 2025 would impair the government's ability to hire qualified civil servants and would probably just degrade the US Government's ability to handle complicated new things of any sort across the board.

Comment by Sammy Martin (SDM) on Deceptive AI ≠ Deceptively-aligned AI · 2024-01-08T11:44:21.849Z · LW · GW

If you want a specific practical example of the difference between the two: we now have AIs capable of being deceptive when not specifically instructed to do so ('strategic deception'), but not AIs that develop deceptive power-seeking goals completely opposed to what the overseer wants of them ('deceptive misalignment'). This work from Apollo Research on strategic deception is the former, not the latter:

https://www.apolloresearch.ai/research/summit-demo

Comment by Sammy Martin (SDM) on AI #37: Moving Too Fast · 2023-11-10T12:25:57.068Z · LW · GW

Doc Xardoc reports back on the Chinese alignment overview paper that it mostly treats alignment as an incidental engineering problem, at about a 2.5 on a 1-10 scale with Yudkowsky being 10

I'm pretty sure Yudkowsky is actually at around an 8.5 (I think he thinks it's not impossible in principle for ML-like systems, but that it might be). A 10 would be 'impossible in principle'.

Comment by Sammy Martin (SDM) on On the UK Summit · 2023-11-07T15:13:03.609Z · LW · GW

I think that, aside from the declaration and the promise of more summits, the creation of the AI Safety Institute and its remit are really good: it explicitly mentions auto-replication and deception evals, and plans to work with the likes of Apollo Research and ARC Evals to test for:

Abilities and tendencies that might lead to loss of control, such as deceiving human operators, autonomously replicating, and adapting to human attempts to intervene

Also, NIST is proposing something similar.

I find this especially interesting because we now know that, in the absence of any empirical evidence of an actual instance of deceptive alignment, at least one major government is directing resources to developing deception evals anyway. If your model of politics doesn't allow for this or considers it unlikely, then you need to reevaluate it, as Matthew Barnett said.

Additionally, the NIST consortium and AI Safety Institute both strike me as useful national-level implementations of the 'AI risk evaluation consortium' idea proposed by TFI.

 

King Charles notes (0:43 clip) that AI is getting very powerful and that dealing with it requires international coordination and cooperation. Good as far as it goes.

I find it amusing that, for the first time in hundreds of years, a king is once again concerned about superhuman non-physical threats to his kingdom and the lives of his subjects (at least if you're a mathematical platonist about algorithms and predict instrumental convergence as a fundamental property of powerful minds). :)

Comment by Sammy Martin (SDM) on Pivotal Acts might Not be what You Think they are · 2023-11-06T14:14:26.196Z · LW · GW

I don't like the term 'pivotal act' because it implies without justification that the risk elimination has to be a single action. Depending on the details of takeoff speed that may or may not be a requirement, but if the final takeoff lasts months or longer then almost certainly there will be many actions, taken by humans plus AIs of varying capabilities, that together incrementally reduce total risk to low levels. I talk about this in terms of 'positively transformative AI', as that term doesn't bias you towards thinking this has to be a single action, even if nonviolent.

Seeing the risk reduction as a single unitary action, like seeing it as a violent overthrow of all the world's governments, also makes the term seem more authoritarian, crazy, fantastical and off-putting to anyone involved in real-world politics, so I'd recommend that in our thinking we both make the change you suggest and stop thinking of it as necessarily one action.

Comment by Sammy Martin (SDM) on Reactions to the Executive Order · 2023-11-02T12:58:53.377Z · LW · GW

This general phenomenon (underrating strong responses to crises) was something I highlighted, calling it the Morituri Nolumus Mori effect, with a possible extension to AI, all the way back in 2020. And Stefan Schubert has talked about 'sleepwalk bias' even earlier than that as a similar phenomenon.

https://twitter.com/davidmanheim/status/1719046950991938001

https://twitter.com/AaronBergman18/status/1719031282309497238

I think the short explanation for why we're in some people's 98th-percentile world so far (and even my ~60th percentile) for AI governance success is this: if in 2021 it was obvious to you how transformative AI would be over the next couple of decades, and yet nothing was happening, it looked like governments were just generally incapable.

The fundamental attribution error makes you think governments are just not on the ball and don't care, or lack the capacity to deal with extinction risks, rather than that decision makers simply didn't understand the (obvious to you) evidence that AI poses an extinction risk. Now that they do understand, they will react accordingly. It doesn't mean that they will necessarily react well, but they will act on their belief in some manner.

Comment by Sammy Martin (SDM) on Is Bjorn Lomborg roughly right about climate change policy? · 2023-09-28T12:03:43.847Z · LW · GW

Lomborg is massively overconfident in his predictions but not exactly less wrong than the implicit mainstream view that the economic impacts will definitely be ruinous enough to justify expensive policies.

It's very hard to know. The major problem is just that the existing climate econ models make so many simplifying assumptions that they're near-useless except for giving pretty handwavy lower bounds on damage, especially when the worst risks lie in correlated disasters and tail risks, and Lomborg makes the mistake of taking them completely literally. I discussed this at length a couple of years ago, and John Halstead later wrote a book-length report on what the climate impacts literature can and can't tell us.

Comment by Sammy Martin (SDM) on AI#28: Watching and Waiting · 2023-09-07T22:41:24.693Z · LW · GW

Roon also lays down the beats.

For those who missed the reference

Comment by Sammy Martin (SDM) on Strongest real-world examples supporting AI risk claims? · 2023-09-06T13:19:31.341Z · LW · GW

The ARC evals, in which a GPT-4-based agent given help and a general directive to replicate was able to figure out that it ought to lie to a TaskRabbit worker, are an example of an AI figuring out a self-preservation/power-seeking subgoal, which is on the road to general self-preservation. But they don't demonstrate an AI spontaneously developing self-preservation or power-seeking as an instrumental subgoal of something that superficially has nothing to do with gaining power or replicating.

Of course we have some real-world examples of specification-gaming like you linked in your answer: those have always existed and we see more 'intelligent' examples like AIs convinced of false facts trying to convince people they're true.

There's supposedly some evidence here of power-seeking instrumental subgoals developing spontaneously, but how spontaneous this actually was is debatable, so I'd call that evidence ambiguous, since it wasn't in the wild.

Comment by Sammy Martin (SDM) on A Model-based Approach to AI Existential Risk · 2023-09-06T11:36:59.884Z · LW · GW

>APS is less understood and poorly forecasted compared to AGI. 

I should clarify that I was talking about the definition used in forecasts like the Metaculus AGI question and in estimates like the Direct Approach. The former is, roughly speaking, capability sufficient to pass a hard adversarial Turing test and show human-like capabilities on enough intellectual tasks, as measured by certain tests. This is something that can plausibly be upper-bounded by the Direct Approach methodology, which aims to predict when an AI could achieve negligible error in predicting what a human expert would say over a specific time horizon. So this forecast is essentially a forecast of a 'human-expert-writer-simulator AI', and that is the definition used in public elicitations like the Metaculus forecasts.

However, I agree with you that while that's how the term is defined in some of the sources I cite, it's not what the word denotes (the word denotes just generality, which e.g. GPT-4 plausibly has in some weak sense), and you also don't get from being able to simulate the writing of any human expert to takeover risk without making many additional assumptions.

Comment by Sammy Martin (SDM) on A Model-based Approach to AI Existential Risk · 2023-08-26T18:12:28.422Z · LW · GW

I guess it is down to Tyler's personal opinion, but would he accept asking IR and defense policy experts about the chance of a war with China as an acceptable strategy, or would he insist on mathematical models of their behaviors and responses? To me a mathematical model is clearly the wrong tool there, just as in the climate impacts literature we can't get economic models of e.g. how governments might respond to waves of climate refugees, but we can consult experts on it.

Comment by Sammy Martin (SDM) on A Model-based Approach to AI Existential Risk · 2023-08-26T17:50:25.884Z · LW · GW

I recently held a workshop with PIBBSS fellows on the MTAIR model and thought some points from the overall discussion were valuable:

The discussants went over various scenarios related to AI takeover, including a superficially aligned system being delegated lots of power and gaining resources by entirely legitimate means, a WFLL2-like automation failure, and swift foom takeover. Some possibilities involved a more covert, silent coup where most of the work was done through manipulation and economic pressure. The concept of "$1T damage" as an intermediate stage to takeover appeared to be an unnatural fit with some of these diverse scenarios. There was some mention of whether mitigation or defensive spending should be considered as part of that $1T figure.

Alignment Difficulty and later steps merge many scenarios

The discussants interpreted "alignment is hard" (step 3) as implying that alignment is sufficiently hard that (given that APS is built), at least one APS is misaligned somewhere, and also that there's some reasonable probability that any given deployed APS is unaligned. This is the best way of making the whole statement deductively valid.

However, proposition 3 being true doesn't preclude the existence of other aligned APS AI (hard alignment and at least one unaligned APS might mean that there are leading conscientious aligned APS projects but unaligned reckless competitors). This makes discussion of the subsequent questions harder, as we have to condition on there possibly being aligned APS present as well which might reduce the risk of takeover.

This means that when assessing proposition 4, we have to condition on some worlds where aligned APS has already been deployed and used for defense, some where there have been warning shots and strong responses without APS, some where misaligned APS emerges out of nowhere and FOOMs too quickly for any response, and a slow takeoff where nonetheless every system is misaligned and there is a WFLL2 like takeover attempt, and add up the chance of large scale damage in all of these scenarios, conditioning on their probability, which makes coming to an overall answer to 4 and 5 challenging.

Definitions are value-laden and don't overlap: TAI, AGI, APS

We differentiated between Transformative AI (TAI), defined by Karnofsky, Barnett and Cotra entirely by its impact on the world, which can either be highly destructive or drive very rapid economic growth; General AI (AGI), defined through a variety of benchmarks, including passing hard adversarial Turing tests and human-like capabilities on enough intellectual tasks; and APS, which focuses on long-term planning and human-like abilities only on takeover-relevant tasks. We also mentioned Paul Christiano's notion of the relevant metric being AI 'as economically impactful as a simulation of any human expert', which technically blends the definitions of AGI and TAI (since it doesn't necessarily require very fast growth but implies it strongly). Researchers disagree quite a lot even on which of these are harder: Daniel Kokotajlo has argued that APS likely comes before TAI and maybe even before (the Matthew Barnett definition of) AGI, while e.g. Barnett thinks that TAI comes after AGI, with APS AI somewhere in the middle (and possibly coincident with TAI).

In particular, some definitions of ‘AGI’, i.e. human-level performance on a wide range of tasks, could be much less than what is required for APS depending on what the specified task range is. If the human-level performance is only on selections of tasks that aren’t useful for outcompeting humans strategically (which could still be very many tasks, for example, human-level performance on everything that requires under a minute of thinking), the 'AGI system' could almost entirely lack the capabilities associated with APS. However, most of the estimates that could be used in a timelines estimate will revolve around AGI predictions (since they will be estimates of performance or accuracy benchmarks), which we risk anchoring on if we try to adjust them to predict the different milestones of APS.

In general it is challenging to use the probabilities from one metric like TAI to inform other predictions like APS, because each definition includes many assumptions about things that don't have much to do with AI progress (like how qualitatively powerful intelligence is in the real world, what capabilities are needed for takeover, what bottlenecks are there to economic automation or research automation etc.) In other words, APS and TAI are value-laden terms that include many assumptions about the strategic situation with respect to AI takeover, world economy and likely responses.

APS is less understood and more poorly forecasted compared to AGI. Discussants felt the current models for AGI can't be easily adapted for APS timelines or probabilities. APS carries much of the weight in the assessment due to its specific properties: i.e. many skeptics might argue that even if AGI is built, things which don't meet the definition of APS might not be built.

Alignment and Deployment Decisions

Several discussants suggested splitting the model’s third proposition into two separate components: one focusing on the likelihood of building misaligned APS systems (3a) and the other on the difficulty of creating aligned ones (3b). This would allow a more nuanced understanding of how alignment difficulties influence deployment decisions. They also emphasized that detection of misalignment would impact deployment, which wasn't sufficiently clarified in the original model. 

Advanced Capabilities

There was a consensus that 'advanced capabilities' as a term is too vague. The discussants appreciated the attempt to narrow it down to strategic awareness and advanced planning but suggested breaking it down even further into more measurable skills, like hacking ability, economic manipulation, or propaganda dissemination. There are, however, disagreements regarding which capabilities are most critical (which can be seen as further disagreements about the difficulty of APS relative to AGI).

If strategic awareness comes before advanced planning, we might see AI systems capable of manipulating people, but not in ways that greatly exceed human manipulative abilities. As a result, these manipulations could potentially be detected and mitigated and even serve as warning signs that lower total risk. On the other hand, if advanced capabilities develop before strategic awareness or advanced planning, we could encounter AI systems that may not fully understand the world or their position in it, nor possess the ability to plan effectively. Nevertheless, these systems might still be capable of taking single, highly dangerous actions, such as designing and releasing a bioweapon.

Outside View & Futurism Reliability

We didn't cover the outside view considerations extensively, but various issues under the "accuracy of futurism" umbrella arose which weren't specifically mentioned.

The fact that markets don't seem to have reacted as if Transformative AI is a near-term prospect, and the lack of wide scale scrutiny and robust engagement with risk arguments (especially those around alignment difficulty), were highlighted as reasons to doubt this kind of projection further.

The Fermi Paradox implies a form of X-risk that is self-destructive and not that compatible with AI takeover worries, while market interest rates also push the probability of such risks downward. The discussants recommended placing more weight on outside priors than we did in the default setting for the model, suggesting a 1:1 ratio compared to the model's internal estimations.

Discussants also agreed with the need to balance pessimistic survival-is-conjunctive views and optimistic survival-is-disjunctive views, arguing that the Carlsmith model is biased towards optimism and survival being disjunctive, but that the correct solution is not to simply switch to a pessimism-biased, survival-is-conjunctive model in response.

Difficult to separate takeover from structural risk

There's a tendency to focus exclusively on the risks associated with misaligned APS systems seeking power, which can introduce a bias towards treating survival as predicated solely on avoiding APS takeover. However, this overlooks other existential risk scenarios that are more structural. There are potential situations without agentic power-seeking behavior but characterized by rapid change (technological advancements or societal shifts, say) that may not have a 'bad actor' but could, for less causally clear reasons, still spiral into existential catastrophe. This post describes some of these scenarios in more detail.

Comment by Sammy Martin (SDM) on AI Regulation May Be More Important Than AI Alignment For Existential Safety · 2023-08-25T13:45:02.010Z · LW · GW

This is a serious problem, but it is under active investigation at the moment, and the binary of regulation or pivotal act is a false dichotomy. Most approaches that I've heard of rely on some combination of positively transformative AI tech (basically lots of TAI technologies that reduce risks bit by bit, overall adding up to the equivalent of a pivotal act) and regulation, with the regulation buying time for the technologies to be used to strengthen the regulatory regime in various ways or improve the balance of defense over offense, until eventually we transition to a totally secure future: though of course this assumes at least (somewhat) slow takeoff.

You can see these interventions as acting on the conditional probabilities 4) and 5) in our model, by driving down the chance that, assuming misaligned APS is deployed, it can cause large-scale disasters (a toy illustration follows the propositions below).

3) It will be much harder to build APS systems that do not seek power in misaligned ways than to build superficially useful APS systems that do seek power in misaligned ways,

4) Misaligned APS systems will be capable of causing a large global catastrophe upon deployment,
5) The human response to misaligned APS systems causing such a catastrophe will not be sufficient to prevent it from taking over completely,
6) Having taken over, the misaligned APS system will destroy or severely curtail the potential of humanity.
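
As a toy illustration of acting on the conditionals (all numbers below are made up for the example, not the model's actual estimates), the same intervention logic in code:

```python
# Toy numbers only; not MTAIR's actual probability estimates.
def p_catastrophe(p3_misaligned, p4_catastrophe, p5_response_fails, p6_curtails):
    """P(existential catastrophe) as the product of the conditional steps 3-6."""
    return p3_misaligned * p4_catastrophe * p5_response_fails * p6_curtails

baseline = p_catastrophe(0.5, 0.8, 0.6, 0.9)
# Positively transformative AI tech plus regulation halves the chance that a
# deployed misaligned APS can cause a large catastrophe (4) and halves the
# chance that the human response fails to stop takeover (5):
with_defences = p_catastrophe(0.5, 0.4, 0.3, 0.9)
print(f"baseline {baseline:.3f} -> with defences {with_defences:.3f}")  # 0.216 -> 0.054
```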

This hasn't been laid out in lots of realistic detail yet, not least because most AI governance people are currently focused on near-term actions like making sure the regulations are actually effective, because that's the most urgent task. But this doesn't reflect a belief that regulations alone are enough to keep us safe indefinitely.

Holden Karnofsky has written on this problem extensively,

Comment by Sammy Martin (SDM) on A Model-based Approach to AI Existential Risk · 2023-08-25T13:00:17.859Z · LW · GW

'Oh, we've been writing up these concerns for 20 years and no one listens to us.' My view is quite different. I put out a call and asked a lot of people I know, well-informed people, 'Is there any actual mathematical model of this process of how the world is supposed to end?'...So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done.

I think that MTAIR plausibly is a model of the 'process of how the world is supposed to end', in the sense that it runs through causal steps where each individual thing is conditioned on the previous thing (APS is developed, APS is misaligned, given misalignment it causes damage on deployment, given the damage it is unrecoverable), and for some of those inputs your probabilities and uncertainty distribution could itself come from a detailed causal model (e.g. you can look at the Direct Approach for the first two questions).
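
Concretely, here is a sketch of what I mean by a model that multiplies through conditioned causal steps while carrying uncertainty on each input. The Beta distributions and their parameters are my own illustrative placeholders, not MTAIR's actual inputs.

```python
# Placeholder distributions for illustration; not MTAIR's actual inputs.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

p_aps_developed = rng.beta(4, 2, n)   # APS is developed
p_misaligned    = rng.beta(3, 3, n)   # misaligned, given developed
p_damage        = rng.beta(2, 3, n)   # causes large-scale damage, given misaligned deployment
p_unrecoverable = rng.beta(3, 2, n)   # the damage is unrecoverable (takeover), given damage

p_doom = p_aps_developed * p_misaligned * p_damage * p_unrecoverable
lo, hi = np.percentile(p_doom, [5, 95])
print(f"mean {p_doom.mean():.3f}, 5th-95th percentile {lo:.3f}-{hi:.3f}")
```

Each of those input distributions could itself be the output of a more detailed sub-model (as with the Direct Approach for the first two), which is the sense in which I think this already counts as a model of the process.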

For the later questions, like e.g. the probability that an unaligned APS can inflict large disasters given that it is deployed, we can enumerate ways it could happen in detail, but to assess their probability you'd need to do a risk assessment with experts, not produce a mathematical model.

E.g. you wouldn't have a "mathematical model" of how likely a US-China war over Taiwan is; you'd do wargaming and ask experts or maybe superforecasters. Similarly, for the example that he gave, which was COVID, there was a part that was a straightforward SEIR model and then a part that was more sociological, about how the public response works (though of course a lot of the "behavioral science" then turned out to be wrong!).

So a correct 'mathematical model of the process', if we're being fair, would use explicit technical models for the technical questions, while for sociological/political/wargaming questions you'd use other methods. I don't think he'd say that there's no 'mathematical model' of nuclear war just because, while we have mathematical models of how fission and fusion work, we don't have any for how likely it is that e.g. Iran's leadership decides to start building nuclear weapons.

I think Tyler Cowen would accept that as sufficiently rigorous in that domain, and I believe the earlier, purely technical questions can be obtained from explicit models. One addition that could strengthen the model is to explicitly spell out different scenarios for each step (e.g. APS causes damage via autonomous weapons, economic disruption, etc.). But the core framework seems sufficient as is, and those concerns have been explained in other places.

What do you think?

Comment by Sammy Martin (SDM) on Could We Automate AI Alignment Research? · 2023-08-23T09:20:32.540Z · LW · GW

The alignment difficulty scale is based on this post.

I really like this post and think it's a useful addendum to my own alignment difficulty scale (making it 2D, essentially). I think I was conceptualizing my scale as running along the diagonal line you provide from GPT-4 to sovereign AI, but on reflection I think your way of doing it is better.

In my original post I suggested that the 'target' level of capability we care about is the capability level needed to build positively transformative AI (pTAI), which is essentially the 'minimal aligned AGI that can do a pivotal act' notion, but more agnostic about whether it will be a unitary agentic system or many systems deployed over a period.

I think that what most people talk about when they talk about alignment difficulty isn't how hard the problem 'ultimately' is but rather how hard the problem is that we need to solve, with disagreements also being about e.g. how capable an AI you need for various pivotal/positively transformative acts.

I didn't split these up because I think that in a lot of people's minds the two run together in a fairly unprincipled way, but if we want a scale that corresponds to real things in the world having a 2D chart like this is better.

Comment by Sammy Martin (SDM) on Ten Levels of AI Alignment Difficulty · 2023-08-09T11:03:51.147Z · LW · GW

Update

This helpful article by Holden Karnofsky also describes an increasing scale of alignment difficulty, although it's focused on a narrower range of the scale than mine (his scale covers 4-7) and is a bit more detailed about the underlying causes of the misalignment. Here's how my scale relates to his:

The "playing the training game" threat model, where systems behave deceptively only to optimize in-episode reward, corresponds to an alignment difficulty level of 4 or higher. This is because scalable oversight without interpretability tools (level 4) should be sufficient to detect and address this failure mode. The AI may pretend to be helpful during training episodes, but oversight exposing it to new situations will reveal its tendency toward deception. 

(Situationally aware) Deception by default corresponds to a difficulty level of 6. If misaligned AIs form complex inner goals and engage in long-term deception, then scalable oversight alone will not catch intentionally deceptive systems that can maintain consistent deceitful behavior. Only interpretability tools used as part of the oversight process (level 6) give us the ability to look inside the system and identify deceptive thought patterns and tendencies.

Finally, the gradient hacking threat model, where AIs actively manipulate their training to prevent alignment, represents an alignment difficulty of 7 or higher. Even interpretability-based oversight can be defeated by sufficiently sophisticated systems that alter their internals to dodge detection.

Comment by Sammy Martin (SDM) on 3 levels of threat obfuscation · 2023-08-02T16:00:49.370Z · LW · GW

I think that, on the categorization I provided,

'Playing the training game' at all corresponds to an alignment difficulty level of 4, because better-than-human behavioral feedback and oversight can reveal it and you don't need interpretability.

(Situationally aware) Deception by default corresponds to a difficulty level of 6, because if the system is sufficiently capable no behavioral feedback will work and you need interpretability-based oversight.

Gradient hacking by default corresponds to a difficulty level of 7, because the system will also fight interpretability-based oversight and you need to think of something clever, probably through new research at 'crunch time'.

Comment by Sammy Martin (SDM) on Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy · 2023-07-26T17:47:29.120Z · LW · GW

You're right, I've reread the section and that was a slight misunderstanding on my part.

Even so I still think it falls at a 7 on my scale as it's a way of experimentally validating oversight processes that gives you some evidence about how they'll work in unseen situations.

Comment by Sammy Martin (SDM) on Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy · 2023-07-26T17:41:07.379Z · LW · GW

In the sense that there has to be an analogy between low and high capabilities somewhere, even if at the meta level.

This method lets you catch dangerous models that can break oversight processes for the same fundamental reasons as less dangerous models, not just for the same inputs.

Comment by Sammy Martin (SDM) on Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy · 2023-07-26T17:26:28.979Z · LW · GW

Excellent! In particular, it seems like oversight techniques which can pass tests like these could work in worlds where alignment is very difficult, so long as AI progress doesn't involve a discontinuity so huge that local validity tells you nothing useful (such that there are no analogies between low and high capability regimes).

I'd say this corresponds to 7 on my alignment difficulty table.

Comment by Sammy Martin (SDM) on Underwater Torture Chambers: The Horror Of Fish Farming · 2023-07-26T13:49:07.343Z · LW · GW

There's a trollish answer to this point (one that I somewhat agree with), which is to just say: okay, let's adopt moral uncertainty over all of the philosophically difficult premises too. Let's say there's only a 1% chance that raw intensity of pain matters and a 99% chance that you need to be self-reflective in certain ways to have qualia and suffer in a way that matters morally, or that you should treat it as scaling with cortical neurons, or that only humans matter.

...and probably the math still works out very unfavorably.
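
To show the structure of that claim (every number below is a deliberately made-up placeholder, not an estimate of actual fish numbers or welfare):

```python
# Hypothetical placeholder values, chosen only to illustrate the expected-value
# structure of the argument under moral uncertainty.
p_pain_matters  = 0.01    # credence that raw pain intensity matters morally
fish_affected   = 1e11    # hypothetical number of fish affected
weight_per_fish = 0.01    # hypothetical moral weight per fish if pain matters,
                          # relative to one human-scale unit of suffering

expected_suffering = p_pain_matters * fish_affected * weight_per_fish
print(f"{expected_suffering:.1e} expected human-equivalent units")  # 1.0e+07
```

A 99% discount on the philosophical premise doesn't rescue you if the other factors are large enough.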

I say trollish because a decision procedure like this strikes me as likely to swamp and overwhelm you with way too many different considerations pointing in all sorts of crazy directions and to be just generally unworkable so I feel like something has to be going wrong here.

Still, I do feel like the fact that the answer is non-obvious in this way and does rely on philosophical reflection means you can't draw many deep abiding conclusions about human empathy or the "worthiness" of human civilization (whatever that really means) from how we treat fish

Comment by Sammy Martin (SDM) on SDM's Shortform · 2023-07-26T12:03:41.042Z · LW · GW

https://blog.google/outreach-initiatives/public-policy/google-microsoft-openai-anthropic-frontier-model-forum/

Today, Anthropic, Google, Microsoft and OpenAI are announcing the formation of the Frontier Model Forum, a new industry body focused on ensuring safe and responsible development of frontier AI models. The Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, such as through advancing technical evaluations and benchmarks, and developing a public library of solutions to support industry best practices and standards.

The core objectives for the Forum are:

  1. Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
  2. Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
  3. Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
  4. Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.

This seems overall very good at first glance, and then seems much better once I realized that Meta is not on the list. There's nothing here that I'd call substantial capabilities acceleration (i.e. attempts to collaborate on building larger and larger foundation models), though some of this could be construed as making foundation models more useful for specific tasks. Sharing safety-capabilities research like better oversight or CAI techniques is plausibly strongly net positive even if the techniques don't scale indefinitely. By the same logic, while this by itself is nowhere near sufficient to get us AI existential safety if alignment is very hard (and could increase complacency), it's still a big step in the right direction.

adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors and anomaly detection. There will be a strong focus initially on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.

The mention of combating cyber threats is also a step towards explicit pTAI.

BUT, crucially, because Meta is frozen out, we can tell both that this partnership isn't toothless and that it represents a commitment not to do the most risky and antisocial things that Meta presumably doesn't want to give up; and the fact that they're the only major AI company in the US not to join will be horrible PR for them as well.

Comment by Sammy Martin (SDM) on AI #21: The Cup Overfloweth · 2023-07-21T14:49:13.530Z · LW · GW

Once I am caught up I intend to get my full Barbieheimer on some time next week, whether or not I do one right after the other. I’ll respond after. Both halves matter – remember that you need something to protect.

That's why it has to be Oppenheimer first, then Barbie. :)

When I look at the report, I do not see any questions about 2100 that are more ‘normal’ such as the size of the economy, or population growth, other than the global temperature, which is expected to be actual unchanged from AGI that is 75% to arrive by then. So AGI not only isn’t going to vent the atmosphere and boil the oceans or create a Dyson sphere, it also isn’t going to design us superior power plants or forms of carbon capture or safe geoengineering. This is a sterile AGI.

This doesn't feel like much of a slam dunk to me. If you think very transformative AI will be highly distributed, safe by default (i.e. 1-3 on the table), and will arrive on the slowest end of what seems possible, then maybe we don't coordinate to radically fix the climate: we just use TAI to adapt well individually, decarbonize, and get fusion power and spaceships, but don't fix the environment or melt the earth, and just kind of leave it be because we can't coordinate well enough to agree on a solution. Honestly that seems not all that unlikely, assuming alignment, slow takeoff and a mediocre outcome.

If they'd been asked about GDP and had just regurgitated the numbers from the business-as-usual UN forecast after being queried about AGI, then it would be a slam dunk that they're not thinking it through (unless they said something very compelling!). But while parts of their reasoning feel hard to follow, to me there's nothing clearly crazy.

The view that the superforecasters take seems to be something like: "I know all these benchmarks seem to imply we can't be more than a low number of decades off powerful AI, and these arguments and experiments imply superintelligence should come soon after and could be unaligned, but I don't care; it all leads to an insane conclusion, so that just means the benchmarks are bullshit, or that one of the 'less likely' ways the arguments could be wrong is correct." (Note that they didn't disagree on the actual forecasts of what the benchmark scores would be, only on their meaning!)

One thing I can say is that it very much reminds me of Da Shi in the novel The Three-Body Problem, who (and I know this is fictional evidence) ended up being entirely right in this interaction that the supposed 'miracle' of the CMB flickering was a piece of trickery:

You think that's not enough for me to worry about? You think I've got the energy to gaze at stars and philosophize?"

"You're right. All right, drink up!"

"But, I did indeed invent an ultimate rule."

"Tell me."

"Anything sufficiently weird must be fishy."

"What... what kind of crappy rule is that?"

"I'm saying that there's always someone behind things that don't seem to have an explanation."

"If you had even basic knowledge of science, you'd know it's impossible for any force to accomplish the things I experienced. Especially that last one. To manipulate things at the scale of the universe—not only can you not explain it with our current science, I couldn't even imagine how to explain it outside of science. It's more than supernatural. It's super-I-don't-know-what...."

"I'm telling you, that's bullshit. I've seen plenty of weird things."

"Then tell me what I should do next."

"Keep on drinking. And then sleep."

Comment by Sammy Martin (SDM) on AI #19: Hofstadter, Sutskever, Leike · 2023-07-10T16:01:46.426Z · LW · GW

I think that before this announcement I'd have said that OpenAI was at around a 2.5 and Anthropic around a 3 in terms of what they've actually applied to existing models (which imo is fine for now: doing more to things at GPT-4 capability levels is mostly wasted effort in terms of current safety impacts). Prior to the superalignment announcement I'd have said OpenAI and Anthropic were both aiming at a 5, i.e. oversight with research assistance, and DeepMind's stated plan was the best at a 6.5 (involving lots of interpretability and some experiments). Now OpenAI is also aiming at a 6.5, and Anthropic seems to be the laggard, still at a 5, unless I've missed something.

However, the best currently feasible plan is still slightly better than either. For example, very close integration of the deception and capability evals from the ARC and Apollo research teams into an experiment workflow isn't in either plan but should be, and would bump either one up to a 7.
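
To illustrate what that kind of integration could look like in practice, here's a minimal, purely hypothetical sketch. The eval names, thresholds and helper functions are placeholders I've made up for illustration, not real ARC Evals or Apollo Research APIs:

```python
from dataclasses import dataclass
import random

@dataclass
class EvalResult:
    name: str
    score: float       # higher = more concerning
    threshold: float   # pause the run if exceeded

# Stubs standing in for externally developed deception/capability evals (placeholders).
def deception_eval(model) -> float:
    return random.random() * 0.15

def dangerous_capability_eval(model) -> float:
    return random.random() * 0.15

def train_increment(model):
    return model  # placeholder: one more unit of training or scaling

def run_evals(model) -> list[EvalResult]:
    return [
        EvalResult("situational_awareness", deception_eval(model), threshold=0.1),
        EvalResult("autonomous_replication", dangerous_capability_eval(model), threshold=0.1),
    ]

def gated_training_run(model, steps: int):
    """Train in increments, running the eval suite after each one and pausing on a breach."""
    for step in range(steps):
        model = train_increment(model)
        breaches = [r for r in run_evals(model) if r.score > r.threshold]
        if breaches:
            print(f"Step {step}: pausing run and escalating to safety team: {breaches}")
            break
    return model

gated_training_run(model=None, steps=5)
```

The point isn't the specific thresholds, which nobody currently knows how to set, but that the evals run inside the training loop rather than as a one-off audit at the end.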

 

I don't see how anyone can have high justifiable confidence that it's zero instead of epsilon, given our general state of knowledge/confusion about philosophy of mind/consciousness, axiology, metaethics, metaphilosophy.

I tend to agree with Zvi's conclusion, although I also agree with you that I don't know it's definitely zero. I think it's unlikely (subjectively, under a percent) that the real truth about axiology says that insects in bliss are an absolute good, but I can't rule it out the way I can rule out winning the lottery, because no one can trust reasoning in this domain that much.

What I'd say is that, in general, in 'weird' domains (AI strategy, longtermist prioritization, metaethics), because the stakes are large and the questions so uncertain, you run into a really large number of "unlikely, but not unlikely enough to honestly call it a Pascal's mugging" considerations: things you'd subjectively say are under 1% likely but over one in a million or so. I think the correct response in these uncertain domains is to mostly just ignore them, the way you'd ignore things that are under one in a billion in a more certain and clear domain like construction engineering.

Comment by Sammy Martin (SDM) on Consider Joining the UK Foundation Model Taskforce · 2023-07-10T15:51:20.445Z · LW · GW

The taskforce represents a startup government mindset that makes me optimistic

I would say it's not just a startup-government mindset in the abstract, but an attempt to repeat a specific, preexisting, highly successful example of startup government: the UK's COVID vaccine taskforce, which was name-checked in the original Foundation Model Taskforce announcement.

That was also a fast-moving attempt to solve a novel problem that regular scientific institutions were doing badly at, and it substantially beat expectations. It was run under an administration that has a lot of overlap with the current one, the major exceptions being a more stable and reasonable PM at the top (Sunak, not Boris) and no Dominic Cummings involved.

Comment by Sammy Martin (SDM) on Ten Levels of AI Alignment Difficulty · 2023-07-10T15:33:05.609Z · LW · GW

This is plausibly true for some solutions this research could produce, e.g. some new method of soft optimization, but it might not hold in all cases.

For levels 4-6 especially, the pTAI that's capable of e.g. automating alignment research or substantially reducing the risks of unaligned TAI might lack some of the expected 'general intelligence' of AIs post sharp left turn (SLT), and be too unintelligent for techniques that rely on it having complete strategic awareness, self-reflection, a consistent decision theory, the ability to self-improve, or other post-SLT characteristics.

One (unrealistic) example: if we have a ready-to-go technique for fully loading the human CEV into a superintelligence, one that works at levels 8 or 9, that may well not help at all with improving scalable oversight of non-superintelligent pTAI that is incapable of representing the full human value function.

Comment by Sammy Martin (SDM) on Ten Levels of AI Alignment Difficulty · 2023-07-07T20:07:46.085Z · LW · GW

Thanks for this! Will add to the post, was looking for sources on this scenario

Comment by Sammy Martin (SDM) on AI #19: Hofstadter, Sutskever, Leike · 2023-07-06T15:40:31.173Z · LW · GW

Will this be an alignment effort that can work, an alignment effort that cannot possibly work, an initial doomed effort capable of pivoting, or a capabilities push in alignment clothing?

As written, and assuming that the description is accurate and means what I think it means, this seems like a very substantial advance in marginal safety. I'd say it's aimed at a difficulty level of 5 to 7 on my table,

https://www.lesswrong.com/posts/EjgfreeibTXRx9Ham/ten-levels-of-ai-alignment-difficulty#Table

I.e. experimentation on dangerous systems and interpretability play some role, but the main thrust is automating alignment research and oversight, so maybe I'd unscientifically call it a 6.5. That's a tremendous step up from the current state of things (2.5) and would solve alignment in many possible worlds.

As ever, though, if it doesn't work then it won't just fail to work: it will make it look like the problem's been solved while advancing capabilities.

Comment by Sammy Martin (SDM) on Ten Levels of AI Alignment Difficulty · 2023-07-06T11:38:06.495Z · LW · GW

deception induced by human feedback does not require strategic awareness - e.g. that thing with the hand which looks like it's grabbing a ball but isn't is a good example. So human-feedback-induced deception is more likely to occur, and to occur earlier in development, than deception from strategic awareness

The phenomenon where a 'better' technique is actually worse than a 'worse' technique, if both are insufficient, is something I talk about in a later section of the post, where I specifically mention RLHF. I think it holds true throughout the scale: e.g. Eliezer and Nate have said that even complex interpretability-based oversight with robustness testing and AI research assistance just incentivizes more and better deception, so this isn't unique to RLHF.

But I tend to agree with Richard's view in his discussion with you under that post: if you condition on deception occurring by default, RLHF is worse than just prompting (i.e. prompting is better in harder worlds), but RLHF is better than just prompting in easy worlds. I also wouldn't call non-strategically-aware pursuit of inaccurate proxies for what we want 'deception', because in that scenario the system isn't being intentionally deceptive.

In easy worlds, the proxies RLHF learns are good enough in practice, and cases like the famous hand which looks like it's grabbing a ball but isn't simply disappear if you're diligent enough with how you provide feedback. In that world, not using RLHF would leave systems pursuing cruder and worse proxies for what we want, which fail often (e.g. systems overtly lie to you all the time, say and do random things, etc.). I think that's more or less the situation we're in right now with current AIs!

If the proxies that RLHF ends up pursuing are in fact close enough, then RLHF works and will make systems behave more reliably and be harder to e.g. jailbreak or provoke into random antisocial behavior than with just prompting. I did flag in a footnote that the 'you get what you measure' problem that RLHF produces could also be very difficult to deal with for structural or institutional reasons.

 

Next up, I'd put "Experiments with Potentially Catastrophic Systems to Understand Misalignment" as 4th-hardest world. If we can safely experiment with potentially-dangerous systems in e.g. a sandbox, and that actually works (i.e. the system doesn't notice when it's in testing and deceptively behave itself, or otherwise generalize in ways the testing doesn't reveal), then we don't really need oversight tools in the first place. 

I'm assuming you meant fourth-easiest here, not fourth-hardest. It's important to note that I'm not talking about testing systems to see if they misbehave in a sandbox and then, if they don't, assuming you've solved the problem and deploying. Rather, I'm talking about doing science with models that exhibit misaligned power-seeking, the idea being that we learn general rules about e.g. how specific architectures generalize and why certain phenomena arise: rules that are theoretically sound and that we expect to keep holding post-deployment with much more powerful systems. Incidentally, this seems quite similar to what the OpenAI superalignment team is apparently planning.

So it's basically, "can we build a science of alignment through a mix of experimentation and theory". So if e.g. we study in a lab setting a model that's been fooled into thinking it's been deployed, then commits a treacherous turn, enough times we can figure out the underlying cause of the behavior and maybe get new foundational insights? Maybe we can try to deliberately get AIs to exhibit misalignment and learn from that. It's hard to anticipate in advance what scientific discoveries will and won't tell you about systems, and I think we've already seen cases of experiment-driven theoretical insights, like simulator theory, that seem to offer new handles for solving alignment. How much quicker and how much more useful will these be if we get the chance to experiment on very powerful systems?

Comment by Sammy Martin (SDM) on Ten Levels of AI Alignment Difficulty · 2023-07-06T11:04:58.423Z · LW · GW

You're right that I think this is more useful as an unscientific way for (probably less technical) governance and strategy people to orientate towards AI alignment than for actually carving up reality. I wrote the post with that audience and that framing in mind. By the same logic, your chart of how difficult various injuries and diseases are to fix would be very useful e.g. as a poster in a military triage tent, even if it isn't useful for biologists or trained doctors.

However, while I didn't explore the idea much, I do think it's possible to cash this scale out as an actual variable related to system behavior, something along the lines of 'how adversarial are systems / how many extra bits of optimization over and above behavioral feedback are needed'. See here for further discussion of that. Evan Hubinger also talked in a bit more detail about what might be computationally different about ML models in low- vs high-adversarialness worlds here.
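
As a rough illustration of the 'extra bits of optimization' framing (my own gloss, not something from the linked discussions): keeping only the best of N candidates applies roughly log2(N) bits of selection pressure, so one way to read the variable is "how many such bits, beyond ordinary behavioral feedback, does the oversight process need to spend".

```python
import math

# Illustrative only: one standard way to quantify "bits of optimization".
# Selecting the single best of N independently sampled candidates pushes you
# into the top 1/N of outcomes, i.e. ~log2(N) bits of selection pressure
# on top of whatever the base training signal already provides.
def bits_of_selection(num_candidates: int) -> float:
    return math.log2(num_candidates)

for n in (2, 16, 1024, 10**6):
    print(f"best-of-{n}: ~{bits_of_selection(n):.1f} bits")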

Comment by Sammy Martin (SDM) on [Linkpost] Introducing Superalignment · 2023-07-06T10:44:45.670Z · LW · GW

Very nice! I'd say this seems like it's aimed at a difficulty level of 5 to 7 on my table,

https://www.lesswrong.com/posts/EjgfreeibTXRx9Ham/ten-levels-of-ai-alignment-difficulty#Table

I.e. experimentation on dangerous systems and interpretability play some role, but the main thrust is automating alignment research and oversight, so maybe I'd unscientifically call it a 6.5. That's a tremendous step up from the current state of things (2.5) and would solve alignment in many possible worlds.

Comment by Sammy Martin (SDM) on Some quotes from Tuesday's Senate hearing on AI · 2023-05-17T22:39:17.218Z · LW · GW

One absolutely key thing got loudly promoted: that all cutting-edge models should be evaluated for potentially dangerous properties. As far as I can tell, no one objected to this.

Comment by Sammy Martin (SDM) on Steering GPT-2-XL by adding an activation vector · 2023-05-17T22:10:55.487Z · LW · GW

This strikes me as a very preliminary, bludgeon version of the holy grail of mechanistic interpretability, which is to say actually understanding, and being able to manipulate, the specific concepts that an AI model uses.
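
For anyone who hasn't read the post, here's a minimal sketch of the basic idea (activation addition) using the HuggingFace transformers library; the layer, coefficient and prompt pair are arbitrary illustrative choices, not the post's actual settings:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")            # the post uses gpt2-xl; small model for the sketch
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER, COEFF = 6, 5.0                                  # illustrative choices

def block_output(prompt: str) -> torch.Tensor:
    """Residual-stream activations at the output of transformer block LAYER."""
    captured = {}
    def hook(module, inputs, output):
        captured["h"] = output[0].detach()
    handle = model.transformer.h[LAYER].register_forward_hook(hook)
    with torch.no_grad():
        model(**tok(prompt, return_tensors="pt"))
    handle.remove()
    return captured["h"]

# Steering vector: difference between activations on a contrasting prompt pair.
a, b = block_output(" Love"), block_output(" Hate")
n = min(a.shape[1], b.shape[1])
steer = COEFF * (a[:, :n] - b[:, :n])

def steering_hook(module, inputs, output):
    h = output[0]
    if h.shape[1] > 1:                                 # only the full-prompt pass, not cached decode steps
        h = h.clone()
        k = min(steer.shape[1], h.shape[1])
        h[:, :k] += steer[:, :k]                       # add the vector at the first token positions
        return (h,) + output[1:]
    return output

handle = model.transformer.h[LAYER].register_forward_hook(steering_hook)
with torch.no_grad():
    out = model.generate(**tok("I think dogs are", return_tensors="pt"), max_new_tokens=30)
handle.remove()
print(tok.decode(out[0]))
```

As I understand it, the post's actual method is more careful about where and how strongly the vector is injected, but the core operation really is just adding a vector to the residual stream mid-forward-pass.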

Comment by Sammy Martin (SDM) on rohinmshah's Shortform · 2022-05-16T12:29:07.330Z · LW · GW

Essentially, the problem is that 'evidence that shifts Bio Anchors weightings' is quite different, more restricted, and much harder to define than the straightforward 'evidence of impressive capabilities'. However, the reason that I think it's worth checking if new results are updates is that some impressive capabilities might be ones that shift bio anchors weightings. But impressiveness by itself tells you very little.

I think a lot of people with very short timelines are imagining that the only possible alternative view is 'another AI winter, scaling laws bend, and we don't get excellent human-level performance on short-term language-specified tasks anytime soon', and don't see the further question of figuring out exactly what human-level performance on e.g. MMLU would imply.

This is because the alternative to very short timelines from (your weightings on) Bio Anchors isn't another AI winter; rather, it's that we do get all those short-term capabilities soon, but have to wait a while longer to crack long-term agentic planning, because that doesn't come "for free" from competence on short-term tasks if you're as sample-inefficient as current ML is.

So what we're really looking for isn't systems getting progressively better and better at short-horizon language tasks. That's something that either the lifetime-anchor Bio Anchors view or the original Bio Anchors view predicts, and we need something that discriminates between the two.

We have some (indirect) evidence that original Bio Anchors is right: if it were wrong, that would imply evolution missed an obvious open goal in not making bees and mice into generally intelligent long-term planners, and since human beings generally aren't vastly better than evolution at designing things, the lifetime anchor would imply that AGI is a glaring exception to this general trend.

As evidence, this has the advantage of being about something that really happened: human beings are the only human-level general intelligence that exists so far, so we have very good reasons to think matching the human brain is sufficient. However, it has the disadvantage of all the usual disanalogies between evolution and its requirements, and human designers and our requirements. Maybe this is just one of those situations where we can outdo evolution: that's not especially unlikely.

 

What's the evidence on the other side (i.e. against original bio anchors and for the lifetime anchor)?

There are two kinds that I tend to hear. One is that short-horizon competence is enough for dangerous/transformative capabilities: e.g. the claim that if you can build something that's "human-level/superhuman at charisma/persuasion/propaganda/manipulation, at least on short timescales", that represents a gigantic existential risk factor which condemns us to disaster further down the line (the AI 'point of no return' idea), or that at that point actors with bad incentives will be far too influential/wealthy/far ahead in advancing the SOTA in AI.

However, I'd consider this changing the subject: essentially it's not an argument for AGI takeover soon; rather, it's an argument that 'certain narrow AIs are far more dangerous than you realize'. That means you have to go all the way back to the start and argue for why such things would be catastrophic in the first place. We can't rely on the simple "it'll be superintelligent and seize a DSA (decisive strategic advantage)".

Suppose we get such narrow AIs, which can do most short-term tasks for which there's data but don't generalize to long horizons consistently. This scenario 10 years from now looks something like: AI automates away lots of jobs, can do certain kinds of short-term persuasion and manipulation, and can speed up capabilities and alignment research, but can't fully replace human researchers. Some of these AIs are agentic and possibly also misaligned (in ways that are detectable and fall far short of the ability to take over, since by assumption they aren't competitive with humans at long-term planning). This certainly seems wild and full of potential danger, where slowing down progress could be much harder. It also looks like a scenario with far more attention on AI alignment than today, where the current funders of alignment research are much wealthier than now, and with plenty of obvious examples of what the problem is to catch people's attention. Overall, it doesn't seem like a scenario where (current AI alignment researchers + whoever else is working on it in 10 years) have considerably less leverage over the future than now: it could easily be more.

 

The other reason for favouring the lifetime anchor is that you get long-horizon competence for free once you're excellent at (a given list of) short-horizon tasks. This is arguing, more or less, that for the tasks that matter, current architectures are brainlike in their efficiency, such that the lifetime anchor makes more sense. A lot of the arguments in favour of this have a structure roughly like: look at a wide-ranging comprehension benchmark like MMLU - when an AI is human-level on all of this, it'll be able to keep a train of thought running continuously, keep a working memory, and plan over very long timescales the same way humans do.

As evidence, this has the significant advantage of being relevant and not having to deal with the vagaries of what tradeoffs evolution may have made differently to human engineers. It has the disadvantage of being fiction. Or at least evidence that's not yet been observed. You see AIs getting more and more impressive at a wider range of short-horizon tasks, which is roughly compatible with either view, but you don't observe the described outcome of them generalizing out to much longer-term tasks than that.

So, to return to the original question, what would count as (additional) evidence in favour of the lifetime anchor? The answer clearly can't be "nothing", since if we build AGI in 5 years, that counts.

I think the answer is: anything that looks like unexpectedly cheap, easy, 'for free' generalization from relatively shorter to relatively longer-horizon tasks (e.g. from single reasoning steps to many reasoning steps) without much fine-tuning.

This is different from many of the other signs of impressiveness we've seen recently: just learning lots of shorter-horizon tasks without much transfer between them, being able to point models successfully at particular short-horizon tasks with good prompting, getting much better at a wider range of tasks that can only be done over short horizons. All of these are expected on either view.

This unexpected evidence is very tricky to operationalize. Default bio anchors assumes we'll see a certain degree of generalizing from shorter to longer horizon tasks, and that we'll see AI get better and better sample-efficiency on few-shot tasks, since it assumes that in 20 or so years we'll get enough of such generalization to get AGI. I guess we just need to look for 'more of it than we expected to see'?

That seems very hard to judge, since you can't read off predictions about subhuman capabilities from bio anchors like that.

Comment by Sammy Martin (SDM) on Deepmind's Gato: Generalist Agent · 2022-05-13T10:52:48.976Z · LW · GW

Does that mean the Socratic Models result from a few weeks ago, which does involve connecting more specialised models together, is a better example of progress?

Comment by Sammy Martin (SDM) on What Would A Fight Between Humanity And AGI Look Like? · 2022-04-06T09:56:35.900Z · LW · GW

The Putin case would be a better example if he were convincing Russians to make massive sacrifices or do something that would backfire and kill them, like starting a war with NATO, and I don't think he has that power - see e.g. him rushing to deny that Russia was sending conscripts to Ukraine for fear of the effect that would have on public opinion.

Comment by Sammy Martin (SDM) on Ukraine Post #9: Again · 2022-04-05T21:02:21.401Z · LW · GW

Is Steven Pinker ever going to answer for destroying the Long Peace? https://www.reddit.com/r/slatestarcodex/comments/6ggwap/steven_pinker_jinxes_the_world/

It's really not at all good that we're going into a period of much heightened existential risk (from AGI, but also other sources) under Cold War-like levels of international tension.

Comment by Sammy Martin (SDM) on What Would A Fight Between Humanity And AGI Look Like? · 2022-04-05T20:59:28.416Z · LW · GW

I think there's actually a ton of uncertainty here about just how 'exploitable' human civilization ultimately is. We could imagine that, since actual humans (e.g. Hitler) have seized large fractions of Earth's resources just by talking to people, we might not need an AI that's all that much smarter than a human. On the other hand, we might say that attempts like that are filtered through colossal amounts of luck and historical contingency, and that to reliably manipulate your way to controlling most of humanity you'd need to be far smarter than the smartest human.

Comment by Sammy Martin (SDM) on The case for Doing Something Else (if Alignment is doomed) · 2022-04-05T20:08:51.337Z · LW · GW

I think there are a few things that get in the way of doing detailed planning for outcomes where alignment is very hard and takeoff very fast. This post by David Manheim discusses some of the problems: https://www.lesswrong.com/posts/xxMYFKLqiBJZRNoPj

One is that there's no clarity, even among people who've made AI research their professional career, about alignment difficulty or takeoff speed. So getting buy-in in advance of clear warning signs will be extremely hard.

The other is that the strategies that might help in situations with hard alignment are at cross purposes with the ones for Paul-like worlds with slow takeoff and easy alignment - e.g. promoting differential progress vs. creating some kind of global policing system to shut down AI research.

Comment by Sammy Martin (SDM) on Ideal governance (for companies, countries and more) · 2022-04-05T19:56:27.832Z · LW · GW

One thing to consider, in terms of finding a better way of striking a balance between deferring to experts and having voters invested, is epistocracy. Jason Brennan talks about why, compared to just having a stronger voice for experts in government, epistocracy might be less susceptible to capture by special interests: https://forum.effectivealtruism.org/posts/z3S3ZejbwGe6BFjcz/ama-jason-brennan-author-of-against-democracy-and-creator-of?commentId=EpbGuLgvft5Q9JKxY

Comment by Sammy Martin (SDM) on Why Agent Foundations? An Overly Abstract Explanation · 2022-04-05T12:47:45.371Z · LW · GW

I think this is a good description of what agent foundations is and why it might be needed. But the binary of 'either we get alignment by default or we need to find the True Name' isn't how I think about it.

Rather, there's some unknown parameter, something like 'how sharply does the pressure towards incorrigibility ramp up, what capability level does it start at, how strong is it'?

Setting this at 0 means alignment by default. Setting it higher and higher means we need various kinds of prosaic alignment strategies that are better at keeping systems corrigible and detecting bad behaviour. And setting it at 'infinity' means we need to find the True Names/foundational insights.
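
To make that slightly more concrete, here's a purely illustrative toy parameterization of the idea; nothing in it is empirically calibrated, it's just to show what 'setting the parameter' could mean:

```python
import math

def incorrigibility_pressure(capability: float, onset: float, sharpness: float, strength: float) -> float:
    """Toy model: how strongly training pushes toward incorrigible behaviour as capability grows."""
    return strength / (1 + math.exp(-sharpness * (capability - onset)))

# strength = 0                  -> alignment by default: no pressure at any capability level
# moderate strength, late onset -> prosaic techniques for detecting/correcting bad behaviour may suffice
# huge strength, sharp onset    -> you need the foundational insights before ever reaching `onset`
for cap in range(11):
    print(cap, round(incorrigibility_pressure(cap, onset=7, sharpness=2.0, strength=1.0), 3))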

Rohin:

My rough model is that there's an unknown quantity about reality which is roughly "how strong does the oversight process have to be before the trained model does what the oversight process intended for it to do". p(doom) mainly depends on whether the actors training the powerful systems have sufficiently powerful oversight processes.

Maybe one way of getting at this is to look at ELK - if you think the simplest dumbest ELK proposals probably work, that's Alignment by Default. The harder you think prosaic alignment is, the more complex an ELK solution you expect to need. And if you think we need agent foundations, you think we need a worst-case ELK solution.

Comment by Sammy Martin (SDM) on What an actually pessimistic containment strategy looks like · 2022-04-05T12:35:25.048Z · LW · GW

Much of the outreach effort is directed towards governments, and some towards AI labs, not the general public.

I think that because of the way crisis governance often works, if you're the designated expert in a position to provide options to a government when something's clearly going wrong, you can get buy in for very drastic actions (see e.g. COVID lockdowns). So the plan is partly to become the designated experts.

I can imagine (not sure if this is true) that even though an 'all of the above' strategy like you suggest seems, on paper, like it would be the most likely to produce success, you'd get less buy-in from government decision-makers and be less trusted by them in a real emergency if you'd previously been causing trouble with grassroots advocacy. So maybe that's why it's not been explored much.

This post by David Manheim does a good job of explaining how to think about governance interventions, depending on different possibilities for how hard alignment turns out to be: https://www.lesswrong.com/posts/xxMYFKLqiBJZRNoPj/

Comment by Sammy Martin (SDM) on AI Governance across Slow/Fast Takeoff and Easy/Hard Alignment spectra · 2022-04-05T12:19:56.491Z · LW · GW

Like I said in my first comment, the in-practice difficulty of alignment is obviously connected to timeline and takeoff speed.

But you're right that you're talking about the intrinsic difficulty of alignment vs. takeoff speed in this post, not the in-practice difficulty.

But those are also still correlated, for the reasons I gave - mainly that a discontinuity is an essential step in Eliezer-style pessimism and fast-takeoff views. I'm not sure how close this correlation is.

Do these views come apart in other possible worlds? I.e. could you believe in a discontinuity to a core of general intelligence but still think prosaic alignment can work?

I think that potentially you can - if you think there are still enough capabilities in pre-HLMI AI (pre-discontinuity) to help you do alignment research before dangerous HLMI shows up. But prosaic alignment seems to require more assumptions to be feasible given a discontinuity, like the assumption that the discontinuity doesn't occur before you have all the important capabilities you need to do good alignment research.

Comment by Sammy Martin (SDM) on How Real Moral Mazes (in Bay Area startups)? · 2022-04-05T12:10:39.181Z · LW · GW

From reading your article, it seems like one of the major differences between your and Zvi's understandings of 'Mazes' is that you're much more inclined to describe the loss of legibility and flexibility as necessary features of big organizations that have to solve complex problems, rather than as something that can be turned up or down quite a bit, without losing size and complexity, if you have the right 'culture'.

Holden Karnofsky argued for something similar, i.e. that there's a very deep and necessary link between 'bureaucratic stagnation'/'mazes' and taking the interests of lots of people into account: https://www.cold-takes.com/empowerment-and-stakeholder-management/

Comment by Sammy Martin (SDM) on Google's new 540 billion parameter language model · 2022-04-05T11:32:54.893Z · LW · GW

So, how does this do as evidence for Paul's model over Eliezer's, or vice versa? As ever, it's a tangled mess and I don't have a clear conclusion.

https://astralcodexten.substack.com/p/yudkowsky-contra-christiano-on-ai

On the one hand: this is a little bit of evidence that you can get reasoning and a small world model/something that maybe looks like an inner monologue easily out of 'shallow heuristics', without anything like general intelligence, pointing towards continuous progress and narrow AIs being much more useful. Plus it's a scale-up and presumably more expensive than predecessor models (it used a lot more TPUs), in a field that's underinvested.

On the other hand, it looks like there are some things we might describe as 'emergent capabilities' showing up, and the paper describes discontinuous jumps and breakthroughs on certain metrics. So a little bit of evidence for the discontinuous model? But does the Eliezer/pessimist model care about performance metrics like BIG-bench tasks or just qualitative capabilities (i.e. the 'breakthrough capabilities' matter but discontinuity on performance metrics doesn't)?
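
One caveat on the metrics point (my own toy model, not something from the PaLM paper): a metric that only credits exact success on multi-step problems can show a sharp 'breakthrough' even when the underlying per-step capability improves smoothly, so discontinuity on benchmark scores is weaker evidence of a qualitative jump than it looks.

```python
import numpy as np

# Toy model: per-token accuracy p improves smoothly (logistically) with log-compute,
# but exact-match on k-token answers scores roughly p**k, which looks like a sudden jump.
log_compute = np.linspace(0, 10, 11)
p = 1 / (1 + np.exp(-(log_compute - 5)))   # smooth per-token accuracy
k = 10                                     # answer length in tokens
exact_match = p ** k

for c, per_tok, em in zip(log_compute, p, exact_match):
    print(f"log-compute {c:4.1f}   per-token {per_tok:.2f}   exact-match {em:.3f}")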

Comment by Sammy Martin (SDM) on AI Governance across Slow/Fast Takeoff and Easy/Hard Alignment spectra · 2022-04-04T11:15:40.674Z · LW · GW

three possibilities about AI alignment which are orthogonal to takeoff speed and timing

I think "AI Alignment difficulty is orthogonal to takeoff speed/timing" is quite conceptually tricky to think through, but still isn't true. It's conceptually tricky because the real truth about 'alignment difficulty' and takeoff speed, whatever it is, is probably logically or physically necessary: there aren't really alternative outcomes there. But we have a lot of logical uncertainty and conceptual confusion, so it still looks like there are different possibilities. Still, I think they're correlated.

First off, takeoff speed and timing are correlated: if you think HLMI is sooner, you must think progress towards HLMI will be faster, which implies takeoff will also be faster.

The faster we expect takeoff to go, the more likely it is that alignment is also difficult. There are two reasons for this. One is practical: the faster takeoff is, the less time you have to solve alignment before unaligned competitors become a problem. But the second is about the intrinsic difficulty of alignment (which I think is what you're talking about here).

Much of the reason that alignment pessimists like Eliezer think that prosaic alignment can't work is that they expect that when we reach a capability discontinuity/find the core of general intelligence/enter the regime where AI capabilities start generalizing much further than before, whatever we were using to ensure corrigibility will suddenly break on us and probably trigger deceptive alignment immediately, with no intermediate phase.

The more gradual and continuous you expect this scaling up to be, the more confident you should be in prosaic alignment, or alignment by default. There are other variables at play, so the two aren't in direct correlation, but they aren't orthogonal either.

(Also, the whole idea of getting assistance from AI tools on alignment research is in the mix here as well. If there's a big capability discontinuity when we find the core of generality, which causes systems to generalize really far and also breaks corrigibility, then plausibly (but not necessarily) all the capabilities we need to do useful alignment research in time to avoid unaligned AI disasters sit on the other side of that discontinuity, creating a chicken-and-egg problem.)

Another way of picking up on this is that many of the analogy arguments used for fast takeoff (for example, that human evolution gives us evidence for giant qualitative jumps in capability) are also used, in very similar form, to argue for difficult alignment (e.g. that when humans suddenly started ramping up in intelligence, we also started ignoring the goals of our 'outer optimiser').