Posts

The ignorance of normative realism bot 2022-01-18T05:26:07.676Z
Morality and constrained maximization, part 2 2022-01-12T01:42:45.097Z
Morality and constrained maximization, part 1 2021-12-22T08:47:19.228Z
Reviews of “Is power-seeking AI an existential risk?” 2021-12-16T20:48:26.808Z
Anthropics and the Universal Distribution 2021-11-28T20:35:06.737Z
On the Universal Distribution 2021-10-29T17:50:15.849Z
SIA > SSA, part 4: In defense of the presumptuous philosopher 2021-10-01T05:59:51.534Z
SIA > SSA, part 3: An aside on betting in anthropics 2021-10-01T05:52:02.111Z
SIA > SSA, part 2: Telekinesis, reference classes, and other scandals 2021-10-01T05:49:19.901Z
SIA > SSA, part 1: Learning from the fact that you exist 2021-10-01T05:43:25.247Z
Can you control the past? 2021-08-27T19:39:29.993Z
In search of benevolence (or: what should you get Clippy for Christmas?) 2021-07-20T01:12:12.162Z
On the limits of idealized values 2021-06-22T02:10:50.073Z
Draft report on existential risk from power-seeking AI 2021-04-28T21:41:19.684Z
Problems of evil 2021-04-19T08:06:42.895Z
The innocent gene 2021-04-05T03:31:29.782Z
The importance of how you weigh it 2021-03-29T04:59:41.327Z
On future people, looking back at 21st century longtermism 2021-03-22T08:23:06.743Z
Against neutrality about creating happy lives 2021-03-15T01:55:27.568Z
Care and demandingness 2021-03-08T07:03:42.755Z
Subjectivism and moral authority 2021-03-01T09:02:58.739Z
Two types of deference 2021-02-22T03:32:53.561Z
Contact with reality 2021-02-15T04:53:39.739Z
Killing the ants 2021-02-07T23:17:01.938Z
Believing in things you cannot see 2021-02-01T07:26:54.082Z
On clinging 2021-01-24T23:25:36.412Z
A ghost 2021-01-21T07:14:05.298Z
Actually possible: thoughts on Utopia 2021-01-18T08:27:39.428Z
Alienation and meta-ethics (or: is it possible you should maximize helium?) 2021-01-15T07:07:25.675Z
The impact merge 2021-01-13T07:26:42.605Z
Shouldn't it matter to the victim? 2021-01-11T07:16:28.453Z
Thoughts on personal identity 2021-01-08T04:19:19.637Z
Grokking illusionism 2021-01-06T05:50:57.598Z
The despair of normative realism bot 2021-01-03T23:07:08.767Z
Thoughts on being mortal 2021-01-01T19:17:17.697Z
Wholehearted choices and “morality as taxes” 2020-12-23T02:21:36.392Z

Comments

Comment by Joe Carlsmith (joekc) on Reviews of “Is power-seeking AI an existential risk?” · 2022-01-03T05:58:04.909Z · LW · GW

Reviewers ended up on the list via different routes. A few we solicited specifically because we expected them to have relatively well-developed views that disagree with the report in one direction or another (e.g., more pessimistic, or more optimistic), and we wanted to understand the best objections in this respect. A few came from trying to get information about how generally thoughtful folks with different backgrounds react to the report. A few came from sending a note to GPI saying we were open to GPI folks providing reviews. And a few came via other miscellaneous routes. I’d definitely be interested to see more reviews from mainstream ML researchers, but understanding how ML researchers in particular react to the report wasn’t our priority here.

Comment by Joe Carlsmith (joekc) on Reviews of “Is power-seeking AI an existential risk?” · 2021-12-21T04:44:14.725Z · LW · GW

Cool, these comments helped me get more clarity about where Ben is coming from. 

Ben, I think the conception of planning I’m working with is closest to your “loose” sense. That is, roughly put, I think of planning as happening when (a) something like simulations are happening, and (b) the output is determined (in the right way) at least partly on the basis of those simulations (this definition isn’t ideal, but hopefully it’s close enough for now). Whereas it sounds like you think of (strict) planning as happening when (a) something like simulations are happening, and (c) the agent’s overall policy ends up different (and better) as a result. 

What’s the difference between (b) and (c)? One operationalization could be: if you gave an agent input 1, then let it do its simulations thing and produce an output, then gave it input 1 again, could the agent’s performance improve, on this round, in virtue of the simulation-running that it did on the first round? On my model, this isn’t necessary for planning; whereas on yours, it sounds like it is? 

Let’s say this is indeed a key distinction. If so, let’s call my version “Joe-planning” and your version “Ben-planning.” My main point re: feedforward neural networks was that they could do Joe-planning in principle, which it sounds like you think is at least conceivable. I agree that it seems tough for shallow feedforward networks to do much of Joe-planning in practice. I also grant that when humans plan, they are generally doing Ben-planning in addition to Joe-planning (e.g., they’re generally in a position to do better on a given problem in virtue of having planned about that same problem yesterday).
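
To make the operationalization above concrete, here is a toy sketch (entirely my own illustration; the world model, class names, and numbers are hypothetical): both planners run something like simulations and determine their output on that basis, per (a) and (b), but only the second folds the results back into a persistent estimate, so only it does better on a repeated input in virtue of its first-round simulations, per (c).

```python
import random
from collections import defaultdict

def noisy_rollout(state, action):
    # Toy stand-in for "something like simulations": a noisy estimate of the
    # value of taking `action` in `state`. Hypothetical, for illustration only.
    true_value = (hash((state, action)) % 100) / 100.0
    return true_value + random.gauss(0, 0.3)

class JoePlanner:
    # Satisfies (a) and (b): the output is determined by simulations run on
    # this call, but nothing carries over between calls, so re-presenting the
    # same input doesn't improve performance the second time around.
    def act(self, state, actions, n_sims=5):
        return max(
            actions,
            key=lambda a: sum(noisy_rollout(state, a) for _ in range(n_sims)) / n_sims,
        )

class BenPlanner:
    # Additionally satisfies (c): simulation results are folded back into a
    # persistent estimate, so the overall policy ends up different (and better),
    # and a repeated input is handled better in virtue of first-round simulations.
    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def act(self, state, actions, n_sims=5):
        for a in actions:
            for _ in range(n_sims):
                self.totals[(state, a)] += noisy_rollout(state, a)
                self.counts[(state, a)] += 1
        # Choose using all simulations accumulated so far, not just this call's.
        return max(actions, key=lambda a: self.totals[(state, a)] / self.counts[(state, a)])
```

Calling act twice on the same input: JoePlanner's second answer is no more reliable than its first, while BenPlanner's is, because its estimates now average twice as many rollouts.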

Seems like key questions re: the connection to AI X-risk include:

  1. Is there reason to think a given type of planning especially dangerous and/or relevant to the overall argument for AI X-risk?
  2. Should we expect that type of planning to be necessary for various types of task performance?

Re: (1), I do think Ben-planning poses dangers that Joe-planning doesn’t. Notably, Ben-planning does indeed allow a system to improve/change its policy "on its own" and without new data, whereas Joe-planning need not — and this seems more likely to yield unexpected behavior. This seems continuous, though, with the fact that a Ben-planning agent is learning/improving its capabilities in general, which I flag separately as an important risk factor.

Another answer to (1), suggested by some of your comments, could appeal to the possibility that agents are more dangerous when you can tweak a single simple parameter like “how much time they have to think” or “search depth” and thereby get better performance (this feels related to Eliezer’s worries about “turning up the intelligence dial” by “running it with larger bounds on the for-loops”). I agree that if you can just “turn up the intelligence dial,” that is quite a bit more worrying than if you can’t — but I think this is fairly orthogonal to the Joe-planning vs. Ben-planning distinction. For example, I think you can have Joe-planning agents where you can increase e.g. their search depth by tweaking a single parameter, and you can have Ben-planning agents where the parameters you’d need to tweak aren’t under your control (or the agent’s control), but rather are buried inside some tangled opaque neural network you don't understand.

The central reason I'm interested in Joe-planning, though, is that I think the instrumental convergence argument makes the most sense if Joe-planning is involved -- e.g., if the agent is running simulations that allow it to notice and respond to incentives to seek power (there are versions of the argument that don't appeal to Joe-planning, but I like these less -- see discussion in footnote 87 here). It's true that you can end up power-seeking-ish via non-Joe-planning paths (for example, if in training you developed sphex-ish heuristics that favor power-seeking-ish actions); but when I actually imagine AI systems that end up power-seeking, I imagine it happening because they noticed, in the course of modeling the world in order to achieve their goals, that power-seeking (even in ways humans wouldn't like) would help.

Can this happen without Ben-planning? I think it can. Suppose, for example, that none of your previous Joe-planning models were power-seeking. Then, you train a new Joe-planner, who can run more sophisticated simulations. On some inputs, this Joe-planner realizes that power-seeking is advantageous, and goes for it (or starts deceiving you, or whatever).

Re: (2), for the reasons discussed in section 3.1, I tend to see Joe-planning as pretty key to lots of task-performance — though I acknowledge that my intuitions are surprised by how much it looks like you can do via something more intuitively “sphexish.” And I acknowledge that some of those arguments may apply less to Ben-planning. I do think this is some comfort, since agents that learn via planning are indeed scarier. But I am separately worried that ongoing learning will be very useful/incentivized, too.

Comment by Joe Carlsmith (joekc) on Reviews of “Is power-seeking AI an existential risk?” · 2021-12-21T04:10:26.910Z · LW · GW

I’m glad you think it’s valuable, Ben — and thanks for taking the time to write such a thoughtful and detailed review. 

“I’m sympathetic to the possibility that the high level of conjunctiveness here created some amount of downward bias, even if the argument does actually have a highly conjunctive structure.”

Yes, I am too. I’m thinking about the right way to address this going forward. 

I’ll respond re: planning in the thread with Daniel.

Comment by Joe Carlsmith (joekc) on Reviews of “Is power-seeking AI an existential risk?” · 2021-12-17T19:05:05.658Z · LW · GW

(Note that my numbers re: short-horizon systems + 12 OOMs being enough, and for +12 OOMs in general, changed since an earlier version you read, to 35% and 65% respectively.)

Comment by Joe Carlsmith (joekc) on Reviews of “Is power-seeking AI an existential risk?” · 2021-12-17T06:14:18.613Z · LW · GW

Thanks for these comments.

that suggests that CrystalNights would work, provided we start from something about as smart as a chimp. And arguably OmegaStar would be about as smart as a chimp - it would very likely appear much smarter to people talking with it, at least.

"starting with something as smart as a chimp" seems to me like where a huge amount of the work is being done, and if Omega-star --> Chimp-level intelligence, it seems a lot less likely we'd need to resort to re-running evolution-type stuff. I also don't think "likely to appear smarter than a chimp to people talking with it" is a good test, given that e.g. GPT-3 (2?) would plausibly pass, and chimps can't talk. 

"Do you not have upwards of 75% credence that the GPT scaling trends will continue for the next four OOMs at least? If you don't, that is indeed a big double crux." -- Would want to talk about the trends in question (and the OOMs -- I assume you mean training FLOP OOMs, rather than params?). I do think various benchmarks are looking good, but consider e.g. the recent Gopher paper

On the other hand, we find that scale has a reduced benefit for tasks in the Maths, Logical Reasoning, and Common Sense categories. Smaller models often perform better across these categories than larger models. In the cases that they don’t, larger models often don’t result in a performance increase. Our results suggest that for certain flavours of mathematical or logical reasoning tasks, it is unlikely that scale alone will lead to performance breakthroughs. In some cases Gopher has a lower performance than smaller models -- examples of which include Abstract Algebra and Temporal Sequences from BIG-bench, and High School Mathematics from MMLU.

(Though in this particular case, re: math and logical reasoning, there are also other relevant results to consider, e.g. this and this.) 

It seems like "how likely is it that continuation of GPT scaling trends on X-benchmarks would result in APS-systems" is probably a more important crux, though?

Re: your premise 2, I had (wrongly, and too quickly) read this as claiming "if you have X% on +12 OOMs, you should have at least 1/2*X% on +6 OOMs," and log-uniformity was what jumped to mind as what might justify that claim. I have a clearer sense of what you were getting at now, and I accept something in the vicinity if you say 80% on +12 OOMs (will edit accordingly). My +12 number is lower, though, which makes it easier to have a flatter distribution that puts more than half of the +12 OOM credence above +6. 
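
To spell out, as a minimal sketch, the log-uniform arithmetic that had jumped to mind (with an illustrative support): if the number of additional training-compute OOMs required, n, were uniform over [0, N] for some N ≥ 12 (i.e., log-uniform over compute), then

\[ P(n \le 6) \;=\; \frac{6}{N} \;=\; \frac{1}{2}\cdot\frac{12}{N} \;=\; \frac{1}{2}\,P(n \le 12), \]

so credence on +6 OOMs would be exactly half the credence on +12 OOMs. A distribution that instead puts more than half of the +12 mass in the (6, 12] range gives up that relationship, which is the room my lower +12 number leaves.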

The difference between 20% and 50% on APS-AI by 2030 seems like it could well be decision-relevant to me (and important, too, if you think that risk is a lot higher in short-timelines worlds). 

Comment by Joe Carlsmith (joekc) on On the limits of idealized values · 2021-12-08T01:22:47.334Z · LW · GW

I haven't given a full account of my views of realism anywhere, but briefly, I think that the realism the realists-at-heart want is a robust non-naturalist realism, a la David Enoch, and that this view implies:

  1. an inflationary metaphysics that it just doesn't seem like we have enough evidence for,
  2. an epistemic challenge (why would we expect our normative beliefs to correlate with the non-natural normative facts?) that realists have basically no answer to except "yeah idk but maybe this is a problem for math and philosophy too?" (Enoch's chapter 7 covers this issue; I also briefly point at it in this section, in talking about why the realist bot would expect its desires and intuitions to correlate with the contents of the envelope buried in the mountain), and
  3. an appeal to a non-natural realm that a lot of realists take as necessary to capture the substance and heft of our normative lives, but which I don't think is necessary for this, at least when it comes to caring (I think moral "authority" and "bindingness regardless of what you care about" might be a different story, but one that "the non-natural realm says so" doesn't obviously help with, either). I wrote up my take on this issue here.

Also, most realists are externalists, and I think that externalist realism severs an intuitive connection between normativity and motivation that I would prefer to preserve (though this is more of an "I don't like that" than a "that's not true" objection). I wrote about this here.

There are various ways of being a "naturalist realist," too, but the disagreement between naturalist realism and anti-realism/subjectivism/nihilism is, in my opinion, centrally a semantic one. The important question is whether anything normativity-flavored is in a deep sense something over and above the standard naturalist world picture. Once we've denied that, we're basically just talking about how to use words to describe that standard naturalist world picture. I wrote a bit about how I think of this kind of dialectic here.

This is a familiar dialectic in philosophical debates about whether some domain X can be reduced to Y (meta-ethics is a salient comparison to me). The anti-reductionist (A) will argue that our core intuitions/concepts/practices related to X make clear that it cannot be reduced to Y, and that since X must exist (as we intuitively think it does), we should expand our metaphysics to include more than Y. The reductionist (R) will argue that X can in fact be reduced to Y, and that this is compatible with our intuitions/concepts/everyday practices with respect to X, and hence that X exists but it’s nothing over and above Y. The nihilist (N), by contrast, agrees with A that it follows from our intuitions/concepts/practices related to X that it cannot be reduced to Y, but agrees with R that there is in fact nothing over and above Y, and so concludes that there is no X, and that our intuitions/concepts/practices related to X are correspondingly misguided. Here, the disagreement between A vs. R/N is about whether more than Y exists; the disagreement between R vs. A/N is about whether a world of only Y “counts” as a world with X. This latter often begins to seem a matter of terminology; the substantive questions have already been settled.

There's a common strain of realism in utilitarian circles that tries to identify "goodness" with something like "valence," treats "valence" as a "phenomenal property", and then tries to appeal to our "special direct epistemic access" to phenomenal consciousness in order to solve the epistemic challenge above. I think this doesn't help at all (the basic questions about how the non-natural realm interacts with the natural one remain unanswered -- and this is a classic problem for non-physicalist theories of consciousness as well), but that it gets its appeal centrally via running through people's confusion/mystery relationship with phenomenal consciousness, which muddies the issue enough to make it seem like the move might help. I talk about issues in this vein a bit in the latter half of my podcast with Gus Docker.

Re: your list of 6 meta-ethical options, I'd be inclined to pull apart the question of 

  • (a) do any normative facts exist, and if so, which ones, vs.
  • (b) what's the empirical situation with respect to deliberation within agents and disagreement across agents (e.g., do most agents agree and if so why; how sensitive is the deliberation of a given agent to initial conditions, etc).

With respect to (a), my take is closest to 6 ("there aren't any normative facts at all") if the normative facts are construed in a non-naturalist way, and closest to "whatever, it's mostly a terminology dispute at this point" if the normative facts are construed in a naturalist way (though if we're doing the terminology dispute, I'm generally more inclined towards naturalist realism over nihilism). Facts about what's "rational" or "what decision theory wins" fall under this response as well (I talk about this a bit here).

With respect to (b), my first-pass take is "I dunno, it's an empirical question," but if I had to guess, I'd guess lots of disagreement between agents across the multiverse, and a fair amount of sensitivity to initial conditions on the part of individual deliberators.

Re: my ghost, it starts out valuing status as much as I do, but it's in a bit of a funky situation insofar as it can't get normal forms of status for itself because it's beyond society. It can, if it wants, try for some weirder form of cosmic status amongst hypothetical peers ("what they would think if they could see me now!"), or it can try to get status for the Joe that it left behind in the world, but my general feeling is that the process of stepping away from the Joe and looking at the world as a whole tends to reduce its investment in what happens to Joe in particular, e.g.:

Perhaps, at the beginning, the ghost is particularly interested in Joe-related aspects of the world. Fairly soon, though, I imagine it paying more and more attention to everything else. For while the ghost retains a deep understanding of Joe, and a certain kind of care towards him, it is viscerally obvious, from the ghost’s perspective, unmoored from Joe’s body, that Joe is just one creature among so many others; Joe’s life, Joe’s concerns, once so central and engrossing, are just one tiny, tiny part of what’s going on.

That said, insofar as the ghost is giving recommendations to me about what to do, it can definitely take into account the fact that I want status to whatever degree, and am otherwise operating in the context of social constraints, coordination mechanisms, etc. 

Comment by Joe Carlsmith (joekc) on On the limits of idealized values · 2021-12-02T07:20:51.032Z · LW · GW

In the past, I've thought of idealizing subjectivism as something like an "interim meta-ethics," in the sense that it was a meta-ethic I expected to do OK conditional on each of the three meta-ethical views discussed here, e.g.:

  1. Internalist realism (value is independent of your attitudes, but your idealized attitudes always converge on it)
  2. Externalist realism (value is independent of your attitudes, but your idealized attitudes don't always converge on it)
  3. Idealizing subjectivism (value is determined by your idealized attitudes)

The thought was that on (1), idealizing subjectivism tracks the truth. On (2), maybe you're screwed even post-idealization, but whatever idealization process you were going to do was your best shot at the truth anyway. And on (3), idealizing subjectivism is just true. So, you don't go too far wrong as an idealizing subjectivist. (Though note that we can run similar lines of argument for using internalist or externalist forms of realism as the "interim meta-ethics." The basic dynamic here is just that, regardless of what you think about (1)-(3), doing your idealization procedures is the only thing you know how to do, so you should just do it.)

I still feel some sympathy towards this, but I've also since come to view attempts at meta-ethical agnosticism of this kind as much less innocent and straightforward than this picture hopes. In particular, I feel like I see meta-ethical questions interacting with object-level moral questions, together with other aspects of philosophy, at tons of different levels (see e.g. here, here, and here for a few discussions), so it has felt correspondingly important to just be clear about which view is most likely to be true.

Beyond this, though, for the reasons discussed in this post, I've also become clearer in my skepticism that "just do your idealization procedure" is some well-defined thing that we can just take for granted. And I think that once we double-click on it, we actually get something that looks less like any of 1-3, and more like the type of active, existentialist-flavored thing I tried to point at in Sections X and XI.

Re: functional roles of morality, one thing I'll flag here is that in my view, the most fundamental meta-ethical questions aren't about morality per se, but rather are about practical normativity more generally (though in practice, many people seem most pushed towards realism by moral questions in particular, perhaps due to the types of "bindingness" intuitions I try to point at here -- intuitions that I don't actually think realism on its own helps with).

Should you think of your idealized self as existing in a context where morality still plays these (and other) functional roles? As with everything about your idealization procedure, on my picture it's ultimately up to you. Personally, I tend to start by thinking about individual ghost versions of myself who can see what things are like in lots of different counterfactual situations (including, e.g., situations where morality plays different functional roles, or in which I am raised differently), but who are in some sense "outside of society," and who therefore aren't doing much in the way of direct signaling, group coordination, etc. That said, these ghost-version selves start with my current values, which have indeed resulted from my being raised in environments where morality is playing roles of the kind you mentioned.

Comment by Joe Carlsmith (joekc) on SIA > SSA, part 1: Learning from the fact that you exist · 2021-10-01T09:49:44.256Z · LW · GW

Glad you liked it :). I haven’t spent much time engaging with UDASSA — or with a lot of other non-SIA/SSA anthropic theories — at this point, but UDASSA in particular is on my list to understand better. Here I wanted to start with the first-pass basics.

Comment by Joe Carlsmith (joekc) on Can you control the past? · 2021-09-21T17:27:11.787Z · LW · GW

Yes, edited :)

Comment by Joe Carlsmith (joekc) on The Adventure: a new Utopia story · 2021-09-17T23:20:11.352Z · LW · GW

I appreciated this, especially given how challenging this type of exercise can be. Thanks for writing.

Comment by Joe Carlsmith (joekc) on Distinguishing AI takeover scenarios · 2021-09-13T17:53:24.238Z · LW · GW

Rohin is correct. In general, I meant for the report's analysis to apply to basically all of these situations (e.g., both inner- and outer-misaligned, both multi-polar and unipolar, both fast take-off and slow take-off), provided that the misaligned AI systems in question ultimately end up power-seeking, and that this power-seeking leads to existential catastrophe.

It's true, though, that some of my discussion was specifically meant to address the idea that absent a brain-in-a-box-like scenario, we're fine. Hence the interest in e.g. deployment decisions, warning shots, and corrective mechanisms.

Comment by Joe Carlsmith (joekc) on Can you control the past? · 2021-08-28T08:29:41.521Z · LW · GW

Thanks!

Comment by Joe Carlsmith (joekc) on MIRI/OP exchange about decision theory · 2021-08-27T20:36:21.881Z · LW · GW

Mostly personal interest on my part (I was working on a blog post on the topic, now up), though I do think that the topic has broader relevance.

Comment by Joe Carlsmith (joekc) on Thoughts on being mortal · 2021-08-05T07:59:57.304Z · LW · GW

I think this could've been clearer: it's been a bit since I wrote this/read the book, but I don't think I meant to imply that "some forms of hospice do prolong life at extreme costs to its quality" (though the sentence does read that way); more that some forms of medical treatment prolong life at extreme cost to its quality, and Gawande discusses hospice as an alternative.

Comment by Joe Carlsmith (joekc) on Actually possible: thoughts on Utopia · 2021-07-31T01:55:31.884Z · LW · GW

Glad to hear it :)

Comment by Joe Carlsmith (joekc) on On the limits of idealized values · 2021-06-24T07:09:20.732Z · LW · GW

I agree that there are other meta-ethical options, including ones that focus more on groups, cultures, agents in general, and so on, rather than individual agents (an earlier draft had a brief reference to this). And I think it's possible that some of these are in a better position to make sense of certain morality-related things, especially obligation-flavored ones, than the individually-focused subjectivism considered here (I gesture a little at something in this vicinity at the end of this post). I wanted a narrower focus in this post, though.

Comment by Joe Carlsmith (joekc) on On the limits of idealized values · 2021-06-24T07:00:25.706Z · LW · GW

Thanks :). I didn't mean for the ghost section to imply that the ghost civilization solves the problems discussed in the rest of the post re: e.g. divergence, meta-divergence, and so forth. Rather, the point was that taking responsibility for making the decision yourself (this feels closely related to "making peace with your own agency"), in consultation with/deference towards whatever ghost civilizations etc you want, changes the picture relative to e.g. requiring that there be some particular set of ghosts that already defines the right answer.

Comment by Joe Carlsmith (joekc) on On the limits of idealized values · 2021-06-24T06:51:32.295Z · LW · GW

Glad you liked it, and thanks for sharing the Bakker piece -- I found it evocative.

Comment by Joe Carlsmith (joekc) on On the limits of idealized values · 2021-06-24T06:49:24.850Z · LW · GW

I agree that it's a useful heuristic, and the "baby steps" idealization you describe seems to me like a reasonable version to have in mind and to defer to over ourselves (including re: how to continue idealizing). I also appreciate that your 2012 post actually went through and sketched a process in that amount of depth/specificity.

Comment by Joe Carlsmith (joekc) on Draft report on existential risk from power-seeking AI · 2021-05-07T17:59:03.472Z · LW · GW

Hi Koen, 

Glad to hear you liked section 4.3.3. And thanks for pointing to these posts -- I certainly haven't reviewed all the literature, here, so there may well be reasons for optimism that aren't sufficiently salient to me.

Re: black boxes, I do think that black-box systems that emerge from some kind of evolution/search process are more dangerous; but as I discuss in 4.4.1, I also think that the bare fact that the systems are much more cognitively sophisticated than humans creates significant and safety-relevant barriers to understanding, even if the system has been designed/mechanistically understood at a different level.

Re: “there is a whole body of work which shows that evolved systems are often power-seeking” -- anything in particular you have in mind here?

Comment by Joe Carlsmith (joekc) on Draft report on existential risk from power-seeking AI · 2021-05-07T17:52:24.058Z · LW · GW

Hi Daniel, 

Thanks for taking the time to clarify. 

One other factor for me, beyond those you quote, is the “absolute” difficulty of ensuring practical PS-alignment, e.g. (from my discussion of premise 3):

Part of this uncertainty has to do with the “absolute” difficulty of achieving practical PS-alignment, granted that you can build APS systems at all. A system’s practical PS-alignment depends on the specific interaction between a number of variables -- notably, its capabilities (which could themselves be controlled/limited in various ways), its objectives (including the time horizon of the objectives in question), and the circumstances it will in fact be exposed to (circumstances that could involve various physical constraints, monitoring mechanisms, and incentives, bolstered in power by difficult-to-anticipate future technology, including AI technology). I expect problems with proxies and search to make controlling objectives harder; and I expect barriers to understanding (along with adversarial dynamics, if they arise pre-deployment) to exacerbate difficulties more generally; but even so, it also seems possible to me that it won’t be “that hard” (by the time we can build APS systems at all) to eliminate many tendencies towards misaligned power-seeking (for example, it seems plausible to me that selecting very strongly against (observable) misaligned power-seeking during training goes a long way), conditional on retaining realistic levels of control over a system’s post-deployment capabilities and circumstances (though how often one can retain this control is a further question).

My sense is that relative to you, I am (a) less convinced that ensuring practical PS-alignment will be “hard” in this absolute sense, once you can build APS systems at all (my sense is that our conceptions of what it takes to “solve the alignment problem” might be different), (b) less convinced that practically PS-misaligned systems will be attractive to deploy despite their PS-misalignment (whether because of deception, or for other reasons), (c) less convinced that APS systems becoming possible/incentivized by 2035 implies “fast take-off” (it sounds like you’re partly thinking: those are worlds where something like the scaling hypothesis holds, and so you can just keep scaling up; but I don’t think the scaling hypothesis holding to an extent that makes some APS systems possible/financially feasible implies that you can just scale up quickly to systems that can perform at strongly superhuman levels on e.g. ~any task, whatever the time horizons, data requirements, etc), and (d) more optimistic about something-like-present-day-humanity’s ability to avoid/prevent failures at a scale that disempowers ~all of humanity (though I do think Covid, and its politicization, is an instructive example in this respect), especially given warning shots (and my guess is that we do get warning shots both before and after 2035, even if APS systems become possible/financially feasible before then).

Re: nuclear winter, as I understand it, you’re reading me as saying: “in general, if a possible and incentivized technology is dangerous, there will be warning shots of the dangers; humans (perhaps reacting to those warning shots) won’t deploy at a level that risks the permanent extinction/disempowerment of ~all humans; and if they start to move towards such disempowerment/extinction, they’ll take active steps to pull back.” And your argument is: “if you get to less than 10% doom on this basis, you’re going to give too low probabilities on scenarios like nuclear winter in the 20th century.” 

I don’t think of myself as leaning heavily on an argument at that level of generality (though maybe there’s a bit of that). For example, that statement feels like it’s missing the “maybe ensuring practical PS-alignment just isn’t that hard, especially relative to building practically PS-misaligned systems that are at least superficially attractive to deploy” element of my own picture. And more generally, I expect to say different things about e.g. biorisk, climate change, nanotech, etc, depending on the specifics, even if generic considerations like “humans will try not to all die” apply to each.

Re: nuclear winter in particular, I’d want to think a bit more about what sort of probability I’d put on nuclear winter in the 20th century (one thing your own analysis skips is the probability that a large nuclear conflict injects enough smoke into the stratosphere to actually cause nuclear winter, which I don’t see as guaranteed -- and we’d need to specify what level of cooling counts). And nuclear winter on its own doesn’t include a “scaling to the permanent disempowerment/extinction of ~all of humanity” step -- a step that, FWIW, I see as highly questionable in the nuclear winter case, and which is important to my own probability on AI doom (see premise 5). And there are various other salient differences: for example, mutually assured destruction seems like a distinctly dangerous type of dynamic, which doesn’t apply to various AI deployment scenarios; nuclear weapons have widespread destruction as their explicit function, whereas most AI systems won’t; and so on. That said, I think comparisons in this vein could still be helpful; and I’m sympathetic to points in the vein of “looking at the history of e.g. climate, nuclear risk, BSL-4 accidents, etc the probability that humans will deploy technology that risks global catastrophe, and not stop doing so even after getting significant evidence about the risks at stake, can’t be that low” (I talk about this a bit in 4.4.3 and 6.2).

Comment by Joe Carlsmith (joekc) on Draft report on existential risk from power-seeking AI · 2021-05-01T00:58:04.050Z · LW · GW

Thanks for reading, and for your comments on the doc. I replied to specific comments there, but at a high level: the formal work you’ve been doing on this does seem helpful and relevant (thanks for doing it!). And other convergent phenomena seem like helpful analogs to have in mind.

Comment by Joe Carlsmith (joekc) on Draft report on existential risk from power-seeking AI · 2021-05-01T00:35:51.756Z · LW · GW

Glad to hear it, Steven. Thanks for reading, and for taking the time to write up your own threat model.

Comment by Joe Carlsmith (joekc) on Draft report on existential risk from power-seeking AI · 2021-05-01T00:34:14.031Z · LW · GW

Thanks, this seems like a salient type of consideration, and one that isn’t captured very explicitly in the current list (though I think it may play a role in explaining the bullet point about humans with general skill-sets being in-demand).

Comment by Joe Carlsmith (joekc) on Draft report on existential risk from power-seeking AI · 2021-05-01T00:33:16.058Z · LW · GW

Hi Daniel, 

Thanks for reading. I think estimating p(doom) by different dates (and in different take-off scenarios) can be a helpful consistency check, but I disagree with your particular “sanity check” here -- and in particular, premise (2). That is, I don’t think that conditional on APS-systems becoming possible/financially feasible by 2035, it’s clear that we should have at least 50% on doom (perhaps some of the disagreement here is about what it takes for the problem to be "real," and to get "solved"?). Nor do I see 10% on “Conditional on it being both possible and strongly incentivized to build APS systems, APS systems will end up disempowering approximately all of humanity” as obviously overconfident (though I do take some objections in this vein seriously). I’m not sure exactly what “10% on nuclear war” analog argument you have in mind: would you be able to sketch it out, even if hazily?

Comment by Joe Carlsmith (joekc) on Clarifying inner alignment terminology · 2021-02-19T21:33:00.566Z · LW · GW

Cool (though FWIW, if you're going to lean on the notion of policies being aligned with humans, I'd be inclined to define that as well, in addition to defining what it is for agents to be aligned with humans. But maybe the implied definition is clear enough: I'm assuming you have in mind something like "a policy is aligned with humans if an agent implementing that policy is aligned with humans."). 

Regardless, sounds like your definition is pretty similar to: "An agent is intent aligned if its behavioral objective is such that an arbitrarily powerful and competent agent pursuing this objective to arbitrary extremes wouldn't act in ways that humans judge bad"? If you see it as importantly different from this, I'd be curious.

Comment by Joe Carlsmith (joekc) on Clarifying inner alignment terminology · 2021-02-19T18:43:57.003Z · LW · GW

Aren't they now defined in terms of each other? 

"Intent alignment: An agent is intent aligned if its behavioral objective is outer aligned.

Outer alignment: An objective function is outer aligned if all models that perform optimally on it in the limit of perfect training and infinite data are intent aligned."

Comment by Joe Carlsmith (joekc) on Clarifying inner alignment terminology · 2021-02-19T07:48:27.938Z · LW · GW

Thanks for writing this up. Quick question re: "Intent alignment: An agent is intent aligned if its behavioral objective is aligned with humans." What does it mean for an objective to be aligned with humans, on your view? You define what it is for an agent to be aligned with humans, e.g.: "An agent is aligned (with humans) if it doesn't take actions that we would judge to be bad/problematic/dangerous/catastrophic." But you don't say explicitly what it is for an objective to be aligned: I'm curious if you have a preferred formulation.

Is it something like: “the behavioral objective is such that, when the agent does ‘well’ on this objective, the agent doesn’t act in a way we would view as bad/problematic/dangerous/catastrophic”? If so, it seems like a lot might depend on exactly how “well” the agent does, and what opportunities it has in a given context. That is, an “aligned” agent might not stay aligned if it becomes more powerful, but continues optimizing for the same objective (for example, a weak robot optimizing for beating me at chess might be "aligned" because it only focuses on making good chess moves, but a stronger one might not be, because it figures out how to drug my tea). Is that an implication you’d endorse?

Or is the thought something like: "the behavioral objective is such that, no matter how powerfully the agent optimizes for it, and no matter its opportunities for action, it doesn't take actions we would view as bad/problematic/dangerous/catastrophic"? My sense is that something like this is often the idea people have in mind, especially in the context of anticipating things like intelligence explosions. If this is what you have in mind, though, maybe worth saying so explicitly, since intent alignment in this sense seems like a different constraint than intent alignment in the sense of e.g. "the agent's pursuit of its behavioral objective does not in fact give rise to bad actions, given the abilities/contexts/constraints that will in fact be relevant to its behavior."

Comment by Joe Carlsmith (joekc) on On clinging · 2021-01-28T08:47:43.580Z · LW · GW

Interesting; I hadn't really considered that angle. Seems like this could also apply to other mental phenomena that might seem self-recommending (pleasure? rationality?), but which plausibly have other, more generally adaptive functions as well, so I would continue to wonder about other functions regardless.

Comment by Joe Carlsmith (joekc) on Grokking illusionism · 2021-01-26T07:41:43.626Z · LW · GW

I meant mental states in something more like the #1 sense -- and so, I think, does Frankish.

Comment by Joe Carlsmith (joekc) on Grokking illusionism · 2021-01-26T07:27:11.934Z · LW · GW

My sense is that the possibility of dynamics of this kind would be on people's radar in the philosophy community, at least.

Comment by Joe Carlsmith (joekc) on On clinging · 2021-01-25T07:42:56.433Z · LW · GW

Thanks :). I do think clinging often functions as an unnoticed lens on the world; though noticing it, in my experience, is also quite distinct from it "releasing." I also would've thought that depression can be an unnoticed (or at least, unquestioned) lens as well: e.g., a depressed person who is convinced that everything in the world is bad, that they'll never feel better again, etc.

Comment by Joe Carlsmith (joekc) on The impact merge · 2021-01-14T07:18:03.069Z · LW · GW

Glad to hear you found it useful.

Comment by Joe Carlsmith (joekc) on The impact merge · 2021-01-14T07:04:54.870Z · LW · GW

Thanks :) Re blog name: it isn't: "Hands" comes from a Martin Buber quote, and "Cities" from a phrase I believe I heard from A.J. Julius. I chose them partly as a personal reminder about the blog's aims.

Comment by Joe Carlsmith (joekc) on The impact merge · 2021-01-14T06:51:06.886Z · LW · GW

That's the one :)

Comment by Joe Carlsmith (joekc) on Grokking illusionism · 2021-01-10T05:59:20.598Z · LW · GW

I do remember that conversation, though I'm a bit hazy on the details of the argument you presented. Let me know if there's a write-up/summary somewhere, or if you create one in future. 

Comment by Joe Carlsmith (joekc) on Grokking illusionism · 2021-01-10T05:57:11.688Z · LW · GW

Thanks for explaining where you're coming from. 

Yet I experience that computation as the qualia of "blueness." How can that be? How can any computation of any kind create, or lead to qualia of any kind? You can say that it is just a story my brain is telling me that "I am seeing blue." I must not understand what is being claimed, because I agree with it and yet it doesn't remove the problem at all. Why does that story have any phenomenology to it? I can make no sense of the claim that it is an illusion.

As I understand it, the idea would be that, as weird as it may sound, there isn't any phenomenology to it. Rather: according to the story that your brain is telling, there is some phenomenology to it. But there isn't. That is, your brain's story doesn't create, lead to, or correlate with phenomenal blueness; rather, phenomenal blueness is something that the story describes, but which doesn't exist, in the same way that a story can describe unicorns without bringing them to life. 

Comment by Joe Carlsmith (joekc) on Grokking illusionism · 2021-01-07T08:35:07.392Z · LW · GW

I’m hopeful that if we actually had a worked out reductionist account of all the problematic intuitions, which we knew was right and which made illusionism true, then this would be at least somewhat helpful in making illusionism less mysterious. In particular, I’m hopeful that thoroughly and dutifully reconceptualizing our introspection and intuitions according to that theory — “when it seems to me like X, what’s going on is [insert actual gears level explanation, not just ‘neurons are firing’ or ‘my brain is representing its internal processing in a simplified and false way’]” — would make a difference.

Comment by Joe Carlsmith (joekc) on Grokking illusionism · 2021-01-07T08:25:28.860Z · LW · GW

Glad you found it helpful (or at least, as helpful as other work on the topic). So far in my engagement with Graziano (specifically, non-careful reads of his 2013 book and his 2019 “Toward a standard model of consciousness”), I don’t feel like I’ve taken away much more than the summary I gave above of Frankish’s view: namely, “introspective mechanisms ... track the processes involved in access consciousness and represent them using a simplified model” — something pretty similar to what Chalmers also says here on p. 34. I know Graziano focuses on attention in particular, and he talks more about e.g. sociality and cites some empirical work, but at a shallow glance I’m not sure I yet see really substantive and empirically grounded increases in specificity, beyond what seems like the general line amongst a variety of folks that “there’s some kind of global workspace-y thing, there’s some kind of modeling of that, this modeling involves simplifications/distortions/opacity of various kinds, these somehow explain whatever problem intuitions/reports need explaining." But I haven’t tried to look at Graziano closely. The “naive” vs. “sophisticated” descriptions in your blog post seem like a helpful way to frame his project.