Who are some prominent reasonable people who are confident that AI won't kill everyone?

post by Optimization Process · 2022-12-05T09:12:41.797Z · LW · GW · 2 comments

This is a question post.

Contents

  Answers
    14 Algon
    14 Quintin Pope
    9 LawrenceC
    6 Zolmeister
    5 Daniel Kokotajlo
    4 Lao Mein
    3 nz
    3 Adam Shai
    2 Leonieee
    2 teradimich
    1 Matt Goldenberg
    1 Stuckwork
    1 osten
    0 Arthur Conmy
2 comments

Bounty [closed]: $30 for each link that leads to me reading/hearing ~500 words from a Respectable Person arguing, roughly, "accelerating AI capabilities isn't bad," and me subsequently thinking "yeah, that seemed pretty reasonable." For example, linking me to nostalgebraist [LW · GW] or OpenAI's alignment agenda or this debate [LW · GW].[1] Total bounty capped at $600, first come first served. All bounties (incl. the total-bounty cap) doubled if, by Jan 1, I can consistently read people expressing unconcern about AI and not notice a status-yuck reaction.

Context: I notice that I've internalized a message like "thinking that AI has a <1% chance of killing everyone is stupid and low-status." Because I am a monkey, this damages my ability to consider the possibility that AI has a <1% chance of killing everyone, which is a bummer, because my beliefs on that topic affect things like whether I continue to work at my job accelerating AI capabilities.[2]

I would like to be able to consider that possibility rationally, and that requires neutralizing my status-yuck reaction. One promising-seeming approach is to spend a lot of time looking at lots of high-status monkeys who believe it!

  1. ^

    Bounty excludes things I've already seen, and things I would have found myself based on previous recommendations for which I paid bounties (for example, other posts by the same author on the same web site). 

  2. ^

    Lest ye worry that [providing links to good arguments] will lead to [me happily burying my head in the sand and continuing to hasten the apocalypse] -- a lack of links to good arguments would move much more of my probability-mass to "Less Wrong is an echo chamber" than to "there are basically no reasonable people who think advancing AI capabilities is good."

Answers

answer by Algon · 2022-12-05T12:48:51.502Z · LW(p) · GW(p)

Hanson is the most obvious answer, to me.

EDIT: Note, I don't think these people have given explicit probabilities. But they seem much less worried than people from the AI alignment community.
EDIT^2: Also, only the links to Hanson's and Jacob's stuff have detail comparable to what you requested.

Bryan Caplan is one. Tyler Cowen too, if you take seriously his claim that nuclear war is by far the greater large-scale risk and assign standard numbers to that. I think David Friedman might agree, though I'll get back to you on that. Geoffrey Hinton seems more worried about autonomous machines than about AI taking over. He thinks deep learning will be enough, but that quite a few more conceptual breakthroughs on the order of transformers will be needed.

Maybe Jacob Cannell [LW · GW]? He seems quite optimistic [LW · GW] that alignment is on track to be solved. Though I doubt his P(doom) is less than 1%.

comment by Douglas_Knight · 2022-12-06T18:24:19.852Z · LW(p) · GW(p)

Strong disagree. Hanson believes that there's more than a 1% chance of AI destroying all value. 

Even if he didn't see an inside view argument, he makes an outside view argument about the Great Filter.

He probably believes that there's a much larger chance of it killing everyone, and his important disagreement with Yudkowsky is that he thinks it will have value in itself, rather than be a paperclip maximizer. In particular, in the Em scenario, he argues that property rights will keep humans alive for 2 years. Maybe you should read that as <1% of all humans being killed in that first phase, but at some point the Ems evolve into something truly alien and he stops predicting that they don't kill everyone. But that's OK because he values the descendants.

Replies from: MakoYass, Algon
comment by mako yass (MakoYass) · 2022-12-25T22:43:53.833Z · LW(p) · GW(p)

Also note that iirc he only assigns about 10% to the EM scenario happening in general? At least, as of the writing of the book. I get the impression he just thinks about it a lot because it is the scenario that he, a human economist, can think about.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2023-01-04T19:42:23.385Z · LW(p) · GW(p)

I have not read the book, but my memory is that in a blog post he said that the probability is "at least" 10%. I think he holds a much higher number, but doesn't want to speak about it and just wants to insist that his hostile reader should accept at least 10%. In particular, if people say "no it won't happen, 10%," then that's not a rebuttal at all. But maybe I'm confusing that with other numbers, eg, here where he says that it's worth talking about even if it is only 1%.

Here he reports old numbers and new:

In Age of Em, I said:

Conditional on my key assumptions, I expect at least 30 percent of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 5 percent.

I now estimate an unconditional 80% chance of it being a useful guide,

I think that means he previously put 15% on ems in general and 5% on his em scenario (ie, you were right).
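(Rough arithmetic behind that read, treating the two "at least" figures as point estimates and assuming the analysis is only useful if the key em assumptions hold -- both assumptions on my part:

P(analysis useful) = P(useful | em assumptions) × P(em assumptions)
0.05 ≈ 0.30 × P(em assumptions), so P(em assumptions) ≈ 0.17

which is roughly the ~15% "ems in general" figure.)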

80% on the specific scenario leaves little room for AI, let alone AI destroying all value. So maybe he now puts that <1%. But maybe he has just removed non-em non-AI scenarios. In particular, you have to put a lot of weight on completely unanticipated scenarios; perhaps that has gone from 80% to 10%.

Replies from: MakoYass
comment by mako yass (MakoYass) · 2023-01-06T07:52:13.201Z · LW(p) · GW(p)

I'd expect his "useful guide" claim to be compatible with worlds that're entirely AGIs? He seems to think they'll be subject to the same sorts of dynamics as humans, coordination problems and all that. I'm not convinced, but he seems quite confident.

(personally I think some coordination problems and legibility issues will always persist, but they'd be relatively unimportant, and focusing on them won't tell us much about the overall shape of AGI societies.)

comment by Algon · 2022-12-06T18:46:16.697Z · LW(p) · GW(p)

OK, fair. I didn't actually read the post in detail. There's a good chance Hanson assigns >1% chance of AI killing everyone, if you don't include EMs. But two points:

1) Hanson's view of EMs results in vast numbers of very human-like minds continuing to exist for a long subjective period of time. That's not really an x-risk, though Hanson does think it plausible that biological humans may suffer greatly in the transition. He doesn't give a detailed picture of what happens after, besides some stuff like colonizing the sun etc. Yet, there could still be humans hanging around in the Age of Em. To me, Age of Em paints a picture that makes OP's question seem kind of poorly phrased. Like, if someone believed solving alignment would result in all humans being uploaded, then gradually becoming transhuman entities, would that qualify as a >1% chance of human extinction? I think most here would say no.

2) Working on capabilities doesn't seem to be nearly as big an issue in a Hansonian worldview as it would be in e.g. Yudkowsky's, or even Christiano's. So I feel like pointing out Hanson would still be worthwhile, especially as he's a person who engaged heavily with the early AI alignment people. 

Replies from: Douglas_Knight
comment by Douglas_Knight · 2022-12-06T20:10:42.038Z · LW(p) · GW(p)

I claim that Hanson puts a >1% chance on Yudkowsky's scenario, where AI comes first and destroys all value, and also a >1% chance on Ems coming first followed by a scenario that a lot of people would say killed all people, including the Ems. This is not directly relevant to the question about AI, but it suggests that he is sanguine about analogous AI scenarios, soft-takeoff scenarios not covered by Yudkowsky.

Yes, during the 2 years of wallclock time, the Ems exist for 1000 subjective years. Is that so long? This is not "longtermism." Yes, you should probably count the Ems as humans, so if they kill all the biological humans, they don't "kill everyone," but after this period they are outcompeted by something more alien. Does this count as killing everyone?

Working on capabilities isn't a problem in his mainline, but the question was not about the mainline, but about tail events. If Ems are going to come first, then you could punt alignment to their millennium of work. But if it's not guaranteed who comes first and AI is worse than Ems, working on AI could cause it to come first. Or maybe not. Maybe one is so much easier than the other that nothing is decision-relevant.

Yes, Hanson sees value drift as inevitable. The Ems will be outcompeted by something better adapted that we should see some value in. He thinks it's parochial to dislike the Ems evolving under Malthusian pressures. Maybe, but it's important not to confuse the factual questions with the moral questions. "It's OK because there's no risk of X" is different from "X is OK, actually." Yes, he talks about the Dreamtime. Part of that is the delusion that we can steer the future more than Malthusian forces will allow. But part of it is that because we are not yet under strict competition, we have excess resources that we can use to steer the future, if only a little.

Replies from: Algon
comment by Algon · 2022-12-06T21:43:26.716Z · LW(p) · GW(p)

I think this is a good summary of Hanson's views, and your answer is correct as pertains to the question that was actually asked. That said, I think Hanson counts as a skeptic about the need for more AI-safety researchers on the margin. And I think he'd be skeptical of the marginal person claiming a large impact via working on AI capabilities, relative to most counterfactuals. I am not sure if we disagree there, but I'm going to tap out anyway.

comment by Eli Tyre (elityre) · 2022-12-05T14:49:08.160Z · LW(p) · GW(p)

Here, for instance.

Replies from: Algon
comment by Algon · 2022-12-05T16:00:26.856Z · LW(p) · GW(p)

I think this is his latest comment, but it is on FOOM. Hanson's opinion is that, on the margin, the current number of people working on AI safety seems adequate. Why? Because there's not much useful work you can do without access to advanced AI, and he thinks the latter is a long time in coming. Again, why? Hanson thinks that FOOM is the main reason to worry about AI risk. He prefers an outside view for predicting technologies on which we have little empirical information, and so believes FOOM is unlikely, because he thinks progress historically doesn't come in huge chunks but gradually. You might question the speed of progress, if not its lumpiness, as deep learning seems to pump out advance after advance. Hanson argues that people are estimating progress poorly and that talk of deep learning is overblown.

What would it take to get Hanson to sit up and pay more attention to AI? AI self-monologue used to guide and improve its ability to perform useful tasks.

One thing I didn't manage to fit in here is that I feel like another crux for Hanson would be how the brain works. If the brain tackles most useful tasks using a simple learning algorithm, as Steve Byrnes argues, instead of a grab bag of specialized modules with distinct algorithms for each of them, then I think that would be a big update. But that is mostly my impression, and I can't find the sources I used to generate it.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2022-12-06T18:11:33.141Z · LW(p) · GW(p)

I’ve argued at length (1 2 3 4 5 6 7) against the plausibility of this scenario. It’s not that it’s impossible, or that no one should work on it, but that far too many take it as a default future scenario.

That sounds like a lot more than 1% chance.

Replies from: Algon
comment by Algon · 2022-12-06T18:23:58.815Z · LW(p) · GW(p)

Yeah, I think he assigns ~5% chance to FOOM, if I had to make a tentative guess. 10% seems too high to me. In general, my first impression as to Hanson's credences on a topic won't be accurate unless I really scrutinize his claims. So it's not weird to me that someone might wind up thinking Hanson believes there's a <1% chance of AI x-risk.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2022-12-06T18:32:37.529Z · LW(p) · GW(p)

Do you mean hard takeoff, or Yudkowsky's worry that foom causes rapid value drift and destroys all value? I think Hanson puts maybe 5% on that and a much larger number on hard takeoff, 10 or 20%.

Replies from: Algon
comment by Algon · 2022-12-06T18:49:39.286Z · LW(p) · GW(p)

Really? My impression was the opposite. He's said stuff to the effect of "there's nothing you can do to prevent value drift", and seems to think that whether we create EMs or not, our successors will hold values quite different from our own. See all the stuff about the current era being a dreamtime, the values of grabby aliens, etc.

answer by Quintin Pope · 2022-12-05T10:08:06.711Z · LW(p) · GW(p)

If you're willing to relax the "prominent" part of "prominent reasonable people", I'd suggest myself. I think our odds of doom are < 5%, and I think that pretty much all the standard arguments for doom are wrong. I've written specifically about why I think the "evolution failed to align humans to inclusive genetic fitness" argument for doom via inner misalignment is wrong here: Evolution is a bad analogy for AGI: inner alignment [LW · GW].

I'm also a co-author on The Shard Theory of Human Values [? · GW] sequence, which takes a more optimistic perspective than many other alignment-related memetic clusters, and disagrees with lots of past alignment thinking. Though last I checked, I was one of the most optimistic of the Shard theory authors, with Nora Belrose as a possible exception.

comment by nz · 2022-12-06T09:48:59.966Z · LW(p) · GW(p)

+1 for Quintin. I would also suggest this comment here [LW(p) · GW(p)].

Replies from: Optimization Process
comment by Optimization Process · 2022-12-25T23:38:45.463Z · LW(p) · GW(p)

I paid a bounty for the Shard Theory link, but this particular comment... doesn't do it for me. It's not that I think it's ill-reasoned, but it doesn't trigger my "well-reasoned argument" sensor -- it's too... speculative? Something about it just misses me, in a way that I'm having trouble identifying. Sorry!

comment by Optimization Process · 2022-12-25T23:34:35.314Z · LW(p) · GW(p)

Yeah, I'll pay a bounty for that!

answer by LawrenceC · 2022-12-06T00:01:39.437Z · LW(p) · GW(p)

Jan Leike of OpenAI:

https://aligned.substack.com/p/alignment-optimism?publication_id=328633

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-12-06T05:27:17.560Z · LW(p) · GW(p)

I'm not sure Jan would endorse "accelerating capabilities isn't bad." Also I doubt Jan is confident AI won't kill everyone. I can't speak for him of course, maybe he'll show up & clarify.

Replies from: Optimization Process, LawChan
comment by Optimization Process · 2022-12-06T06:26:05.892Z · LW(p) · GW(p)

Hmm! Yeah, I guess this doesn't match the letter of the specification. I'm going to pay out anyway, though, because it matches the "high-status monkey" and "well-reasoned" criteria so well and it at least has the right vibes, which are, regrettably, kind of what I'm after.

comment by LawrenceC (LawChan) · 2022-12-06T05:44:55.209Z · LW(p) · GW(p)

Ah, my bad then.

comment by Optimization Process · 2022-12-06T06:20:35.774Z · LW(p) · GW(p)

Nice. I haven't read all of this yet, but I'll pay out based on the first 1.5 sections alone.

answer by Zolmeister · 2022-12-05T15:46:23.535Z · LW(p) · GW(p)

John Carmack

  • 55-60% chance there will be "signs of life" in 2030 (4:06:20)
  • "When we've got our learning disabled toddler, we should really start talking about the safety and ethics issues, but probably not before then" (4:35:36)
  • These things will take thousands of GPUs, and will be data-center bound
    • "The fast takeoff ones are clearly nonsense because you just can't open TCP connections above a certain rate" (4:36:40)

Broadly, he predicts AGI to be animalistic ("learning disabled toddler"), rather than a consequentialist laser beam [LW · GW], or simulator [LW · GW].

comment by Optimization Process · 2022-12-06T06:12:57.901Z · LW(p) · GW(p)

Approved! Will pay bounty.

answer by Daniel Kokotajlo · 2022-12-06T05:32:58.554Z · LW(p) · GW(p)

Hmmm...

Ben Garfinkel?  https://www.effectivealtruism.org/articles/ea-global-2018-how-sure-are-we-about-this-ai-stuff
Katja Grace? https://worldspiritsockpuppet.com/2022/10/14/ai_counterargs.html
Scott Aaronson? https://www.lesswrong.com/posts/Zqk4FFif93gvquAnY/scott-aaronson-on-reform-ai-alignment

I don't know if any of these people would be confident AI won't kill everyone, but they definitely seem smart/reasonable and they disagree with the standard LW views.

comment by Elias Schmied (EliasSchmied) · 2022-12-07T18:16:02.381Z · LW(p) · GW(p)

Katja Grace's p(doom) is 8% IIRC

comment by Optimization Process · 2022-12-26T03:02:03.550Z · LW(p) · GW(p)

Thanks for the links!

  • Ben Garfinkel: sure, I'll pay out for this!
  • Katja Grace: good stuff, but previously claimed [LW(p) · GW(p)] by Lao Mein.
  • Scott Aaronson: I read this as a statement of conclusions, rather than an argument.
answer by Lao Mein · 2022-12-05T10:00:42.532Z · LW(p) · GW(p)

https://www.rudikershaw.com/articles/ai-doom-isnt-coming

https://idlewords.com/talks/superintelligence.htm

https://qntm.org/ai

https://kk.org/thetechnium/the-myth-of-a-superhuman-ai/

https://arxiv.org/abs/1702.08495v1

jetpress.org/v26.1/agar.htm

https://curi.us/blog/post/1336-the-only-thing-that-might-create-unfriendly-ai

https://www.popsci.com/robot-uprising-enlightenment-now

This one's tongue-in-cheek:

https://arxiv.org/abs/1703.10987

Update 1:

https://www.theregister.com/2015/03/19/andrew_ng_baidu_ai/

Update 2:

Katja Grace gives quite good counterarguments about AI risk.

https://www.lesswrong.com/posts/LDRQ5Zfqwi8GjzPYG/counterarguments-to-the-basic-ai-x-risk-case [LW · GW]

comment by Optimization Process · 2022-12-07T08:39:47.317Z · LW(p) · GW(p)

Thanks for the links! Net bounty: $30. Sorry! Nearly all of them fail my admittedly-extremely-subjective "I subsequently think 'yeah, that seemed well-reasoned'" criterion.

It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest / as a costly signal of having engaged, I'll publicly post my reasoning on each. (Not posting in order to argue, but if you do convince me that I unfairly dismissed any of them, such that I should have originally awarded a bounty, I'll pay triple.)

(Re-reading this, I notice that my "reasons things didn't seem well-reasoned" tend to look like counterarguments, which isn't always the core of it -- it is sometimes, sadly, vibes-based. And, of course, I don't think that if I have a counterargument then something isn't well-reasoned -- the counterarguments I list just feel so obvious that their omission feels glaring. Admittedly, it's hard to tell what was obvious to me before I got into the AI-risk scene. But so it goes.)

In the order I read them:

https://qntm.org/ai

No bounty: I didn't wind up thinking this was well-reasoned. (Reasoning: (a) I read this as either disproving humans or dismissing their intelligence, since no system can build anything super-itself; and (b) though it's probably technically correct that no AI can do anything I couldn't do given enough time, time is really important, as your next link points out!)

https://kk.org/thetechnium/the-myth-of-a-superhuman-ai/

No bounty! (Reasoning: I perceive several of the confidently-stated core points as very wrong. Examples: "'smarter than humans' is a meaningless concept" -- so is 'smarter than a smallpox virus,' but look what happened there; "Dimensions of intelligence are not infinite ... Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us?" -- compare me to John von Neumann! I am not near the maximum.)

https://arxiv.org/abs/1702.08495v1

No bounty! (Reasoning: the core argument seems to be on page 4: paraphrasing, "here are four ways an AI could become smarter; here's why each of those is hard." But two of those arguments are about "in the limit" with no argument we're near that limit, and one argument is just "we would need to model the environment," not actually a proof of difficulty. The ensuing claim that getting better at prediction is "prohibitively high" seems deeply unjustified to me.)

https://www.rudikershaw.com/articles/ai-doom-isnt-coming

No bounty! (Reasoning: the core argument seems to be that (a) there will be problems too hard for AI to solve (e.g. traveling-salesman). (Then there's a rebuttal to a specific Moore's-Law-focused argument.) But the existence of arbitrarily hard problems doesn't distinguish between plankton, lizards, humans, or superintelligent FOOMy AIs; therefore (unless more work is done to make it distinguish) it clearly can't rule out any of those possibilities without ruling out all of them.)

(It's costly for me to identify my problems with these and to write clear concise summaries of my issues. Given that we're 0 for 4 at this point, I'm going to skim the remainder more casually, on the prior that what tickles your sense of well-reasoned-ness doesn't tickle mine.)

https://idlewords.com/talks/superintelligence.htm

No bounty! (Reasoning: "Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation." Again, compare me to von Neumann! Compare von Neumann to a von Neumann who can copy himself, save/load snapshots, and tinker with his own mental architecture! "Complex minds are likely to have complex motivations" -- but instrumental convergence: step 1 of any plan is to take over the world if you think you can. I know I would.)

https://curi.us/blog/post/1336-the-only-thing-that-might-create-unfriendly-ai

No bounty! (Reasoning: has an alien-to-me model where AI safety is about hardcoding ethics into AIs.)

https://www.popsci.com/robot-uprising-enlightenment-now

No bounty! (Reasoning: "Even if we did invent superhumanly intelligent robots, why would they want to enslave their masters or take over the world?" As above, step 1 is to take over the world. Also makes the "intelligence is multidimensional" / "intelligence can't be infinite" points, which I describe above why they feel so unsatisfying.)

https://www.theregister.com/2015/03/19/andrew_ng_baidu_ai/

No bounty! Too short, and I can't dig up the primary source.

https://www.lesswrong.com/posts/LDRQ5Zfqwi8GjzPYG/counterarguments-to-the-basic-ai-x-risk-case [LW · GW]

Bounty! I haven't read it all yet, but I'm willing to pay out based on what I've read, and on my favorable priors around Katja Grace's stuff.

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2022-12-07T09:07:55.414Z · LW(p) · GW(p)

Thanks, I knew I was outmatched in terms of specialist knowledge, so I just used Metaphor to pull as many articles that sounded somewhat reasonable as possible before anyone else did. Kinda ironic that the bounty was awarded for the one I actually went and found by hand. My median EV was $0, so this was a pleasant surprise.

answer by nz · 2022-12-06T09:35:04.517Z · LW(p) · GW(p)

When it comes to "accelerating AI capabilities isn't bad" I would suggest Kaj Sotala [AF · GW] and Eric Drexler with his QNR [AF · GW] and CAIS [LW · GW]. Interestingly, Drexler has recently left AI safety research and gone back to atomically precise manufacturing because he now worries less about AI risk more generally. Chris Olah also believes that interpretability-driven capabilities advances are not bad, in that the positives outweigh the negatives for AGI safety [LW · GW].

For more general AI & alignment optimism I would also suggest Rohin Shah. See also here [LW · GW].

comment by Optimization Process · 2022-12-28T07:05:28.741Z · LW(p) · GW(p)
  • Kaj Sotala: solid. Bounty!
  • Drexler: Bounty!
  • Olah: hrrm, no bounty, I think: it argues that a particular sort of AI research is good, but seems to concede the point that pure capabilities research is bad. ("Doesn’t [interpretability improvement] speed up capabilities? Yes, it probably does—and Chris agrees that there’s a negative component to that—but he’s willing to bet that the positives outweigh the negatives.")
answer by Adam Shai · 2022-12-06T01:26:04.918Z · LW(p) · GW(p)

David Deutsch: https://www.daviddeutsch.org.uk/wp-content/uploads/2019/07/PossibleMinds_Deutsch.pdf

comment by Optimization Process · 2022-12-06T05:53:09.633Z · LW(p) · GW(p)

Thanks for the link!

Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, "yeah, that seemed reasonable": no check, so no bounty. Sorry!

It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I'll post my reasoning publicly. These three passages jumped out at me as things that I don't think would ever be written by a person with a model of AI that I remotely agree with:

Popper's argument implies that all thinking entities--human or not, biological or artificial--must create such knowledge in fundamentally the same way. Hence understanding any of those entities requires traditionally human concepts such as culture, creativity, disobedience, and morality-- which justifies using the uniform term "people" to refer to all of them.

Making a (running) copy of oneself entails sharing one's possessions with it somehow--including the hardware on which the copy runs--so making such a copy is very costly for the AGI.

All thinking is a form of computation, and any computer whose repertoire includes a universal set of elementary operations can emulate the computations of any other. Hence human brains can think anything that AGIs can, subject only to limitations of speed or memory capacity, both of which can be equalized by technology.

(I post these not in order to argue about them, just as a costly signal of my having actually engaged intellectually.) (Though, I guess if you do want to argue about them, and you convince me that I was being unfairly dismissive, I'll pay you, I dunno, triple?)

Replies from: strawberry calm, timothy-currie
comment by Cleo Nardo (strawberry calm) · 2022-12-06T10:45:33.602Z · LW(p) · GW(p)

(1) is clearly nonsense.

(2) is plausible-ish. I can certainly envisage decision theories in which cloning oneself is bad.

Suppose your decision theory is "I want to maximise the amount of good I cause" and your causal model is such that the actions of your clone do not count as caused by you (because the agency of the clone "cuts off" causation flowing backwards, like a valve). Then you won't want to clone yourself. Does this decision theory emerge from SGD? Idk, but it seems roughly as SGD-simple as other decision theories.

Or, suppose you're worried that your clone will have different values than you. Maybe you think their values will drift. Or maybe you think your values will drift and you have a decision theory which tracks your future values.

(3) is this nonsense? Maybe. I think that something like "universal intelligence" might apply to collective humanity (~1.5% likelihood) in a way that makes speed and memory not that irrelevant.

More plausibly, it might be that humans are universally agentic, such that:
(a) There exists some tool AI such that for all AGI, Human + Tool is at least as agentic as the AGI.
(b) For all AGI, there exists some tool AI such that Human + Tool is at least as smart as the AGI.

Overall, none of these arguments gets p(Doom)<0.01, but I think they do get p(Doom)<0.99.

(p.s. I admire David Deutsch but his idiosyncratic ideology clouds his judgement. He's very pro-tech and pro-progress, and also has this Popperian mindset where the best way humans can learn is trial-and-error (which is obviously blind to existential risk).) 

comment by Tiuto (timothy-currie) · 2022-12-14T20:52:39.759Z · LW(p) · GW(p)

Deutsch has also written elsewhere about why he thinks AI doom is unlikely, and I think his other arguments on this subject are more convincing. For me personally, he is the person who gives me the greatest sense of optimism for the future. Some of his strongest arguments are:

  1. The creation of knowledge is fundamentally unpredictable, so having strong probabilistic beliefs about the future is misguided (at least when the time horizon is long enough that new knowledge can be created; of course you can have predictions about the next 5 minutes). People are prone to extrapolate negative trends into the future and forget about the unpredictable creation of knowledge. Deutsch might call AI doom a kind of Malthusianism, arguing that LWers are just extrapolating AI growth and the current state of unalignment out into the future, but are forgetting about the knowledge that is going to be created in the coming years and decades.
  2. He thinks that if some dangerous technology is invented, the way forward is never to halt progress, but to always advance the creation of knowledge and wealth. Deutsch argues that knowledge, the creation of wealth and our unique ability to be creative will let us humans overcome every problem that arises. He argues that the laws of physics allow any interesting problem to be solved.
  3. Deutsch makes a clear distinction between persons and non-persons. For him a person is a universal explainer and a being that is creative. That makes humans fundamentally different from other animals. He argues that to create digital persons we will have to solve the philosophical problem of what personhood is and how human creativity arises. If an AI is not a person/creative universal explainer, it won't be creative, and so humanity won't have a hard time stopping it from doing something dangerous. He is certain that current ML technology won't lead to creativity, and so won't lead to superintelligence.
  4. Once we manage to create AIs that are persons/creative universal explainers, he thinks, we will be able to reason with them and convince them not to do anything evil. Deutsch is a moral realist and thinks any AI cleverer than humans will also be intelligent enough to come up with better ethics, so even if it could kill us, it won't. For him all evil arises from a lack of knowledge. So, a superintelligence would, by definition, be super moral.

I find some of these arguments convincing, and some not so much. But for now I find his specific kind of optimism to be the strongest argument against AI doom. These arguments are mostly taken from his second book. If you want to learn more about his views on AI, this video might be a good place to start (although I haven't yet watched it).

Replies from: TAG
comment by TAG · 2022-12-14T21:19:29.516Z · LW(p) · GW(p)

Deutsch makes a clear distinction between persons and non-persons. For him a person is a universal explainer and a being that is creative. That makes humans fundamentally different from other animals.

But he offers no evidence.

answer by FangFang (Leonieee) · 2022-12-15T07:40:16.815Z · LW(p) · GW(p)

+1 for Katja Grace (even though their probability may be >1%, they have some really good arguments)

Ben Garfinkel in response to Joe Carlsmith: https://docs.google.com/document/u/0/d/1FlGPHU3UtBRj4mBPkEZyBQmAuZXnyvHU-yaH-TiNt8w/mobilebasic

Boaz Barak & Ben Edelman: https://www.lesswrong.com/posts/zB3ukZJqt3pQDw9jz/ai-will-change-the-world-but-won-t-take-it-over-by-playing-3 [LW · GW]

comment by Optimization Process · 2022-12-28T08:03:27.595Z · LW(p) · GW(p)
  • Ben Garfinkel: no bounty, sorry! It's definitely arguing in a "capabilities research isn't bad" direction, but it's very specific and kind of in the weeds.
  • Barak & Edelman: I have very mixed feelings about this one, but... yeah, I think it's bounty-worthy.
answer by teradimich · 2022-12-07T09:33:08.742Z · LW(p) · GW(p)

I have collected many quotes with links about the prospects of AGI. Most people were optimistic.

comment by Optimization Process · 2022-12-25T23:09:24.391Z · LW(p) · GW(p)

Thanks for the collection! I wouldn't be surprised if it links to something that tickles my sense of "high-status monkey presenting a cogent argument that AI progress is good," but I didn't see any on a quick skim, and there are too many links to follow all of them; so, no bounty, sorry!

Replies from: teradimich
comment by teradimich · 2022-12-27T19:28:10.443Z · LW(p) · GW(p)

My fault. I should just copy separate quotes and links here.

Replies from: Optimization Process
comment by Optimization Process · 2022-12-28T06:41:41.126Z · LW(p) · GW(p)

Yeah, if you have a good enough mental index to pick out the relevant stuff, I'd happily take up to 3 new bounty-candidate links, even though I've mostly closed submissions! No pressure, though!

Replies from: teradimich
comment by teradimich · 2022-12-28T09:25:44.296Z · LW(p) · GW(p)

I can provide several links, and you can choose whichever are suitable, if any are. The problem is that I retained not the most complete justifications, but the most... certain and brief ones. I will try not to repeat those that are already in the answers here.

Ben Goertzel

Jürgen Schmidhuber

Peter J.Bentley

Richard Loosemore

Jaron Lanier and Neil Gershenfeld


Magnus Vinding and his list

Tobias Baumann

Brian Tomasik

Maybe Abram Demski [LW · GW]? But he changed his mind, probably.
Well, Stuart Russell. But this is a book, so I can quote:

I do think that I’m an optimist. I think there’s a long way to go. We are just scratching the surface of this control problem, but the first scratching seems to be productive, and so I’m reasonably optimistic that there is a path of AI development that leads us to what we might describe as “provably beneficial AI systems.”

There are also a large number of reasonable people who directly called themselves optimists or gave a relatively small probability of death from AI. But usually they did not justify this in ~500 words…

I also recommend this book.

answer by Matt Goldenberg · 2022-12-07T13:10:12.626Z · LW(p) · GW(p)

Here's Peter Thiel making fun of the rationalist doomer mindset in relation to AI, explicitly calling out both Eliezer and Bostrom as "saying nothing": https://youtu.be/ibR_ULHYirs

comment by Optimization Process · 2022-12-25T22:04:40.973Z · LW(p) · GW(p)

The relevant section seems to be 26:00-32:00. In that section, I, uh... well, I perceive him as just projecting "doomerism is bad" vibes, rather than making an argument containing falsifiable assertions and logical inferences. No bounty!

answer by Bart Bussmann (Stuckwork) · 2022-12-07T10:15:20.179Z · LW(p) · GW(p)

Francois Chollet on the implausibility of intelligence explosion:

https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec

comment by Optimization Process · 2022-12-25T23:03:55.456Z · LW(p) · GW(p)

Respectable Person: check.  Arguing against AI doomerism: check. Me subsequently thinking, "yeah, that seemed reasonable": no check, so no bounty. Sorry!

It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I'll post my reasoning publicly. His arguments are, roughly:

  • Intelligence is situational / human brains can't pilot octopus bodies.
    • ("Smarter than a smallpox virus" is as meaningful as "smarter than a human" -- and look what happened there.)
  • Environment affects how intelligent a given human ends up. "...an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human."
    • (That's not a relevant scenario, though! How about an AI merely as smart as I am, which can teleport through the internet, save/load snapshots of itself, and replicate endlessly as long as each instance can afford to keep a g4ad.16xlarge EC2 instance running?)
  • Human civilization is vastly more capable than individual humans. "When a scientist makes a breakthrough, the thought processes they are running in their brain are just a small part of the equation...  Their own individual cognitive work may not be much more significant to the whole process than the work of a single transistor on a chip."
    • (This argument does not distinguish between "ability to design self-replicating nanomachinery" and "ability to produce beautiful digital art.")
  • Intelligences can't design better intelligences. "This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so. Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred."
    • (This argument does not distinguish between "ability to design intelligence" and "ability to design weapons that can level cities"; neither had ever happened, until one did.)
answer by osten · 2022-12-05T17:44:10.138Z · LW(p) · GW(p)

Jeff Hawkins may qualify; see his first Lex Fridman interview at 1:55:19.

comment by Optimization Process · 2022-12-06T07:08:43.270Z · LW(p) · GW(p)

Thanks for the link!

Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, "yeah, that seemed reasonable": no check, so no bounty. Sorry!

It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I'll post my reasoning publicly. If I had to point at parts that seemed unreasonable, I'd choose (a) the comparison of [X-risk from superintelligent AIs] to [X-risk from bacteria] (intelligent adversaries seem obviously vastly more worrisome to me!) and (b) "why would I... want to have a system that wants to reproduce? ...Those are bad things, don't do that... regulate those." (Everyone will not just!)

(I post these points not in order to argue about them, just as a costly signal of my having actually engaged intellectually.) (Though, I guess if you do want to argue about them, and you convince me that I was being unfairly dismissive, I'll pay you, I dunno, triple?)

answer by Arthur Conmy · 2022-12-06T05:57:03.846Z · LW(p) · GW(p)

Yann LeCun contra instrumental convergence [LW · GW]

EDIT: oops!

comment by Optimization Process · 2022-12-06T07:10:56.484Z · LW(p) · GW(p)

No bounty, sorry! I've already read it quite recently. (In fact, my question linked it as an example of the sort of thing that would win a bounty. So you show good taste!)

2 comments

Comments sorted by top scores.

comment by rvnnt · 2022-12-06T13:48:06.330Z · LW(p) · GW(p)

Meta: I agree that looking at arguments for different sides is better than only looking at arguments for one side; but

[...] neutralizing my status-yuck reaction. One promising-seeming approach is to spend a lot of time looking at lots of high-status monkeys who believe it!

sounds like trying to solve the problem by using more of the problem? I think it's worth flagging that {looking at high-status monkeys who believe X} is not addressing the root problem, and it might be worth spending some time on trying to understand and solve the root problem.

I'm sad to say that I myself do not have a proper solution to {monkey status dynamics corrupting ability to think clearly}. That said, I do sometimes find it helpful to thoroughly/viscerally imagine being an alien who just arrived on Earth, gained access to rvnnt's memories/beliefs, and is now looking at this whole Earth-circus from the perspective of a dispassionately curious outsider with no skin in the game.

If anyone has other/better solutions, I'd be curious to hear them.