post by [deleted] · GW

This is a link post for


Comments sorted by top scores.

comment by adamShimi · 2022-05-22T11:06:38.398Z · LW(p) · GW(p)

In defence of attempting unnatural or extreme strategies

As a response to Rob's post, this sounds to me like it misunderstands what he was pointing out. I don't think he was objecting to weird ideas; I'm pretty sure he knows very well that we need new ideas to solve alignment. What he was pointing at was people who are so panicked that they start discussing literal crimes, or massive governance interventions whose effects are incredibly hard to predict.

Replies from: None
comment by [deleted] · 2022-05-22T13:56:53.454Z · LW(p) · GW(p)
comment by Ulisse Mini (ulisse-mini) · 2022-05-20T13:20:28.033Z · LW(p) · GW(p)

Upvoted because I think there should be more of a discussion around this than "Obviously getting normal people involved will only make things worse" (which seems kind of arrogant / assumes there are no good unknown unknowns).

Replies from: None
comment by [deleted] · 2022-05-20T14:02:58.427Z · LW(p) · GW(p)

Replies from: ulisse-mini, Chris_Leong
comment by Ulisse Mini (ulisse-mini) · 2022-05-20T14:28:48.860Z · LW(p) · GW(p)

Yes, I'm not convinced either way myself but here are some arguments against:

  • If the USA regulates AGI, China will get it first, which seems worse, since there's less alignment activity in China (as for US-China coordination, lol, lmao).
  • Raising awareness of AGI Alignment also raises awareness of AGI. If we communicate the "AGI" part without the "Alignment" part, we could speed up timelines.
  • If there's a massive influx of funding/interest from people who aren't well informed, it could lead to "substitution hazards": work on aligning weak models with methods that don't scale to the superintelligent case. (In climate change, people substitute "solve climate change" with "I'll reduce my own emissions", which is useless.)
  • If we convince the public AGI is a threat, there could be widespread flailing (the bad kind), which reflects badly on Alignment researchers (e.g. if DeepMind researchers are receiving threats, their System 1 might generalize to "people worried about AGI are a doomsday cult and should be disregarded").

Most of these I've heard from reading conversations on EleutherAI's Discord; Connor is typically the most pessimistic, but some others are pessimistic too (Connor's talk discusses substitution hazards in more detail).

TLDR: It's hard to control the public once they're involved. Climate change startups aren't getting public funding; the public is more interested in virtue-signaling. (In the climate case the public doesn't really make things worse, but for AGI it could be different.)

EDIT: I think I've presented the arguments badly; re-reading them, I don't find them convincing. You should seek out someone who presents them better.

Replies from: None
comment by [deleted] · 2022-05-20T16:53:28.061Z · LW(p) · GW(p)
comment by Chris_Leong · 2022-05-21T04:39:54.869Z · LW(p) · GW(p)

I suspect that mass outreach is likely to be too slow to make a big difference and so may not be worth it given the possible downsides.

That said, I am in favour of addressing public misconceptions rather than just letting them stand.

Replies from: None
comment by [deleted] · 2022-05-21T06:23:11.192Z · LW(p) · GW(p)
comment by Alex Vermillion (tomcatfish) · 2022-05-22T17:59:40.836Z · LW(p) · GW(p)

Note about formatting:

If you're in the Markdown editor, you can make footnotes instead of *, **, ***. Do that by writing one of these -> [^1] (example output: [1]) and then, at the bottom, putting [^1]: Your footnote here (see the sketch below)


  1. It doesn't have to be just numbers; it can also be a word like [^example] or [^defense_of_virtue]. The words are nice because they look good in a link. ↩︎
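For instance, a minimal sketch of what the Markdown source could look like (the footnote name my_note is just a placeholder):

```markdown
This claim deserves a caveat.[^my_note]

[^my_note]: Your footnote here
```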

Replies from: None
comment by [deleted] · 2022-05-23T04:40:19.253Z · LW(p) · GW(p)
comment by CuriousMeta · 2022-05-22T21:56:58.605Z · LW(p) · GW(p)

If you genuinely believe that the world is ending in 20 years, but are not visibly affected by this, or considering extreme actions, people may be less likely to believe that you believe what you say you do.

IMO, that's not the bottleneck. The bottleneck is people thinking you're insane, which composure mitigates.

Replies from: Daphne_W, None
comment by Daphne_W · 2022-06-18T07:51:38.518Z · LW(p) · GW(p)

It feels more to me like we're the quiet weird kid in high school who doesn't speak up or show emotion because we're afraid of getting judged or bullied. Which, fair enough, the school sort of is like that - just look at poor cryonics, or even nuclear power - but the road to popularity (let alone getting help with what's bugging us) isn't to minimize our expression to 'proper' behavior while letting ourselves be characterized by embarrassing past incidents (e.g. Roko's Basilisk) if we're noticed at all.

It isn't easy to build social status, but right now we're trying next to nothing, and we've seen that it doesn't seem to be enough.

comment by [deleted] · 2022-05-23T04:40:00.969Z · LW(p) · GW(p)

Replies from: CuriousMeta
comment by CuriousMeta · 2022-06-29T13:43:45.766Z · LW(p) · GW(p)

Both, I'd think.