Vivek Hebbar's Shortform

post by Vivek Hebbar (Vivek) · 2022-11-24T02:57:56.187Z · LW · GW · 5 comments


Comments sorted by top scores.

comment by Vivek Hebbar (Vivek) · 2023-10-11T23:58:28.328Z · LW(p) · GW(p)

It's sad that agentfoundations.org links no longer work, leaving broken links in many decision theory posts (e.g. here and here [LW · GW]).

Replies from: habryka4, Vladimir_Nesov
comment by habryka (habryka4) · 2023-10-12T03:25:24.287Z · LW(p) · GW(p)

Oh, hmm, this seems like a bug on our side. I definitely set up a redirect a while ago that should make those links work. My guess is something broke in the last few months.

comment by Vladimir_Nesov · 2023-10-12T03:17:08.629Z · LW(p) · GW(p)

Thanks for the heads up. Example broken link: https://agentfoundations.org/item?id=32 currently redirects to the broken [? · GW] https://www.alignmentforum.org/item?id=32; it should redirect further to https://www.alignmentforum.org/posts/5bd75cc58225bf0670374e7d/exploiting-edt (Exploiting EDT [AF · GW][1]), archive.today snapshot.

Edit 14 Oct: It works now, even for links to comments, thanks LW team!


  1. LW confusingly replaces the link to www.alignmentforum.org given in the comment's Markdown source with a link to www.lesswrong.com when displaying the comment on LW. ↩︎
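
The fix being described is essentially an id-to-path redirect layered on top of the existing host redirect. As a purely illustrative sketch (not the actual LessWrong/Alignment Forum code; it assumes a hypothetical Express server with a hand-maintained lookup table), it might look like the following. The only mapping entry shown is the id=32 example above.

```typescript
// Purely illustrative sketch (not the actual LW/AF implementation):
// redirect old agentfoundations.org-style /item?id=N URLs to the
// corresponding alignmentforum.org post pages.
import express from "express";

// Hypothetical lookup table from old item ids to current post paths.
// The id=32 entry is the example from the comment above; the rest
// would have to come from the old site's database.
const itemIdToPath: Record<string, string> = {
  "32": "/posts/5bd75cc58225bf0670374e7d/exploiting-edt",
};

const app = express();

app.get("/item", (req, res) => {
  const id = typeof req.query.id === "string" ? req.query.id : "";
  const path = itemIdToPath[id];
  if (path) {
    // Permanent redirect to the post's canonical URL.
    res.redirect(301, `https://www.alignmentforum.org${path}`);
  } else {
    res.status(404).send("Unknown item id");
  }
});

app.listen(3000);
```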

comment by Vivek Hebbar (Vivek) · 2022-11-24T02:57:56.602Z · LW(p) · GW(p)

A framing I wrote up for a debate about "alignment tax":

  1. "Alignment isn't solved" regimes:
    1. Nobody knows how to make an AI which is {safe, general, and broadly superhuman}, with any non-astronomical amount of compute
    2. We know how to make an aligned AGI with 2 to 25 OOMs more compute than making an unaligned one
  2. "Alignment tax" regimes:
    1. We can make an aligned AGI, but it requires a compute overhead in the range 1% - 100x.  Furthermore, the situation remains multipolar and competitive for a while.
    2. The alignment tax is <0.001%, so it's not a concern.
    3. The leading coalition is further ahead than the alignment tax amount, and can and will execute a pivotal act, thus ending the risk period and rendering the alignment tax irrelevant.
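
To make those ranges concrete (my own illustration of the arithmetic, not part of the original framing; the notation $C_{\text{aligned}}$ and $C_{\text{unaligned}}$ for the compute needed to build an aligned vs. unaligned system is introduced here), one can define the tax as the relative compute overhead:

$$\text{tax} \;=\; \frac{C_{\text{aligned}} - C_{\text{unaligned}}}{C_{\text{unaligned}}}$$

On this reading, 2b's tax of <0.001% means $C_{\text{aligned}} < 1.00001\,C_{\text{unaligned}}$; 2a's 1%–100x overhead spans roughly $1.01\times$ to $\sim\!100\times$ the unaligned compute (depending on whether "100x" is read as extra or total compute); and 1b's 2–25 OOMs means $10^{2}$ to $10^{25}$ times as much.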

A person whose mainline is {1a --> 1b --> 2b or 2c} might say "alignment is unsolved, solving it is mostly a discrete thing, and alignment taxes and multipolar incentives aren't central".

Whereas someone who thinks we're already in 2a might say "alignment isn't hard; the problem is incentives and competitiveness".

Someone whose mainline is {1a --> 2a} might say "We need to both 'solve alignment at all' AND either get the tax to be really low or do coordination. Both are hard, and both are necessary."