Posts

bigbird's Shortform 2022-06-22T19:49:02.976Z

Comments

Comment by bigbird on Do you consider your current, non-superhuman self aligned with “humanity” already? · 2022-06-25T05:14:02.510Z · LW · GW

This is just "are you a good person" with few or no subtle twists, right?

Comment by bigbird on bigbird's Shortform · 2022-06-25T02:20:45.626Z · LW · GW

Just FYI, TT, please keep telling people about value sharding! Telling people about working solutions to alignment subproblems is a really good thing!!

Comment by bigbird on bigbird's Shortform · 2022-06-25T01:51:28.528Z · LW · GW

Ah, that wasn't my intention at all!

Comment by bigbird on bigbird's Shortform · 2022-06-25T00:20:40.832Z · LW · GW

A side-lecture Keltham gives in Eliezer's story reminds me of some interactions I'd have with my dad as a kid. We'd be playing baseball, and he'd try to teach me some mechanical motion, and if I didn't get it or seemed bored he'd say "C'mon ${name}, it's physics! F=ma!"

Comment by bigbird on Linkpost: Robin Hanson - Why Not Wait On AI Risk? · 2022-06-24T19:03:01.972Z · LW · GW

Different AIs built and run by different organizations would have different utility functions and might face equal competition from one another; that's fine. My problem is the part after that, where he implies (says?) that the Google StockMaxx AI supercluster would face stiff competition from the humans at the FBI & co.

Comment by bigbird on bigbird's Shortform · 2022-06-24T18:47:30.260Z · LW · GW

[Removed, was meant to be nice but I can see how it could be taken the other way]

Comment by bigbird on Raphaël Millière on Generalization and Scaling Maximalism · 2022-06-24T18:29:53.982Z · LW · GW

I think it'd be good to get the people who dismiss deep learning to explicitly state whether the only thing keeping us from imploding is their field's inability to solve a core problem it's explicitly trying to solve. It seems weird to answer a question like "why isn't AI X-risk a problem?" with "because the ML industry is failing to barrel toward that target fast enough".

Comment by bigbird on Linkpost: Robin Hanson - Why Not Wait On AI Risk? · 2022-06-24T18:04:40.939Z · LW · GW

I am slightly baffled that someone who has lucidly examined all of the ways in which corporations are horribly misaligned, and how principal-agent problems are everywhere, does not see the irony in saying that managing/regulating/policing those corporations will be similar to managing an AI supercluster totally united by the same utility function.

Comment by bigbird on Feature request: voting buttons at the bottom? · 2022-06-24T17:24:07.544Z · LW · GW

Why not also have author names at the bottom, while you're at it?

Comment by bigbird on LessWrong Has Agree/Disagree Voting On All New Comment Threads · 2022-06-24T00:56:26.422Z · LW · GW

too radical

Comment by bigbird on bigbird's Shortform · 2022-06-23T19:28:15.882Z · LW · GW

The craziest part of being a rationalist is regularly reading completely unrelated technical content, thinking "this person seems lucid", then going to their blog and seeing that they are Martin Sustrik.

Comment by bigbird on Half-baked AI Safety ideas thread · 2022-06-23T18:38:30.687Z · LW · GW

peaceful protest of the acceleration of AGI technology without an actual, specific, written & coherent plan for what we will do when we get there

Comment by bigbird on bigbird's Shortform · 2022-06-23T05:05:54.868Z · LW · GW

Seriously this is the funniest shit

Comment by bigbird on bigbird's Shortform · 2022-06-23T03:22:40.121Z · LW · GW

Nothing Yudkowsky has ever done has impressed me as much as noticing the timestamps on the Mad Investor Chaos glowfic. My peewee brain is in shock.

How much coordination went on behind the scenes to get the background understanding of the world? Do they list out plot points and story beats before each session? What proportion of what I'm seeing is railroaded vs. made up on the spot? I really wish I had these superpowers, damnit.

Comment by bigbird on The inordinately slow spread of good AGI conversations in ML · 2022-06-22T17:38:03.983Z · LW · GW

You went from saying that telling the general public about the problem is net negative to saying that it has an opportunity cost, and that there are probably unspecified better things to do with your time. I don't disagree with the latter.

Comment by bigbird on The inordinately slow spread of good AGI conversations in ML · 2022-06-22T16:37:54.534Z · LW · GW

One reason you might be in favor of telling the larger public about AI risk, absent a clear path to victory, is that it's the truth, and even regular people who don't have anything to immediately contribute to the problem deserve to know if they're gonna die in 10-25 years.