Posts

steven0461's Shortform Feed 2019-06-30T02:42:13.858Z
Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet 2018-07-11T02:59:12.278Z
Meetup : San Jose Meetup: Park Day (X) 2016-11-28T02:46:20.651Z
Meetup : San Jose Meetup: Park Day (IX), 3pm 2016-11-01T15:40:19.623Z
Meetup : San Jose Meetup: Park Day (VIII) 2016-09-06T00:47:23.680Z
Meetup : San Jose Meetup: Park Day (VII) 2016-08-15T01:05:00.237Z
Meetup : San Jose Meetup: Park Day (VI) 2016-07-25T02:11:44.237Z
Meetup : San Jose Meetup: Park Day (V) 2016-07-04T18:38:01.992Z
Meetup : San Jose Meetup: Park Day (IV) 2016-06-15T20:29:04.853Z
Meetup : San Jose Meetup: Park Day (III) 2016-05-09T20:10:55.447Z
Meetup : San Jose Meetup: Park Day (II) 2016-04-20T06:23:28.685Z
Meetup : San Jose Meetup: Park Day 2016-03-30T04:39:09.532Z
Meetup : Amsterdam 2013-11-12T09:12:31.710Z
Bayesian Adjustment Does Not Defeat Existential Risk Charity 2013-03-17T08:50:02.096Z
Meetup : Chicago Meetup 2011-09-28T04:29:35.777Z
Meetup : Chicago Meetup 2011-07-07T15:28:57.969Z
PhilPapers survey results now include correlations 2010-11-09T19:15:47.251Z
Chicago Meetup 11/14 2010-11-08T23:30:49.015Z
A Fundamental Question of Group Rationality 2010-10-13T20:32:08.085Z
Chicago/Madison Meetup 2010-07-15T23:30:15.576Z
Swimming in Reasons 2010-04-10T01:24:27.787Z
Disambiguating Doom 2010-03-29T18:14:12.075Z
Taking Occam Seriously 2009-05-29T17:31:52.268Z
Open Thread: May 2009 2009-05-01T16:16:35.156Z
Eliezer Yudkowsky Facts 2009-03-22T20:17:21.220Z
The Wrath of Kahneman 2009-03-09T12:52:41.695Z
Lies and Secrets 2009-03-08T14:43:22.152Z

Comments

Comment by steven0461 on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-12T22:57:31.103Z · LW · GW

"Problematic dynamics happened at Leverage" and "Leverage influenced EA Summit/Global" don't imply "Problematic dynamics at Leverage influenced EA Summit/Global" if EA Summit/Global had their own filters against problematic influences. (If such filters failed, it should be possible to point out where.)

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-06T23:52:14.866Z · LW · GW

Your posts seem to be about what happens if you filter out considerations that don't go your way. Obviously, yes, that way you can get distortion without saying anything false. But the proposal here is to avoid certain topics and be fully honest about which topics are being avoided. This doesn't create even a single bit of distortion. A blank canvas is not a distorted map. People can get their maps elsewhere, as they already do on many subjects, and as they will keep having to do regardless, simply because some filtering is inevitable beneath the eye of Sauron. (Distortions caused by misestimation of filtering are going to exist whether the filter has 40% strength or 30% strength. The way to minimize them is to focus on estimating correctly. A 100% strength filter is actually relatively easy to correctly estimate. And having the appearance of a forthright debate creates perverse incentives for people to distort their beliefs so they can have something inoffensive to be forthright about.)

The people going after Steve Hsu almost entirely don't care whether LW hosts Bell Curve reviews. If adjusting allowable topic space gets us 1 util and causes 2 utils of damage distributed evenly across 99 Sargons and one Steve Hsu, that's only 0.02 Hsu utils lost, which seems like a good trade.

I don't have a lot of verbal energy and find the "competing grandstanding walls of text" style of discussion draining, and I don't think the arguments I'm making are actually landing for some reason, and I'm on the verge of tapping out. Generating and posting an IM chat log could be a lot more productive. But people all seem pretty set in their opinions, so it could just be a waste of energy.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-06T20:16:36.821Z · LW · GW

due to the mechanisms described in "Entangled Truths, Contagious Lies" and "Dark Side Epistemology"

I'm not advocating lying. I'm advocating locally preferring to avoid subjects that force people to either lie or alienate people into preferring lies, or both. In the possible world where The Bell Curve is mostly true, not talking about it on LessWrong will not create a trail of false claims that have to be rationalized. It will create a trail of no claims. LessWrongers might fill their opinion vacuum with false claims from elsewhere, or with true claims, but either way, this is no different from what they already do about lots of subjects, and does not compromise anyone's epistemic integrity.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-05T23:45:08.849Z · LW · GW

"Offensive things" isn't a category determined primarily by the interaction of LessWrong and people of the sneer. These groups exist in a wider society that they're signaling to. It sounds like your reasoning is "if we don't post about the Bell Curve, they'll just start taking offense to technological forecasting, and we'll be back where we started but with a more restricted topic space". But doing so would make the sneerers look stupid, because society, for better or worse, considers The Bell Curve to be offensive and does not consider technological forecasting to be offensive.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-05T23:06:13.343Z · LW · GW

You'd have to use a broad sense of "political" to make this true (maybe amounting to "controversial"). Nobody is advocating blanket avoidance of controversial opinions, only blanket avoidance of narrow-sense politics, and even then with a strong exception of "if you can make a case that it's genuinely important to the fate of humanity in the way that AI alignment is important to the fate of humanity, go ahead". At no point could anyone have used the proposed norms to prevent discussion of AI alignment.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-05T22:54:17.790Z · LW · GW

Another way this matters: Offense takers largely get their intuitions about "will taking offense achieve my goals" from experience in a wide variety of settings and not from LessWrong specifically. Yes, theoretically, the optimal strategy is for them to estimate "will taking offense specifically against LessWrong achieve my goals", but most actors simply aren't paying enough attention to form a target-by-target estimate. Viewing this as a simple game theory textbook problem might lead you to think that adjusting our behavior to avoid punishment would lead to an equal number of future threats of punishment against us and is therefore pointless, when actually it would instead lead to future threats of punishment against some other entity that we shouldn't care much about, like, I don't know, fricking Sargon of Akkad.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-05T22:44:40.693Z · LW · GW

I think simplifying all this to a game with one setting and two players with human psychologies obscures a lot of what's actually going on. If you look at people of the sneer, it's not at all clear that saying offensive things thwarts their goals. They're pretty happy to see offensive things being said, because it gives them opportunities to define themselves against the offensive things and look like vigilant guardians against evil. Being less offensive, while paying other costs to avoid having beliefs be distorted by political pressure (e.g. taking it elsewhere, taking pains to remember that politically pressured inferences aren't reliable), arguably de-energizes such people more than it emboldens them.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-05T22:22:43.017Z · LW · GW

My claim was:

if this model is partially true, then something more nuanced than an absolutist "don't give them an inch" approach is warranted

It's obvious to everyone in the discussion that the model is partially false and there's also a strategic component to people's emotions, so repeating this is not responsive.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-05T05:50:06.518Z · LW · GW

I think an important cause of our disagreement is you model the relevant actors as rational strategic consequentialists trying to prevent certain kinds of speech, whereas I think they're at least as much like a Godzilla that reflexively rages in pain and flattens some buildings whenever he's presented with an idea that's noxious to him. You can keep irritating Godzilla until he learns that flattening buildings doesn't help him achieve his goals, but he'll flatten buildings anyway because that's just the kind of monster he is, and in this way, you and Godzilla can create arbitrary amounts of destruction together. And (to some extent) it's not like someone constructed a reflexively-acting Godzilla so they could control your behavior, either, which would make it possible to deter that person from making future Godzillas. Godzillas seem (to some extent) to arise spontaneously out of the social dynamics of large numbers of people with imperfect procedures for deciding what they believe and care about. So it's not clear to me that there's an alternative to just accepting the existence of Godzilla and learning as best as you can to work around him in those cases where working around him is cheap, especially if you have a building that's unusually important to keep intact. All this is aside from considerations of mercy to Godzilla or respect for Godzilla's opinions.

If I make some substitutions in your comment to illustrate this view of censorious forces as reflexive instead of strategic, it goes like this:

The implied game is:

Step 1: The bull decides what is offensively red

Step 2: LW people decide what cloths to wave given this

Steven is proposing a policy for step 2 that doesn't wave anything that the bull has decided is offensively red. This gives the bull the ability to prevent arbitrary cloth-waving.

If the bull is offended by negotiating for more than $1 in the ultimatum game, Steven's proposed policy would avoid doing that, thereby yielding. (The money here is metaphorical, representing benefits LW people could get by waving cloths without being gored by the bull.)

I think "wave your cloths at home or in another field even if it's not as good" ends up looking clearly correct here, and if this model is partially true, then something more nuanced than an absolutist "don't give them an inch" approach is warranted.

edit: I should clarify that when I say Godzilla flattens buildings, I'm mostly not referring to personal harm to people with unpopular opinions, but to epistemic closure to whatever is associated with those people, which you can see in action every day on e.g. Twitter.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-05T04:48:56.218Z · LW · GW

standing up to all kinds of political entryism seems to me obviously desirable for its own sake

I agree it's desirable for its own sake, but meant to give an additional argument why even those people who don't agree it's desirable for its own sake should be on board with it.

if for some reason left-wing political entryism is fundamentally worse than right-wing political entryism then surely that makes it not necessarily hypocritical to take a stronger stand against the former than against the latter

Not necessarily objectively hypocritical, but hypocritical in the eyes of a lot of relevant "neutral" observers.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-05T04:42:13.719Z · LW · GW

"Stand up to X by not doing anything X would be offended by" is not what I proposed. I was temporarily defining "right wing" as "the political side that the left wing is offended by" so I could refer to posts like the OP as "right wing" without setting off a debate about how actually the OP thinks of it more as centrist that's irrelevant to the point I was making, which is that "don't make LessWrong either about left wing politics or about right wing politics" is a pretty easy to understand criterion and that invoking this criterion to keep LW from being about left wing politics requires also keeping LessWrong from being about right wing politics. Using such a criterion on a society-wide basis might cause people to try to redefine "1+1=2" as right wing politics or something, but I'm advocating using it locally, in a place where we can take our notion of what is political and what is not political as given from outside by common sense and by dynamics in wider society (and use it as a Schelling point boundary for practical purposes without imagining that it consistently tracks what is good and bad to talk about). By advocating keeping certain content off one particular website, I am not advocating being "maximally yielding in an ultimatum game", because the relevant game also takes place in a whole universe outside this website (containing your mind, your conversations with other people, and lots of other websites) that you're free to use to adjust your degree of yielding. Nor does "standing up to political entryism" even imply standing up to offensive conclusions reached naturally in the course of thinking about ideas sought out for their importance rather than their offensiveness or their symbolic value in culture war.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-04T21:22:41.319Z · LW · GW

Some more points I want to make:

  • I don't care about moderation decisions for this particular post, I'm just dismayed by how eager LessWrongers seem to be to rationalize shooting themselves in the foot, which is also my foot and humanity's foot, for the short term satisfaction of getting to think of themselves as aligned with the forces of truth in a falsely constructed dichotomy against the forces of falsehood.
  • On any sufficiently controversial subject, responsible members of groups with vulnerable reputations will censor themselves if they have sufficiently unpopular views, which makes discussions on sufficiently controversial subjects within such groups a sham. The rationalist community should oppose shams instead of encouraging them.
  • Whether political pressure leaks into technical subjects mostly depends on people's meta-level recognition that inferences subject to political pressure are unreliable, and hosting sham discussions makes this recognition harder.
  • The rationalist community should avoid causing people to think irrationally, and a very frequent type of irrational thinking (even among otherwise very smart people) is "this is on the same website as something offensive, so I'm not going to listen to it". "Let's keep putting important things on the same website as unimportant and offensive things until they learn" is not a strategy that I expect to work here.
  • It would be really nice to be able to stand up to left wing political entryism, and the only principled way to do this is to be very conscientious about standing up to right wing political entryism, where in this case "right wing" means any politics sufficiently offensive to the left wing, regardless of whether it thinks of itself as right wing.

I'm not as confident about these conclusions as it sounds, but my lack of confidence comes from seeing that people whose judgment I trust disagree, and it does not come from the arguments that have been given, which have not seemed to me to be good.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-04T20:21:52.414Z · LW · GW

I agree that LW shouldn't be a zero-risk space, that some people will always hate us, and that this is unavoidable and only finitely bad. I'm not persuaded by reasons 2 and 3 from your comment at all in the particular case of whether people should talk about Murray. A norm of "don't bring up highly inflammatory topics unless they're crucial to the site's core interests" wouldn't stop Hanson from posting about ems, or grabby aliens, or farmers and foragers, or construal level theory, or Aumann's theorem, and anyway, having him post on his own blog works fine. AI alignment was never remotely as political as The Bell Curve is. (I guess some conceptual precursors came from libertarian email lists in the 90s?) If AI alignment becomes very political (e.g. because people talk about it side by side with Bell Curve reviews), we can invoke the "crucial to the site's core interests" thing and keep discussing it anyway, ideally taking some care to avoid making people be stupid about it. If someone wants to argue that having Bell Curve discussion on r/TheMotte instead of here would cause us to lose out on something similarly important, I'm open to hearing it.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-04T05:45:19.286Z · LW · GW

My observation tells me that our culture is currently in need of marginally more risky spaces, even if the number of safe spaces remains the same.

Our culture is desperately in need of spaces that are correct about the most important technical issues, and insisting that the few such spaces that exist have to also become politically risky spaces jeopardizes their ability to function for no good reason given that the internet lets you build as many separate spaces as you want elsewhere.

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-04T00:07:23.616Z · LW · GW

And so you need to make a pitch not just "this pays for itself now" but instead something like "this will pay for itself for the whole trajectory that we care about, or it will be obvious when we should change our policy and it no longer pays for itself."

I don't think it will be obvious, but I think we'll be able to make an imperfect estimate of when to change the policy that's still better than giving up on future evaluation of such tradeoffs and committing reputational murder-suicide immediately. (I for one like free speech and will be happy to advocate for it on LW when conditions change enough to make it seem anything other than pointlessly self-destructive.)

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-03T23:56:20.444Z · LW · GW

I agree that the politics ban is a big sacrifice (regardless of whether the benefits outweigh it or not)

A global ban on political discussion by rationalists might be a big sacrifice, but it seems to me there are no major costs to asking people to take it elsewhere.

(I just edited "would be a big sacrifice" to "might be a big sacrifice", because the same forces that cause a ban to seem like a good idea will still distort discussions even in the absence of a ban, and perhaps make them worse than useless because they encourage the false belief that a rational discussion is being had.)

Comment by steven0461 on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-03T03:13:26.405Z · LW · GW

This could be through any number of mechanisms like

A story I'm worried about goes something like:

  • LW correctly comes to believe that for an AI to be aligned, its cognitive turboencabulator needs a base plate of prefabulated amulite
  • the leader of an AI project tries to make the base plate out of unprefabulated amulite
  • another member of the project mentions off-hand one time that some people think it should be prefabulated
  • the project leader thinks, "prefabulation, wasn't that one of the pet issues of those Bell Curve bros? well, whatever, let's just go ahead"
  • the AI is built as planned and attains superhuman intelligence, but its cognitive turboencabulator fails, causing human extinction

Comment by steven0461 on 2020 PhilPapers Survey Results · 2021-11-02T22:15:41.856Z · LW · GW

Taking the second box is greedy and greed is a vice. This might also explain one-boxing by Marxists.

Comment by steven0461 on 2020 PhilPapers Survey Results · 2021-11-02T22:07:47.726Z · LW · GW

I also wonder if anyone has argued that you-the-atoms should two-box, you-the-algorithm should one-box, and which entity "you" refers to is just a semantic issue.

Comment by steven0461 on 2020 PhilPapers Survey Results · 2021-11-02T21:53:02.979Z · LW · GW

With Newcomb's Problem, I always wonder how much the issue is confounded by formulations like "Omega predicted correctly in 99% of past cases", where given some normally reasonable assumptions (even really good predictors probably aren't running a literal copy of your mind), it's easy to conclude you're being reflective enough about the decision to be in a small minority of unpredictable people. I would be interested in seeing statistics on a version of Newcomb's Problem that explicitly said Omega predicts correctly all of the time because it runs an identical copy of you and your environment.
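
To make the worry concrete, here's a minimal sketch (my own illustration, using the standard $1,000 / $1,000,000 payoffs, which the comment doesn't spell out, and an evidential-style expected value) of how the comparison shifts with your credence that Omega mispredicts you specifically:

```python
# Toy expected-value comparison for Newcomb's Problem, using the standard
# payoffs: $1,000 in the transparent box, $1,000,000 in the opaque box iff
# Omega predicted one-boxing. "q" is your credence that Omega mispredicts
# *you in particular*, which the "99% accurate on past cases" framing tempts
# reflective people to set much higher than 0.01.

def ev_one_box(q: float) -> float:
    return (1 - q) * 1_000_000                 # the million is there only if predicted correctly

def ev_two_box(q: float) -> float:
    return q * 1_001_000 + (1 - q) * 1_000     # mispredicted: both boxes are full

for q in [0.01, 0.2, 0.5, 0.7]:
    print(f"q={q:.2f}  one-box={ev_one_box(q):>12,.0f}  two-box={ev_two_box(q):>12,.0f}")
# Two-boxing only pulls ahead once q reaches roughly 0.5, i.e. once you've
# talked yourself into being in a large unpredictable minority.
```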

Comment by steven0461 on Tell the Truth · 2021-11-01T22:34:56.803Z · LW · GW

Obviously the idea is not to never risk making enemies, but the future is to some extent a hostage negotiation, and, airy rhetoric aside, it's a bad idea to insult a hostage taker's mother, causing him to murder lots of hostages, even if she's genuinely a bad person who deserves to be called out.

Comment by steven0461 on Tell the Truth · 2021-10-31T03:25:45.578Z · LW · GW

Even in the complete absence of personal consequences, expressing unpopular opinions still brings disrepute on other opinions that are logically related or held by the same people. E.g., if hypothetically there were a surprisingly strong argument for murdering puppies, I would keep it to myself, because only people who care about surprisingly strong arguments would accept it, and others would hate them for it, impeding their ability to do all the less horrible and more important things that there are surprisingly strong arguments for.

Comment by steven0461 on Voting for people harms people · 2021-10-30T02:25:21.341Z · LW · GW

The harms described in these articles mostly arise from politicization associated with voting rather than from the act of voting itself. If you focused on that politicization, without asking people to give up their direct influence on which candidates were elected, I think there'd be much less unwillingness to discuss.

Comment by steven0461 on steven0461's Shortform Feed · 2021-10-25T21:09:03.588Z · LW · GW

There's still a big gap between Betfair/Smarkets (22% chance Trump president) and Predictit/FTX (29-30%). I assume it's not the kind of thing that can be just arbitraged away.
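
As a rough sketch of why the gap might not be free money, here's the back-of-the-envelope check I have in mind; the fee rates are placeholder assumptions, not the venues' actual schedules, and capital lockup, withdrawal fees, and venue risk are ignored:

```python
# Buy YES where it's cheap and NO where YES is expensive; exactly one side
# pays out $1. Fee rates below are illustrative placeholders only.

def arb_profit_per_dollar(p_low: float, p_high: float,
                          fee_low: float, fee_high: float) -> float:
    """Guaranteed profit per $1 of payout, holding YES at p_low and NO at (1 - p_high)."""
    cost = p_low + (1 - p_high)                          # total outlay per $1 of payout
    gross = 1.0 - cost                                   # locked-in gross profit
    # crude haircut: whichever venue's commission on the winning side is worst
    worst_case_fee = max(fee_low * (1 - p_low), fee_high * p_high)
    return gross - worst_case_fee

# 22% vs. ~30%, with assumed 2% and 10% commissions on winnings
print(arb_profit_per_dollar(p_low=0.22, p_high=0.30, fee_low=0.02, fee_high=0.10))
```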

Comment by steven0461 on steven0461's Shortform Feed · 2021-10-25T21:02:53.469Z · LW · GW

Another thing I feel like I see a lot on LW is disagreements where there's a heavy thumb of popularity or reputational costs on one side of the scale, but nobody talks about the thumb. That makes it hard to tell whether people are internally trying to correct for the thumb or just substituting it for whatever parts of their reasoning or intuition they're not explicitly talking about, and a lot of what looks like disagreement about the object-level arguments being presented may actually be disagreement about the thumb. For example, in the case of the parent comment, maybe such a thumb is driving judgments of the relative values of oranges and pears.

Comment by steven0461 on steven0461's Shortform Feed · 2021-10-25T20:43:41.619Z · LW · GW

What's the name of the proto-fallacy that goes like "you should exchange your oranges for pears because then you'll have more pears", suggesting that the question can be resolved, or has already been resolved, without ever considering the relative value of oranges and pears? I feel like I see it everywhere, including on LW.

Comment by steven0461 on steven0461's Shortform Feed · 2021-10-24T00:13:37.040Z · LW · GW

Suppose you have an AI powered world stabilization regime. Suppose somebody makes a reasonable moral argument about how humanity's reflection should proceed, like "it's unfair for me to have less influence just because I hate posting on Facebook". Does the world stabilization regime now add a Facebook compensation factor to the set of restrictions it enforces? If it does things like this all the time, doesn't the long reflection just amount to a stage performance of CEV with human actors? If it doesn't do things like this all the time, doesn't that create a serious risk of the long term future being stolen by some undesirable dynamic?

Comment by steven0461 on Petrov Day Retrospective: 2021 · 2021-10-22T01:24:54.192Z · LW · GW

If Petrov pressing the button would have led to a decent chance of him being incinerated by American nukes, and if he valued his life much more than he valued avoiding the consequences he could expect to face for not pressing, then he had no reason to press the button even from a purely selfish perspective, and pressing it would have been a purely destructive act, like in past LW Petrov Days, or maybe a kind of Russian roulette.

Comment by steven0461 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T00:48:50.023Z · LW · GW

Well, I don't think it's obviously objectionable, and I'd have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like "we'd all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we're talking to them but whose appeal comes from emotional predispositions that we sought out in them that aren't generally either truth-tracking or good for them" seems plausible to me. But I think it's clearly not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which is what moved me to comment.

Comment by steven0461 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T23:14:45.490Z · LW · GW

If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn't care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.

Comment by steven0461 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T20:29:09.862Z · LW · GW

There's a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This is something that seems to me to be true independently of the correctness of the underlying arguments. You don't have to already agree about the robot apocalypse to be able to see why math contest winners would be better able to understand arguments for or against the robot apocalypse.

If Yudkowsky and friends were deliberately targeting arguments for short AI timelines at people who already had a sense of a foreshortened future, then that would be more toward the latter end of the scale, and I think you'd object to that targeting strategy even though they'd be able to make an argument structurally the same as your comment.

Comment by steven0461 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T21:26:05.525Z · LW · GW

It sounds like they meant they used to work at CFAR, not that they currently do.

The interpretation of "I'm a CFAR employee commenting anonymously to avoid retribution" as "I'm not a CFAR employee, but used to be one" seems to me sufficiently strained and non-obvious that we should treat the commenter's choice not to use clearer language as a deliberate attempt to get readers to believe they're a current CFAR employee.

Comment by steven0461 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T20:35:08.905Z · LW · GW

Or maybe you should move out of the Bay Area, a.s.a.p. (Like, half seriously, I wonder how much of this epistemic swamp is geographically determined. Not having the everyday experience, I don't know.)

I wonder what the rationalist community would be like if, instead of having been forced to shape itself around risks of future superintelligent AI in the Bay Area, it had formed around artificial computing superhardware in Taiwan, or artificial superfracking in North Dakota, or artificial shipping supercontainers in Singapore, or something. (Hypothetically, let's say the risks and opportunities of these technologies were as great, and as technically and philosophically complex, as those of AI in our universe.)

Comment by steven0461 on How to think about and deal with OpenAI · 2021-10-13T23:08:27.401Z · LW · GW

Hmm, I was imagining that in Anna's view, it's not just about what concrete social media or other venues exist, but about some social dynamic that makes even the informal benevolent conspiracy part impossible or undesirable.

Comment by steven0461 on How to think about and deal with OpenAI · 2021-10-13T20:06:17.301Z · LW · GW

a benevolent conspiracy that figured out which conversations could/couldn’t nudge AI politics in useful ways

functional private fora with memory (in the way that a LW comment thread has memory) that span across organizations

What's standing in the way of these being created?

Comment by steven0461 on What role should LW play in AI Safety? · 2021-10-04T19:34:28.688Z · LW · GW

By being the community out of which MIRI arose

I would say the LW community arose out of MIRI.

Comment by steven0461 on Great Power Conflict · 2021-09-17T20:14:02.460Z · LW · GW

Preemptive attack. Albania thinks that Botswana will soon become much more powerful and that this would be very bad. Calculating that it can win—or accepting a large chance of devastation rather than simply letting Botswana get ahead—Albania attacks preemptively.

FWIW, many distinguish between preemptive and preventive war, where the scenario you described falls under "preventive", and "preemptive" implies an imminent attack from the other side.

Comment by steven0461 on A simulation basilisk · 2021-09-17T19:40:18.960Z · LW · GW

Agents using simulations to influence other simulations seems less likely than agents using simulations to influence reality, which after all is causally upstream of all the simulations.

Comment by steven0461 on Why didn't we find katas for rationality? · 2021-09-14T19:58:20.822Z · LW · GW

People like having superpowers and don't like obeying duties, so those who try to spread rationality are pressured to present it as a superpower instead of a duty.

Comment by steven0461 on Why didn't we find katas for rationality? · 2021-09-14T19:55:05.292Z · LW · GW

Why aren't there katas for diet? Because diet is about not caving to strong temptations inherent in human nature, and it's hard to practice not doing something. Maybe rationality is the same, but instead of eating bad foods, the temptation is allowing non-truth-tracking factors to influence your beliefs.

Comment by steven0461 on wunan's Shortform · 2021-09-12T18:02:20.803Z · LW · GW

There was some previous discussion here.

Comment by steven0461 on [deleted post] 2021-09-07T22:31:44.497Z

Why would they want the state of the universe to be unnatural on Earth but natural outside the solar system?

edit: I think aliens that wanted to prevent us from colonizing the universe would either destroy us, or (if they cared about us) help us, or (if they had a specific weird kind of moral scruples) openly ask/force us not to colonize, or (if they had a specific weird kind of moral scruples and cared about being undetected or not disturbing the experiment) undetectably guide us away from colonization. Sending a very restricted ambiguous signal seems to require a further unlikely motivation.

Comment by steven0461 on steven0461's Shortform Feed · 2021-09-07T19:12:56.328Z · LW · GW

According to electionbettingodds.com, this morning, Trump president 2024 contracts went up from about 0.18 to 0.31 on FTX but not elsewhere. Not sure what's going on there or if people can make money on it.

Comment by steven0461 on [deleted post] 2021-09-07T18:54:38.792Z

perhaps the aliens are like human environmentalists who like to keep everything in its natural state

Surely if they were showing themselves to the military then that would put us in an unnatural state.

Comment by steven0461 on Could you have stopped Chernobyl? · 2021-08-29T00:02:33.542Z · LW · GW

Preventing a one-off disastrous experiment like Chernobyl isn't analogous to the coming problem of ensuring the safety of a whole field whose progress is going to continue to be seen as crucial for economic, humanitarian, military, etc. reasons. It's not even like there's a global AI control room where one could imagine panicky measures making a difference. The only way to make things work out in the long term is to create a consensus about safety in the field. If experts feel like safety advocates are riling up mobs against them, it will just harden their view of the situation as nothing more than a conflict between calm, reasonable scientists and e.g. over-excitable doomsday enthusiasts unbalanced by fictional narratives.

Comment by steven0461 on What 2026 looks like · 2021-08-06T19:36:09.129Z · LW · GW

Is it naive to imagine AI-based anti-propaganda would also be significant? E.g. "we generated AI propaganda for 1000 true and 1000 false claims and trained a neural net to distinguish between the two, and this text looks much more like propaganda for a false claim".
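
In case that sounds exotic, the classifier half of the idea can be sketched in a few lines; the texts and labels below are tiny made-up stand-ins, and a real system would need large volumes of model-generated advocacy with verified labels:

```python
# Minimal sketch of "train a model to tell advocacy for true claims from
# advocacy for false claims". The data is a made-up toy; the point is only
# the shape of the pipeline, not that this would actually work at this scale.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Every dataset we could check points the same way on this claim.",
    "Independent replications keep finding the same effect.",
    "They don't want you to know this, but trust me, it's all a cover-up.",
    "No evidence needed: anyone who disagrees is being paid to lie to you.",
]
labels = [1, 1, 0, 0]  # 1 = generated in support of a true claim, 0 = a false one

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

new_text = ["Wake up: the experts are lying and the real numbers are hidden."]
print(clf.predict_proba(new_text)[0, 1])  # estimated P(written in support of a true claim)
```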

What does GDP growth look like in this world?

Another reason the hype fades is that a stereotype develops of the naive basement-dweller whose only friend is a chatbot and who thinks it’s conscious and intelligent.

Things like this go somewhat against my prior for how long it takes for culture to change. I can imagine it becoming an important effect over 10 years more easily than over 1 year. Splitting the internet into different territories also sounds to me like a longer term thing.

Comment by steven0461 on steven0461's Shortform Feed · 2021-07-02T19:32:13.099Z · LW · GW

It's complicated. Searching the article for "structural uncertainty" gives 10 results about ways they've tried to deal with it. I'm not super confident that they've dealt with it adequately.

Comment by steven0461 on steven0461's Shortform Feed · 2021-07-02T19:25:04.992Z · LW · GW

There's a meme in EA that climate change is particularly bad because of a nontrivial probability that sensitivity to doubled CO2 is in the extreme upper tail. As far as I can tell, that's mostly not real. This paper seems like a very thorough Bayesian assessment that gives 4.7 K as a 95% upper bound, with values for temperature rise by 2089 quite tightly constrained (Fig 23). I'd guess this is an overestimate based on conservative choices represented by Figs 11, 14, and 18. The 5.7 K 95% upper bound after robustness tests comes from changing the joint prior over feedbacks to create a uniform prior on sensitivity, which as far as I can tell is unjustified. Maybe someone who's better at rhetoric than me should figure out how to frame all this in a way that predictably doesn't make people flip out. I thought I should post it, though.
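
For what it's worth, the sensitivity of that upper tail to where the prior lives can be shown with a toy calculation; the distributions and numbers below are mine and purely illustrative, not the paper's:

```python
# Toy illustration of why a prior over feedbacks vs. a uniform prior directly
# on sensitivity changes the upper tail. Uses S = S0 / (1 - f), with S0 ~ 1.2 K
# as a rough no-feedback (Planck) response; the feedback prior's mean/sd are made up.
import numpy as np

rng = np.random.default_rng(0)
S0 = 1.2

f = rng.normal(0.6, 0.15, 1_000_000)
f = f[(f > -1.0) & (f < 0.95)]          # drop unphysical / runaway samples for the toy
S_from_feedback_prior = S0 / (1.0 - f)

S_uniform_prior = rng.uniform(0.0, 10.0, 1_000_000)

for name, S in [("prior on feedbacks", S_from_feedback_prior),
                ("uniform prior on S", S_uniform_prior)]:
    print(f"{name:20s} P(S > 4.5 K) = {np.mean(S > 4.5):.2f}")
```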

For forecasting purposes, I'd recommend this and this as well, relevant to the amount of emissions to expect from nature and humans respectively.

Comment by steven0461 on steven0461's Shortform Feed · 2021-06-18T21:56:01.922Z · LW · GW

Thinking out loud about some arguments about AI takeoff continuity:

If a discontinuous takeoff is more likely to be local to a particular agent or closely related set of agents with particular goals, and a continuous takeoff is more likely to be global, that seems like it incentivizes the first agent capable of creating a takeoff to make sure that that takeoff is discontinuous, so that it can reap the benefits of the takeoff being local to that agent. This seems like an argument for expecting a discontinuous takeoff and an important difference with other allegedly analogous technologies.

I have some trouble understanding the "before there are strongly self-improving AIs there will be moderately self-improving AIs" argument for continuity. Is there any reason to think the moderate self-improvement ability won't be exactly what leads to the strong self-improvement ability? Before there's an avalanche, there's probably a smaller avalanche, but maybe the small avalanche is simply identical to the early part of the large avalanche.

Where have these points been discussed in depth?

Comment by steven0461 on steven0461's Shortform Feed · 2021-06-09T19:33:19.146Z · LW · GW

Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth (William D. Nordhaus)

Has anyone looked at this? Nordhaus claims current trends suggest the singularity is not near, though I wouldn't expect current trends outside AI to be very informative. He does seem to acknowledge x-risk in section Xf, which I don't think I've seen from other top economists.