Posts

steven0461's Shortform Feed 2019-06-30T02:42:13.858Z · score: 36 (7 votes)
Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet 2018-07-11T02:59:12.278Z · score: 28 (19 votes)
Meetup : San Jose Meetup: Park Day (X) 2016-11-28T02:46:20.651Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (IX), 3pm 2016-11-01T15:40:19.623Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (VIII) 2016-09-06T00:47:23.680Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (VII) 2016-08-15T01:05:00.237Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (VI) 2016-07-25T02:11:44.237Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (V) 2016-07-04T18:38:01.992Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (IV) 2016-06-15T20:29:04.853Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (III) 2016-05-09T20:10:55.447Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (II) 2016-04-20T06:23:28.685Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day 2016-03-30T04:39:09.532Z · score: 1 (2 votes)
Meetup : Amsterdam 2013-11-12T09:12:31.710Z · score: 4 (5 votes)
Bayesian Adjustment Does Not Defeat Existential Risk Charity 2013-03-17T08:50:02.096Z · score: 48 (48 votes)
Meetup : Chicago Meetup 2011-09-28T04:29:35.777Z · score: 3 (4 votes)
Meetup : Chicago Meetup 2011-07-07T15:28:57.969Z · score: 2 (3 votes)
PhilPapers survey results now include correlations 2010-11-09T19:15:47.251Z · score: 6 (7 votes)
Chicago Meetup 11/14 2010-11-08T23:30:49.015Z · score: 8 (9 votes)
A Fundamental Question of Group Rationality 2010-10-13T20:32:08.085Z · score: 10 (11 votes)
Chicago/Madison Meetup 2010-07-15T23:30:15.576Z · score: 9 (10 votes)
Swimming in Reasons 2010-04-10T01:24:27.787Z · score: 17 (18 votes)
Disambiguating Doom 2010-03-29T18:14:12.075Z · score: 16 (17 votes)
Taking Occam Seriously 2009-05-29T17:31:52.268Z · score: 28 (28 votes)
Open Thread: May 2009 2009-05-01T16:16:35.156Z · score: 4 (5 votes)
Eliezer Yudkowsky Facts 2009-03-22T20:17:21.220Z · score: 158 (230 votes)
The Wrath of Kahneman 2009-03-09T12:52:41.695Z · score: 25 (26 votes)
Lies and Secrets 2009-03-08T14:43:22.152Z · score: 14 (25 votes)

Comments

Comment by steven0461 on Forecasting Thread: Existential Risk · 2020-09-23T17:40:26.470Z · score: 2 (1 votes) · LW · GW

I'd be interested to hear what size of delay you used, and what your reasoning for that was.

I didn't think very hard about it and just eyeballed the graph. Probably a majority of the probability mass on "negligible on this scale" and a minority on "years or (less likely) decades" for the case where we've defined AGI too loosely and the first AGI isn't a huge deal, or things go slowly for some other reason.

Was your main input into this parameter your perceptions of what other people would believe about this parameter?

Yes, but only because those other people seem to make reasonable arguments, so that's kind of like believing it because of the arguments instead of the people. Some vague model of the world is probably also involved, like "avoiding AI x-risk seems like a really hard problem but it's probably doable with enough effort and increasingly many people are taking it very seriously".

If so, I'd be interested to hear whose beliefs you perceive yourself to be deferring to here.

MIRI people and Wei Dai for pessimism (though I'm not sure it's their view that it's worse than 50/50), Paul Christiano and other researchers for optimism. 

Comment by steven0461 on Forecasting Thread: Existential Risk · 2020-09-22T17:33:09.240Z · score: 9 (6 votes) · LW · GW

For my prediction (which I forgot to save as a linkable snapshot before refreshing, oops), roughly what I did was: take my distribution for AGI timing (which ended up quite close to the thread average); add an uncertain but probably short delay for a major x-risk factor (probably superintelligence) to appear as a result; weight it by the probability that it turns out badly instead of well (averaging to about 50% because of what seems like a wide range of opinions among reasonable, well-informed people, but decreasing over time to represent an increasing chance that we'll know what we're doing); and assume that non-AI risks are pretty unlikely to be existential and don't affect the final picture very much. To an extent, AGI can stand in for highly advanced technology in general.

If I start with a prior where the 2030s and the 2090s are equally likely, it feels kind of wrong to say I have the 7-to-1 evidence for the former that I'd need for this distribution. On the other hand, if I made the same argument for the 2190s and the 2290s, I'd quickly end up with an unreasonable distribution. So I don't know.
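(For illustration only: a minimal Monte Carlo sketch of the kind of composition described above. All the distributions and numbers here are made-up stand-ins, not my actual forecast.)

```python
import numpy as np

# Hypothetical stand-ins for the pieces described above; the numbers only
# show how the pieces compose, they are not the actual estimates.
rng = np.random.default_rng(0)
n = 100_000

agi_year = rng.normal(2060, 20, n)   # stand-in for an AGI timing distribution
delay = rng.exponential(3, n)        # uncertain but probably short delay (years)
risk_year = agi_year + delay         # when a major x-risk factor appears

# Probability the transition turns out badly: around 50% on average,
# decreasing over time to represent an increasing chance that we'll know
# what we're doing.
p_bad = np.clip(0.5 - 0.003 * (risk_year - 2040), 0.1, 0.5)
goes_badly = rng.random(n) < p_bad

print("P(existential catastrophe by 2100):",
      np.mean((risk_year < 2100) & goes_badly))
```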

Comment by steven0461 on Artificial Intelligence: A Modern Approach (4th edition) on the Alignment Problem · 2020-09-18T20:02:34.886Z · score: 15 (5 votes) · LW · GW

Some predictable counterpoints:

  • maybe we won because we were cautious
  • we could have won harder
  • many relevant thinkers still pooh-pooh the problem
  • it's not just the basic problem statement that's important, but potentially many other ideas that aren't yet popular
  • picking battles isn't lying
  • arguing about sensitive subjects is fun, and I don't think people are very tempted to find excuses to avoid it
  • there are other things that are potentially the most important in the world that could suffer from bad optics
  • I'm not against systematically truth-seeking discussions of sensitive subjects, just against having them in public in a way that's associated with the rationalism brand

Comment by steven0461 on Forecasting Thread: AI Timelines · 2020-08-24T05:45:21.411Z · score: 15 (5 votes) · LW · GW

Here's my prediction:

To the extent that it differs from others' predictions, probably the most important factor is that I think even if AGI is hard, there are a number of ways in which human civilization could become capable of doing almost arbitrarily hard things, like through human intelligence enhancement or sufficiently transformative narrow AI. I think that means the question is less about how hard AGI is and more about general futurism than most people think. It's moderately hard for me to imagine how business as usual could go on for the rest of the century, but who knows.

Comment by steven0461 on What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse? · 2020-08-10T18:59:03.205Z · score: 5 (3 votes) · LW · GW

"Acausalism" works, but might be confused with the idea that acausal dependence matters at all, or with other philosophical doctrines that deny causality in some sense.

I'm not sure whether being located in a place is a different thing from the place subjunctively depending on your behavior.

Some more ideas: "outofreachism" (closest to "longtermism"), "extrauniversalism", "subjunctive dependentism" (hardest to strawman), "elsewherism", "spooky axiology at a distance"

Comment by steven0461 on My Dating Plan ala Geoffrey Miller · 2020-07-21T21:01:13.632Z · score: 4 (2 votes) · LW · GW

I don't think anyone understands the phrase "rationalist community" as implying a claim that its members don't sometimes allow practical considerations to affect which topics they remain silent on. I don't advocate that people leave out good points merely for being inconvenient to the case they're making, optimizing for the audience to believe some claim regardless of the truth of that claim, as suggested by the prosecutor analogy. I advocate that people leave out good points for being relatively unimportant and predictably causing (part of) the audience to be harmfully irrational. I.e., if you saw someone other than the defendant commit the murder, then say that, but don't start talking about how ugly the judge's children are, even if you think the ugliness of the judge's children slightly helped inspire the real murderer. We can disagree about which discussions are more like talking about whether you saw someone else commit the murder and which discussions are more like talking about how ugly the judge's children are.

Comment by steven0461 on My Dating Plan ala Geoffrey Miller · 2020-07-21T17:02:50.284Z · score: 10 (2 votes) · LW · GW

I think of my team as being "Team Shared Maps That Reflect The Territory But With a Few Blank Spots, Subject to Cautious Private Discussion, Where Depicting the Territory Would Have Caused the Maps to be Burned". I don't think calling it "Team Seek Power For The Greater Good" is a fair characterization both because the Team is scrupulous not to draw fake stuff on the map and because the Team does not seek power for itself but rather seeks for it to be possible for true ideas to have influence regardless of what persons are associated with the true ideas.

Comment by steven0461 on My Dating Plan ala Geoffrey Miller · 2020-07-20T16:25:18.497Z · score: 2 (1 votes) · LW · GW

As I see it, we've had this success partly because many of us have been scrupulous about not being needlessly offensive. (Bostrom is a good example here.) The rationalist brand is already weak (e.g. search Twitter for relevant terms), and if LessWrong had actually tried to have forthright discussions of every interesting topic, that might well have been fatal.

Comment by steven0461 on My Dating Plan ala Geoffrey Miller · 2020-07-20T00:59:40.411Z · score: 3 (2 votes) · LW · GW
I think that negative low-level associations really matter if you're trying to be a mass movement and scale, like a political movement.

Many of the world's smartest, most competent, and most influential people are ideologues. This probably includes whoever ends up developing and controlling advanced technologies. It would be nice to be able to avoid such people dismissing our ideas out of hand. You may not find them impressive or expect them to make intellectual progress on rationality, but for such progress to matter, the ideas have to be taken seriously outside LW at some point. I guess I don't understand the case against caution in this area, so long as the cost is only having to avoid some peripheral topics instead of adopting or promoting false beliefs.

Comment by steven0461 on Cryonics without freezers: resurrection possibilities in a Big World · 2020-07-06T17:57:01.678Z · score: 5 (3 votes) · LW · GW

I updated downward somewhat on the sanity of our civilization, but not to an extremely low value or from a high value. That update justifies only a partial update on the sanity of the average human civilization (maybe the problem is specific to our history and culture), which justifies only a partial update on the sanity of the average civilization (maybe the problem is specific to humans), which justifies only a partial update on the sanity of outcomes (maybe achieving high sanity is really easy or hard). So all things considered (aside from your second paragraph) it doesn't seem like it justifies, say, doubling the amount of worry about these things.

Comment by steven0461 on [META] Building a rationalist communication system to avoid censorship · 2020-06-24T17:46:13.791Z · score: 4 (2 votes) · LW · GW
Maybe restrict viewing to people with enough less wrong karma.

This is much better than nothing, but it would be much better still for a trusted person to hand-pick people who have strongly demonstrated both the ability to avoid posting pointlessly disreputable material and the unwillingness to use such material in reputational attacks.

Comment by steven0461 on [META] Building a rationalist communication system to avoid censorship · 2020-06-24T17:35:31.586Z · score: 4 (2 votes) · LW · GW

I wonder what would happen if a forum had a GPT bot making half the posts, for plausible deniability. (It would probably make things worse. I'm not sure.)

Comment by steven0461 on steven0461's Shortform Feed · 2020-06-24T17:24:05.996Z · score: 4 (2 votes) · LW · GW

There's been some discussion of tradeoffs between a group's ability to think together and its safety from reputational attacks. Both of these seem pretty essential to me, so I wish we'd move in the direction of a third option: recognizing public discourse on fraught topics as unavoidably farcical as well as often useless, moving away from the social norm of acting as if a consideration exists if and only if there's a legible Post about it, building common knowledge of rationality and strategic caution among small groups, and in general becoming skilled at being esoteric without being dishonest or going crazy in ways that would have been kept in check by larger audiences. I think people underrate this approach because they understandably want to be thought gladiators flying truth as a flag. I'm more confident of the claim that we should frequently acknowledge the limits of public discourse than the other claims here.

Comment by steven0461 on Superexponential Historic Growth, by David Roodman · 2020-06-17T00:10:43.523Z · score: 13 (4 votes) · LW · GW

The main part I disagree with is the claim that resource shortages may halt or reverse growth at sub-Dyson-sphere scales. I don't know of any (post)human need that seems like it might require something other than matter, energy, and ingenuity to fulfill. There's a huge amount of matter and energy in the solar system and a huge amount of room to get more value out of any fixed amount.

(If "resource" is interpreted broadly enough to include "freedom from the side effects of unaligned superintelligence", then sure.)

Comment by steven0461 on Have epistemic conditions always been this bad? · 2020-03-02T21:22:41.536Z · score: 2 (3 votes) · LW · GW
Even in private, in today's environment I'd be afraid to talk about some of the object-level things because I can't be sure you're not a true believer in some of those issues and try to "cancel" me for my positions or even my uncertainties.

This seems like a problem we could mitigate with the right kinds of information exchange. E.g., I'd probably be willing to make a "no canceling anyone" promise depending on wording. Creating networks of trust around this is part of what I meant by "epistemic prepping" upthread.

Comment by steven0461 on Open & Welcome Thread - February 2020 · 2020-02-28T01:57:28.737Z · score: 2 (1 votes) · LW · GW

I don't know what the reasons are off the top of my head. I'm not saying the probability rise caused most of the stock market fall, just that it has to be taken into account as a nonzero part of why Wei won his 1 in 8 bet.

Comment by steven0461 on Open & Welcome Thread - February 2020 · 2020-02-28T00:37:34.958Z · score: 6 (3 votes) · LW · GW

If the market is genuinely this beatable, it seems important for the rationalist/EA/forecaster cluster to take advantage of future such opportunities in an organized way, even if it just means someone setting up a Facebook group or something.

(edit: I think the evidence, while impressive, is a little weaker than it seems at first glance, because my impression from Metaculus is that the probability of the virus becoming widespread has gotten higher in recent days for reasons that look unrelated to your point about what the economic implications of a widespread virus would be.)

Comment by steven0461 on Have epistemic conditions always been this bad? · 2020-02-18T19:18:38.789Z · score: 1 (2 votes) · LW · GW

Probably it makes more sense to prepare for scenarios where ideological fanaticism is widespread but isn't wielding government power.

Comment by steven0461 on Have epistemic conditions always been this bad? · 2020-02-17T21:35:40.366Z · score: 10 (6 votes) · LW · GW

I think it makes sense to take an "epistemic prepper" perspective. What precautions could one take in advance to make sure that, if the discourse became dominated by militant flat earth fanatics, round earthers could still reason together, coordinate, and trust each other? What kinds of institutions would have made it easier for a core of sanity to survive through, say, 30s Germany or 60s China? For example, would it make sense to have an agreed-upon epistemic "fire alarm"?

Comment by steven0461 on Preliminary thoughts on moral weight · 2020-01-13T20:07:50.718Z · score: 4 (2 votes) · LW · GW

As usual, this makes me wish for UberFact or some other way of tracking opinion clusters.

Comment by steven0461 on Are "superforecasters" a real phenomenon? · 2020-01-11T19:16:19.068Z · score: 4 (2 votes) · LW · GW

From participating on Metaculus I certainly don't get the sense that there are people who make uncannily good predictions. If you compare the community prediction to the Metaculus prediction, it looks like there's a 0.14 difference in average log score, which I guess means a combination of the best predictors tends to put e^(0.14) or 1.15 times as much probability on the correct answer as the time-weighted community median. (The postdiction is better, but I guess subject to overfitting?) That's substantial, but presumably the combination of the best predictors is better than every individual predictor. The Metaculus prediction also seems to be doing a lot worse than the community prediction on recent questions, so I don't know what to make of that. I suspect that, while some people are obviously better at forecasting than others, the word "superforecasters" has no content outside of "the best forecasters" and is just there to make the field of research sound more exciting.
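(A quick check of the arithmetic above, assuming the scores are natural-log scores of the probability assigned to the correct answer:)

```python
import math

# A difference in average log score corresponds to a ratio of geometric-mean
# probabilities placed on the correct answer.
delta = 0.14                      # Metaculus prediction minus community prediction
print(round(math.exp(delta), 2))  # 1.15, i.e. ~15% more probability on the right answer
```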

Comment by steven0461 on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-06T20:56:37.176Z · score: 12 (3 votes) · LW · GW
Would your views on speaking truth to power change if the truth were 2x less expensive than you currently think it is? 10x? 100x?

Maybe not; probably; yes.

Followup question: have you considered performing an experiment to test whether the consequences of speech are as dire as you currently think? I think I have more data than you! (We probably mostly read the same blogs, but I've done field work.)

Most of the consequences I'm worried about are bad effects on the discourse. I don't know what experiment I'd do to figure those out. I agree you have more data than me, but you probably have 2x the personal data instead of 10x the personal data, and most relevant data is about other people because there are more of them. Personal consequences are more amenable to experiment than discourse consequences, but I already have lots of low-risk data here, and high-risk data would carry high risk and not be qualitatively more informative. (Doing an Experiment here doesn't teach you qualitatively different things than watching the experiments that the world constantly does.)

Can you be a little more specific? "Discredited" is a two-place function (discredited to whom).

Discredited to intellectual elites, who are not only imperfectly rational, but get their information via people who are imperfectly rational, who in turn etc.

"Speak the truth, even if your voice trembles" isn't a literal executable decision procedure—if you programmed your AI that way, it might get stabbed. But a culture that has "Speak the truth, even if your voice trembles" as a slogan might—just might be able to do science or better—to get the goddamned right answereven when the local analogue of the Pope doesn't like it.

It almost sounds like you're saying we should tell people they should always speak the truth even though it is not the case that people should always speak the truth, because telling people they should always speak the truth has good consequences. Hm!

I don't like the "speak the truth even if your voice trembles" formulation. It doesn't make it clear that the alternative to speaking the truth, instead of lying, is not speaking. It also suggests an ad hominem theory of why people aren't speaking (fear, presumably of personal consequences) that isn't always true. To me, this whole thing is about picking battles versus not picking battles rather than about truth versus falsehood. Even though if you pick your battles it means a non-random set of falsehoods remains uncorrected, picking battles is still pro-truth.

If we should judge the Platonic math by how it would be interpreted in practice, then we should also judge "speak the truth even if your voice trembles" by how it would be interpreted in practice. I'm worried the outcome would be people saying "since we talk rationally about the Emperor here, let's admit that he's missing one shoe", regardless of whether the emperor is missing one shoe, is fully dressed, or has no clothes at all. All things equal, being less wrong is good, but sometimes being less wrong means being more confident that you're not wrong at all, even though you are wrong at all.

(By the way, I think of my position here as having a lower burden of proof than yours, because the underlying issue is not just who is making the right tradeoffs, but whether making different tradeoffs than you is a good reason to give up on a community altogether.)

Comment by steven0461 on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-05T18:20:21.593Z · score: 11 (3 votes) · LW · GW

Would your views on speaking truth to power change if the truth were 2x as offensive as you currently think it is? 10x? 100x? (If so, are you sure that's not why you don't think the truth is more offensive than you currently think it is?) Immaterial souls are stabbed all the time in the sense that their opinions are discredited.

Comment by steven0461 on Since figuring out human values is hard, what about, say, monkey values? · 2020-01-01T23:25:25.465Z · score: 1 (1 votes) · LW · GW

Given that animals don't act like expected utility maximizers, what do you mean when you talk about their values? For humans, you can ground a definition of "true values" in philosophical reflection (and reflection about how that reflection relates to their true values, and so on), but non-human animals can't do philosophy.

Comment by steven0461 on Don't Double-Crux With Suicide Rock · 2020-01-01T20:51:47.651Z · score: 27 (9 votes) · LW · GW

Honest rational agents can still disagree if the fact that they're all honest and rational isn't common knowledge.

Comment by steven0461 on Speaking Truth to Power Is a Schelling Point · 2019-12-30T21:23:26.134Z · score: 28 (8 votes) · LW · GW

If the slope is so slippery, how come we've been standing on it for over a decade? (Or do you think we're sliding downward at a substantial speed? If so, how can we turn this into a disagreement about concrete predictions about what LW will be like in 5 years?)

Comment by steven0461 on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2019-12-28T23:28:18.017Z · score: 14 (4 votes) · LW · GW
Okay, but the reason you think AI safety/x-risk is important is because twenty years ago, people like Eliezer Yudkowsky and Nick Bostrom were trying to do systematically correct reasoning about the future, noticed that the alignment problem looked really important, and followed that line of reasoning where it took them—even though it probably looked "tainted" to the serious academics of the time. (The robot apocalypse is nigh? Pftt, sounds like science fiction.)

Those subjects were always obviously potentially important, so I don't see this as evidence against a policy of picking one's battles by only arguing for unpopular truths that are obviously potentially important.

Comment by steven0461 on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2019-12-28T23:16:27.836Z · score: 11 (3 votes) · LW · GW
"Take it to /r/TheMotte, you guys" is not that onerous of a demand, and it's a demand I'm happy to support

I'd agree having political discussions in some other designated place online is much less harmful than having them here, but on the other hand, a quick look at what's being posted on the Motte doesn't support the idea that rationalist politics discussion has any importance for sanity on more general topics. If none of it had been posted, as far as I can tell, the rationalist community wouldn't have been any more wrong on any major issue.

Comment by steven0461 on Dialogue on Appeals to Consequences · 2019-12-05T03:20:38.975Z · score: 6 (3 votes) · LW · GW

Sharing reasoning is obviously normally good, but we obviously live in a world with lots of causally important actors who don't always respond rationally to arguments. There are cases, like the grandparent comment, where one is justified in worrying that an argument would make people stupid in a particular way, and one can avoid this problem by not making the argument. Doing so is importantly different from filtering out arguments for causing a justified update against one's side, and even more importantly different from anything similar to what pops into people's minds when they hear "psychological manipulation". If I'm worried that someone with a finger on some sort of hypertech button may avoid learning about some crucial set of thoughts about what circumstances it's good to press hypertech buttons under because they've always vaguely heard that set of thoughts is disreputable and so never looked into it, I don't think your last paragraph is a fair response to that. I think I should tap out of this discussion because I feel like the more-than-one-sentence-at-a-time medium is nudging it more toward rhetoric than debugging, but let's still talk some time.

Comment by steven0461 on Dialogue on Appeals to Consequences · 2019-12-05T00:01:45.377Z · score: 5 (4 votes) · LW · GW

Consider the idea that the prospect of advanced AI implies the returns from stopping global warming are much smaller than you might otherwise think. I think this is a perfectly correct point, but I'm also willing to never make it, because a lot of people will respond by updating against the prospect of advanced AI, and I care a lot more about people having correct opinions on advanced AI than on the returns from stopping global warming.

Comment by steven0461 on Dialogue on Appeals to Consequences · 2019-12-04T21:30:52.997Z · score: 14 (4 votes) · LW · GW
we need to figure out how to think together

This is probably not the crux of our disagreement, but I think we already understand perfectly well how to think together and we're limited by temperament rather than understanding. I agree that if we're trying to think about how to think together we can treat no censorship as the default case.

worthless cowards

If cowardice means fear of personal consequences, this doesn't ring true as an ad hominem. Speaking without any filter is fun and satisfying and consistent with a rationalist pro-truth self-image and other-image. The reason why I mostly don't do it is because I'd feel guilt about harming the discourse. This motivation doesn't disappear in cases where I feel safe from personal consequences, e.g. because of anonymity.

who just assume as if it were a law of nature that discourse is impossible

I don't know how you want me to respond to this. Obviously I think my sense that real discourse on fraught topics is impossible is based on extensively observing attempts at real discourse on fraught topics being fake. I suspect your sense that real discourse is possible is caused by you underestimating how far real discourse would diverge from fake discourse because you assume real discourse is possible and interpret too much existing discourse as real discourse.

But what if you actually need common knowledge for something?

Then that's a reason to try to create common knowledge, whether privately or publicly. I think ordinary knowledge is fine most of the time, though.

Comment by steven0461 on Dialogue on Appeals to Consequences · 2019-12-03T18:52:52.862Z · score: 2 (1 votes) · LW · GW
The proposition I actually want to defend is, "Private deliberation is extremely dependent on public information." This seems obviously true to me. When you get together with your trusted friends in private to decide which policies to support, that discussion is mostly going to draw on evidence and arguments that you've heard in public discourse, rather than things you've directly seen and verified for yourself.

Most of the harm here comes not from public discourse being filtered in itself, but from people updating on filtered public discourse as if it were unfiltered. This makes me think it's better to get people to realize that public discourse isn't going to contain all the arguments than to get them to include all the arguments in public discourse.

Comment by steven0461 on Deleted · 2019-10-25T20:25:10.715Z · score: 5 (2 votes) · LW · GW

Expressing unpopular opinions can be good and necessary, but doing so merely because someone asked you to is foolish. Have some strategic common sense.

Comment by steven0461 on Deleted · 2019-10-25T20:14:52.400Z · score: 12 (6 votes) · LW · GW

(c) unpopular ideas hurt each other by association, (d) it's hard to find people who can be trusted to have good unpopular ideas but not bad unpopular ideas, (e) people are motivated by getting credit for their ideas, (f) people don't seem good at group writing curation generally

Comment by steven0461 on The Power to Solve Climate Change · 2019-09-15T17:38:39.211Z · score: 2 (1 votes) · LW · GW

Even if you assume no climate policy at all and if you make various other highly pessimistic assumptions about the economy (RCP 8.5), I think it's still far under 10% conditional on those assumptions, though it's tricky to extract this kind of estimate.

Comment by steven0461 on The Power to Solve Climate Change · 2019-09-13T21:37:21.744Z · score: 2 (1 votes) · LW · GW
We're predicting it to be as high as a 6°C warming by 2100, so it's actually a huge fluctuation.

6°C is something like a worst-case scenario.

Comment by steven0461 on What are the best resources for examining the evidence for anthropogenic climate change? · 2019-08-07T02:04:19.092Z · score: 4 (3 votes) · LW · GW

The question you should ask for policy purposes is how much the temperature would rise in response to different possible increases in CO2. It's basically a matter of estimating a continuous parameter that nobody thinks is zero and whose space of possible values has no natural dividing line between "yes" and "no". Attribution of past warming partly overlaps with the "how much" question and partly just distracts from it. That said, I would just read the relevant sections of the latest IPCC report.

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-11T20:31:30.483Z · score: 6 (3 votes) · LW · GW

Online posts function as hard-to-fake signals of readiness to invest verbal energy into arguing for one side of an issue. This gives readers the feeling they won't lose face if they adopt the post's opinion, which overlaps with the feeling that the post's opinion is true. This function sometimes makes posts longer than would be socially optimal.

Comment by steven0461 on FB/Discord Style Reacts · 2019-07-04T17:05:35.533Z · score: 7 (4 votes) · LW · GW

"This is wrong, harmful, and/or in bad faith, but I expect arguing this point against determined verbally clever opposition would be too costly."

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-03T05:18:54.104Z · score: 3 (2 votes) · LW · GW

I guess I wasn't necessarily thinking of them as exact duplicates. If there are 10^100 ways the 21st century can go, and for some reason each of the resulting civilizations wants to know how all the other civilizations came out when the dust settled, each civilization ends up having a lot of other civilizations to think about. In this scenario, an effect on the far future still seems to me to be "only" a million times as big as the same effect on the 21st century, only now the stuff isomorphic to the 21st century is spread out across many different far future civilizations instead of one.

Maybe 1/1,000,000 is still a lot, but I'm not sure how to deal with uncertainty here. If I just take the expectation of the fraction of the universe isomorphic to the 21st century, I might end up with some number like 1/10,000,000 (because I'm 10% sure of the 1/1,000,000 claim) and still conclude the relative importance of the far future is huge but hugely below infinity.
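(A minimal sketch of the arithmetic in this thread; the numbers are the illustrative figures from the comments, and dividing by the expected fraction is just the naive move described above, not a claim about the right way to handle the uncertainty.)

```python
# Naive point estimate: 1 in a million of far-future stuff is isomorphic to the
# 21st century, so the same influence on the far future is "only" a million
# times as important.
fraction_if_true = 1 / 1_000_000
print(1 / fraction_if_true)                      # 1,000,000

# Taking an expectation over uncertainty in that claim (10% credence it's right):
p_claim = 0.1
expected_fraction = p_claim * fraction_if_true   # 1e-7
print(1 / expected_fraction)                     # 10,000,000: huge, but hugely below infinity
```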

Comment by steven0461 on Being Wrong Doesn't Mean You're Stupid and Bad (Probably) · 2019-07-01T20:52:15.965Z · score: 13 (5 votes) · LW · GW

If you don't just learn what someone's opinion is, but also how they arrived at it and how confidently they hold it, that can be much stronger evidence that they're stupid and bad. Arguably over half the arguments one encounters in the wild could never be made in good faith.

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T20:11:33.739Z · score: 3 (2 votes) · LW · GW

How much should I worry about the unilateralist's curse when making arguments that it seems like some people should have already thought of and that they might have avoided making because they anticipated side effects that I don't understand?

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T20:06:55.701Z · score: 2 (1 votes) · LW · GW
based on the details of the estimates that doesn't look to me like it's just bad luck

For example:

  • There's a question about whether the S&P 500 will end the year higher than it began. When the question closed, the index had increased from 2500 to 2750. The index has increased most years historically. But the Metaculus estimate was about 50%.
  • On this question, at the time of closing, 538's estimate was 99+% and the Metaculus estimate was 66%. I don't think Metaculus had significantly different information than 538.

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T19:57:43.606Z · score: 3 (2 votes) · LW · GW

A naive argument says the influence of our actions on the far future is ~infinity times as intrinsically important as the influence of our actions on the 21st century because the far future contains ~infinity times as much stuff. One limit to this argument is that if 1/1,000,000 of the far future stuff is isomorphic to the 21st century (e.g. simulations), then having an influence on the far future is "only" a million times as important as having the exact same influence on the 21st century. (Of course, the far future is a very different place so our influence will actually be of a very different nature.) Has anyone tried to get a better abstract understanding of this point or tried to quantify how much it matters in practice?

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T19:11:32.633Z · score: 7 (3 votes) · LW · GW

Newcomb's Problem sometimes assumes Omega is right 99% of the time. What is that conditional on? If it's just a base rate (Omega is right about 99% of people), what happens when you condition on having particular thoughts and modeling the problem on a particular level? (Maybe there exists a two-boxing lesion and you can become confident you don't have it.) If it's 99% conditional on anything you might think, e.g. because Omega has a full model of you but gets hit by a cosmic ray 1% of the time, isn't it clearer to just assume Omega gets it 100% right? Is this explained somewhere?

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T19:07:59.869Z · score: 2 (1 votes) · LW · GW

I think one could greatly outperform the best publicly available forecasts through collaboration between 1) some people good at arguing and looking for info and 2) someone good at evaluating arguments and aggregating evidence. Maybe just a forum thread where a moderator keeps a percentage estimate updated in the top post.

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T17:01:35.959Z · score: 5 (3 votes) · LW · GW

I would normally trust it more, but it's recently been doing way worse than the Metaculus crowd median (average log score 0.157 vs 0.117 over the sample of 20 yes/no questions that have resolved for me), and based on the details of the estimates that doesn't look to me like it's just bad luck. It does better on the whole set of questions, but I think still not much better than the median; I can't find the analysis page at the moment.

Comment by steven0461 on steven0461's Shortform Feed · 2019-06-30T02:46:07.865Z · score: 9 (5 votes) · LW · GW

Considering how much people talk about superforecasters, how come there aren't more public sources of superforecasts? There's prediction markets and sites like ElectionBettingOdds that make it easier to read their odds as probabilities, but only for limited questions. There's Metaculus, but it only shows a crowd median (with a histogram of predictions) and in some cases the result of an aggregation algorithm that I don't trust very much. There's PredictionBook, but it's not obvious how to extract a good single probability estimate from it. Both prediction markets and Metaculus are competitive and disincentivize public cooperation. What else is there if I want to know something like what the probability of war with Iran is?

Comment by steven0461 on Writing children's picture books · 2019-06-27T22:35:21.874Z · score: 9 (4 votes) · LW · GW
how much of the current population would end up underwater if they didn’t move

(and if they didn't adapt in other ways, like by building sea walls)

Comment by steven0461 on Writing children's picture books · 2019-06-27T20:18:33.986Z · score: 9 (2 votes) · LW · GW
I think I’ve heard that, with substantial mitigation effort, the temperature difference might be 2 degrees Celsius from now until the end of the century.

Usually people mean from pre-industrial times, not from now. 2 degrees from pre-industrial times means about 1 degree from now.