Posts

steven0461's Shortform Feed 2019-06-30T02:42:13.858Z · score: 36 (7 votes)
Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet 2018-07-11T02:59:12.278Z · score: 28 (19 votes)
Meetup : San Jose Meetup: Park Day (X) 2016-11-28T02:46:20.651Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (IX), 3pm 2016-11-01T15:40:19.623Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (VIII) 2016-09-06T00:47:23.680Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (VII) 2016-08-15T01:05:00.237Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (VI) 2016-07-25T02:11:44.237Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (V) 2016-07-04T18:38:01.992Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (IV) 2016-06-15T20:29:04.853Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (III) 2016-05-09T20:10:55.447Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day (II) 2016-04-20T06:23:28.685Z · score: 0 (1 votes)
Meetup : San Jose Meetup: Park Day 2016-03-30T04:39:09.532Z · score: 1 (2 votes)
Meetup : Amsterdam 2013-11-12T09:12:31.710Z · score: 4 (5 votes)
Bayesian Adjustment Does Not Defeat Existential Risk Charity 2013-03-17T08:50:02.096Z · score: 48 (48 votes)
Meetup : Chicago Meetup 2011-09-28T04:29:35.777Z · score: 3 (4 votes)
Meetup : Chicago Meetup 2011-07-07T15:28:57.969Z · score: 2 (3 votes)
PhilPapers survey results now include correlations 2010-11-09T19:15:47.251Z · score: 6 (7 votes)
Chicago Meetup 11/14 2010-11-08T23:30:49.015Z · score: 8 (9 votes)
A Fundamental Question of Group Rationality 2010-10-13T20:32:08.085Z · score: 10 (11 votes)
Chicago/Madison Meetup 2010-07-15T23:30:15.576Z · score: 9 (10 votes)
Swimming in Reasons 2010-04-10T01:24:27.787Z · score: 17 (18 votes)
Disambiguating Doom 2010-03-29T18:14:12.075Z · score: 16 (17 votes)
Taking Occam Seriously 2009-05-29T17:31:52.268Z · score: 28 (28 votes)
Open Thread: May 2009 2009-05-01T16:16:35.156Z · score: 4 (5 votes)
Eliezer Yudkowsky Facts 2009-03-22T20:17:21.220Z · score: 149 (225 votes)
The Wrath of Kahneman 2009-03-09T12:52:41.695Z · score: 25 (26 votes)
Lies and Secrets 2009-03-08T14:43:22.152Z · score: 14 (25 votes)

Comments

Comment by steven0461 on Cryonics without freezers: resurrection possibilities in a Big World · 2020-07-06T17:57:01.678Z · score: 3 (2 votes) · LW · GW

I updated downward somewhat on the sanity of our civilization, but not to an extremely low value or from a high value. That update justifies only a partial update on the sanity of the average human civilization (maybe the problem is specific to our history and culture), which justifies only a partial update on the sanity of the average civilization (maybe the problem is specific to humans), which justifies only a partial update on the sanity of outcomes (maybe achieving high sanity is really easy or hard). So all things considered (aside from your second paragraph) it doesn't seem like it justifies, say, doubling the amount of worry about these things.

Comment by steven0461 on [META] Building a rationalist communication system to avoid censorship · 2020-06-24T17:46:13.791Z · score: 4 (2 votes) · LW · GW
Maybe restrict viewing to people with enough Less Wrong karma.

This is much better than nothing, but it would be much better still for a trusted person to hand-pick people who have strongly demonstrated both the ability to avoid posting pointlessly disreputable material and the unwillingness to use such material in reputational attacks.

Comment by steven0461 on [META] Building a rationalist communication system to avoid censorship · 2020-06-24T17:35:31.586Z · score: 4 (2 votes) · LW · GW

I wonder what would happen if a forum had a GPT bot making half the posts, for plausible deniability. (It would probably make things worse. I'm not sure.)

Comment by steven0461 on steven0461's Shortform Feed · 2020-06-24T17:24:05.996Z · score: 4 (2 votes) · LW · GW

There's been some discussion of tradeoffs between a group's ability to think together and its safety from reputational attacks. Both of these seem pretty essential to me, so I wish we'd move in the direction of a third option: recognizing public discourse on fraught topics as unavoidably farcical as well as often useless, moving away from the social norm of acting as if a consideration exists if and only if there's a legible Post about it, building common knowledge of rationality and strategic caution among small groups, and in general becoming skilled at being esoteric without being dishonest or going crazy in ways that would have been kept in check by larger audiences. I think people underrate this approach because they understandably want to be thought gladiators flying truth as a flag. I'm more confident of the claim that we should frequently acknowledge the limits of public discourse than the other claims here.

Comment by steven0461 on Superexponential Historic Growth, by David Roodman · 2020-06-17T00:10:43.523Z · score: 13 (4 votes) · LW · GW

The main part I disagree with is the claim that resource shortages may halt or reverse growth at sub-Dyson-sphere scales. I don't know of any (post)human need that seems like it might require something other than matter, energy, and ingenuity to fulfill. There's a huge amount of matter and energy in the solar system and a huge amount of room to get more value out of any fixed amount.

(If "resource" is interpreted broadly enough to include "freedom from the side effects of unaligned superintelligence", then sure.)

Comment by steven0461 on Have epistemic conditions always been this bad? · 2020-03-02T21:22:41.536Z · score: 3 (2 votes) · LW · GW
Even in private, in today's environment I'd be afraid to talk about some of the object-level things because I can't be sure you're not a true believer in some of those issues who would try to "cancel" me for my positions or even my uncertainties.

This seems like a problem we could mitigate with the right kinds of information exchange. E.g., I'd probably be willing to make a "no canceling anyone" promise depending on wording. Creating networks of trust around this is part of what I meant by "epistemic prepping" upthread.

Comment by steven0461 on Open & Welcome Thread - February 2020 · 2020-02-28T01:57:28.737Z · score: 2 (1 votes) · LW · GW

I don't know what the reasons are off the top of my head. I'm not saying the probability rise caused most of the stock market fall, just that it has to be taken into account as a nonzero part of why Wei won his 1 in 8 bet.

Comment by steven0461 on Open & Welcome Thread - February 2020 · 2020-02-28T00:37:34.958Z · score: 6 (3 votes) · LW · GW

If the market is genuinely this beatable, it seems important for the rationalist/EA/forecaster cluster to take advantage of future such opportunities in an organized way, even if it just means someone setting up a Facebook group or something.

(edit: I think the evidence, while impressive, is a little weaker than it seems at first glance, because my impression from Metaculus is that the probability of the virus becoming widespread has gotten higher in recent days for reasons that look unrelated to your point about what the economic implications of a widespread virus would be.)

Comment by steven0461 on Have epistemic conditions always been this bad? · 2020-02-18T19:18:38.789Z · score: 2 (1 votes) · LW · GW

Probably it makes more sense to prepare for scenarios where ideological fanaticism is widespread but isn't wielding government power.

Comment by steven0461 on Have epistemic conditions always been this bad? · 2020-02-17T21:35:40.366Z · score: 12 (5 votes) · LW · GW

I think it makes sense to take an "epistemic prepper" perspective. What precautions could one take in advance to make sure that, if the discourse became dominated by militant flat earth fanatics, round earthers could still reason together, coordinate, and trust each other? What kinds of institutions would have made it easier for a core of sanity to survive through, say, 30s Germany or 60s China? For example, would it make sense to have an agreed-upon epistemic "fire alarm"?

Comment by steven0461 on Preliminary thoughts on moral weight · 2020-01-13T20:07:50.718Z · score: 4 (2 votes) · LW · GW

As usual, this makes me wish for UberFact or some other way of tracking opinion clusters.

Comment by steven0461 on Are "superforecasters" a real phenomenon? · 2020-01-11T19:16:19.068Z · score: 4 (2 votes) · LW · GW

From participating on Metaculus I certainly don't get the sense that there are people who make uncannily good predictions. If you compare the community prediction to the Metaculus prediction, it looks like there's a 0.14 difference in average log score, which I guess means the combination of the best predictors tends to put e^0.14 ≈ 1.15 times as much probability on the correct answer as the time-weighted community median. (The postdiction is better, but I guess subject to overfitting?) That's substantial, but presumably the combination of the best predictors is better than every individual predictor. The Metaculus prediction also seems to be doing a lot worse than the community prediction on recent questions, so I don't know what to make of that. I suspect that, while some people are obviously better at forecasting than others, the word "superforecasters" has no content beyond "the best forecasters" and is just there to make the field of research sound more exciting.
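
To spell out the conversion I'm doing there (assuming these are average natural-log scores, which I'm not certain of), a minimal sketch:

    import math

    # A gap in average natural-log score exponentiates to a multiplicative edge
    # in the probability placed on the correct answer.
    def probability_ratio(log_score_gap):
        return math.exp(log_score_gap)

    print(probability_ratio(0.14))  # ~1.15, the figure above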

Comment by steven0461 on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-06T20:56:37.176Z · score: 12 (3 votes) · LW · GW
Would your views on speaking truth to power change if the truth were 2x less expensive than you currently think it is? 10x? 100x?

Maybe not; probably; yes.

Followup question: have you considered performing an experiment to test whether the consequences of speech are as dire as you currently think? I think I have more data than you! (We probably mostly read the same blogs, but I've done field work.)

Most of the consequences I'm worried about are bad effects on the discourse. I don't know what experiment I'd do to figure those out. I agree you have more data than me, but you probably have 2x the personal data instead of 10x the personal data, and most relevant data is about other people because there are more of them. Personal consequences are more amenable to experiment than discourse consequences, but I already have lots of low-risk data here, and high-risk data would carry high risk and not be qualitatively more informative. (Doing an Experiment here doesn't teach you qualitatively different things than watching the experiments that the world constantly does.)

Can you be a little more specific? "Discredited" is a two-place function (discredited to whom).

Discredited to intellectual elites, who are not only imperfectly rational, but get their information via people who are imperfectly rational, who in turn etc.

"Speak the truth, even if your voice trembles" isn't a literal executable decision procedure—if you programmed your AI that way, it might get stabbed. But a culture that has "Speak the truth, even if your voice trembles" as a slogan might—just might be able to do science or better—to get the goddamned right answereven when the local analogue of the Pope doesn't like it.

It almost sounds like you're saying we should tell people they should always speak the truth even though it is not the case that people should always speak the truth, because telling people they should always speak the truth has good consequences. Hm!

I don't like the "speak the truth even if your voice trembles" formulation. It doesn't make it clear that the alternative to speaking the truth, instead of lying, is not speaking. It also suggests an ad hominem theory of why people aren't speaking (fear, presumably of personal consequences) that isn't always true. To me, this whole thing is about picking battles versus not picking battles rather than about truth versus falsehood. Even though if you pick your battles it means a non-random set of falsehoods remains uncorrected, picking battles is still pro-truth.

If we should judge the Platonic math by how it would be interpreted in practice, then we should also judge "speak the truth even if your voice trembles" by how it would be interpreted in practice. I'm worried the outcome would be people saying "since we talk rationally about the Emperor here, let's admit that he's missing one shoe", regardless of whether the emperor is missing one shoe, is fully dressed, or has no clothes at all. All things equal, being less wrong is good, but sometimes being less wrong means being more confident that you're not wrong at all, even though you are wrong at all.

(By the way, I think of my position here as having a lower burden of proof than yours, because the underlying issue is not just who is making the right tradeoffs, but whether making different tradeoffs than you is a good reason to give up on a community altogether.)

Comment by steven0461 on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-05T18:20:21.593Z · score: 11 (3 votes) · LW · GW

Would your views on speaking truth to power change if the truth were 2x as offensive as you currently think it is? 10x? 100x? (If so, are you sure that's not why you don't think the truth is more offensive than you currently think it is?) Immaterial souls are stabbed all the time in the sense that their opinions are discredited.

Comment by steven0461 on Since figuring out human values is hard, what about, say, monkey values? · 2020-01-01T23:25:25.465Z · score: 1 (1 votes) · LW · GW

Given that animals don't act like expected utility maximizers, what do you mean when you talk about their values? For humans, you can ground a definition of "true values" in philosophical reflection (and reflection about how that reflection relates to their true values, and so on), but non-human animals can't do philosophy.

Comment by steven0461 on Don't Double-Crux With Suicide Rock · 2020-01-01T20:51:47.651Z · score: 27 (9 votes) · LW · GW

Honest rational agents can still disagree if the fact that they're all honest and rational isn't common knowledge.

Comment by steven0461 on Speaking Truth to Power Is a Schelling Point · 2019-12-30T21:23:26.134Z · score: 28 (8 votes) · LW · GW

If the slope is so slippery, how come we've been standing on it for over a decade? (Or do you think we're sliding downward at a substantial speed? If so, how can we turn this into a disagreement about concrete predictions about what LW will be like in 5 years?)

Comment by steven0461 on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2019-12-28T23:28:18.017Z · score: 9 (3 votes) · LW · GW
Okay, but the reason you think AI safety/x-risk is important is because twenty years ago, people like Eliezer Yudkowsky and Nick Bostrom were trying to do systematically correct reasoning about the future, noticed that the alignment problem looked really important, and followed that line of reasoning where it took them—even though it probably looked "tainted" to the serious academics of the time. (The robot apocalypse is nigh? Pftt, sounds like science fiction.)

Those subjects were always obviously potentially important, so I don't see this as evidence against a policy of picking one's battles by only arguing for unpopular truths that are obviously potentially important.

Comment by steven0461 on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2019-12-28T23:16:27.836Z · score: 11 (3 votes) · LW · GW
"Take it to /r/TheMotte, you guys" is not that onerous of a demand, and it's a demand I'm happy to support

I'd agree that having political discussions in some other designated place online is much less harmful than having them here, but on the other hand, a quick look at what's being posted on the Motte doesn't support the idea that rationalist politics discussion has any importance for sanity on more general topics. If none of it had been posted, as far as I can tell, the rationalist community wouldn't have been any more wrong on any major issue.

Comment by steven0461 on Dialogue on Appeals to Consequences · 2019-12-05T03:20:38.975Z · score: 6 (3 votes) · LW · GW

Sharing reasoning is obviously normally good, but we obviously live in a world with lots of causally important actors who don't always respond rationally to arguments. There are cases, like the grandparent comment, where one is justified in worrying that an argument would make people stupid in a particular way, and one can avoid this problem by not making the argument. Doing so is importantly different from filtering out arguments for causing a justified update against one's side, and even more importantly different from anything similar to what pops into people's minds when they hear "psychological manipulation". If I'm worried that someone with a finger on some sort of hypertech button may avoid learning about some crucial set of thoughts about what circumstances it's good to press hypertech buttons under because they've always vaguely heard that set of thoughts is disreputable and so never looked into it, I don't think your last paragraph is a fair response to that. I think I should tap out of this discussion because I feel like the more-than-one-sentence-at-a-time medium is nudging it more toward rhetoric than debugging, but let's still talk some time.

Comment by steven0461 on Dialogue on Appeals to Consequences · 2019-12-05T00:01:45.377Z · score: 5 (4 votes) · LW · GW

Consider the idea that the prospect of advanced AI implies the returns from stopping global warming are much smaller than you might otherwise think. I think this is a perfectly correct point, but I'm also willing to never make it, because a lot of people will respond by updating against the prospect of advanced AI, and I care a lot more about people having correct opinions on advanced AI than on the returns from stopping global warming.

Comment by steven0461 on Dialogue on Appeals to Consequences · 2019-12-04T21:30:52.997Z · score: 14 (4 votes) · LW · GW
we need to figure out how to think together

This is probably not the crux of our disagreement, but I think we already understand perfectly well how to think together and we're limited by temperament rather than understanding. I agree that if we're trying to think about how to think together we can treat no censorship as the default case.

worthless cowards

If cowardice means fear of personal consequences, this doesn't ring true as an ad hominem. Speaking without any filter is fun and satisfying and consistent with a rationalist pro-truth self-image and other-image. The reason I mostly don't do it is that I'd feel guilty about harming the discourse. This motivation doesn't disappear in cases where I feel safe from personal consequences, e.g. because of anonymity.

who just assume as if it were a law of nature that discourse is impossible

I don't know how you want me to respond to this. Obviously I think my sense that real discourse on fraught topics is impossible is based on extensively observing attempts at real discourse on fraught topics being fake. I suspect your sense that real discourse is possible is caused by you underestimating how far real discourse would diverge from fake discourse because you assume real discourse is possible and interpret too much existing discourse as real discourse.

But what if you actually need common knowledge for something?

Then that's a reason to try to create common knowledge, whether privately or publicly. I think ordinary knowledge is fine most of the time, though.

Comment by steven0461 on Dialogue on Appeals to Consequences · 2019-12-03T18:52:52.862Z · score: 2 (1 votes) · LW · GW
The proposition I actually want to defend is, "Private deliberation is extremely dependent on public information." This seems obviously true to me. When you get together with your trusted friends in private to decide which policies to support, that discussion is mostly going to draw on evidence and arguments that you've heard in public discourse, rather than things you've directly seen and verified for yourself.

Most of the harm here comes not from public discourse being filtered in itself, but from people updating on filtered public discourse as if it were unfiltered. This makes me think it's better to get people to realize that public discourse isn't going to contain all the arguments than to get them to include all the arguments in public discourse.

Comment by steven0461 on Deleted · 2019-10-25T20:25:10.715Z · score: 5 (2 votes) · LW · GW

Expressing unpopular opinions can be good and necessary, but doing so merely because someone asked you to is foolish. Have some strategic common sense.

Comment by steven0461 on Deleted · 2019-10-25T20:14:52.400Z · score: 12 (6 votes) · LW · GW

(c) unpopular ideas hurt each other by association, (d) it's hard to find people who can be trusted to have good unpopular ideas but not bad unpopular ideas, (e) people are motivated by getting credit for their ideas, (f) people don't seem good at group writing curation generally

Comment by steven0461 on The Power to Solve Climate Change · 2019-09-15T17:38:39.211Z · score: 2 (1 votes) · LW · GW

Even if you assume no climate policy at all and make various other highly pessimistic assumptions about the economy (RCP 8.5), I think it's still far under 10% conditional on those assumptions, though it's tricky to extract this kind of estimate.

Comment by steven0461 on The Power to Solve Climate Change · 2019-09-13T21:37:21.744Z · score: 2 (1 votes) · LW · GW
We're predicting it to be as high as a 6°C warming by 2100, so it's actually a huge fluctuation.

6°C is something like a worst case scenario.

Comment by steven0461 on What are the best resources for examining the evidence for anthropogenic climate change? · 2019-08-07T02:04:19.092Z · score: 4 (3 votes) · LW · GW

The question you should ask for policy purposes is how much the temperature would rise in response to different possible increases in CO2. It's basically a matter of estimating a continuous parameter that nobody thinks is zero and whose space of possible values has no natural dividing line between "yes" and "no". Attribution of past warming partly overlaps with the "how much" question and partly just distracts from it. That said, I would just read the relevant sections of the latest IPCC report.

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-11T20:31:30.483Z · score: 6 (3 votes) · LW · GW

Online posts function as hard-to-fake signals of readiness to invest verbal energy into arguing for one side of an issue. This gives readers the feeling they won't lose face if they adopt the post's opinion, which overlaps with the feeling that the post's opinion is true. This function sometimes makes posts longer than would be socially optimal.

Comment by steven0461 on FB/Discord Style Reacts · 2019-07-04T17:05:35.533Z · score: 7 (4 votes) · LW · GW

"This is wrong, harmful, and/or in bad faith, but I expect arguing this point against determined verbally clever opposition would be too costly."

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-03T05:18:54.104Z · score: 3 (2 votes) · LW · GW

I guess I wasn't necessarily thinking of them as exact duplicates. If there are 10^100 ways the 21st century can go, and for some reason each of the resulting civilizations wants to know how all the other civilizations came out when the dust settled, each civilization ends up having a lot of other civilizations to think about. In this scenario, an effect on the far future still seems to me to be "only" a million times as big as the same effect on the 21st century, only now the stuff isomorphic to the 21st century is spread out across many different far future civilizations instead of one.

Maybe 1/1,000,000 is still a lot, but I'm not sure how to deal with uncertainty here. If I just take the expectation of the fraction of the universe isomorphic to the 21st century, I might end up with some number like 1/10,000,000 (because I'm 10% sure of the 1/1,000,000 claim) and still conclude the relative importance of the far future is huge but hugely below infinity.
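
Spelling out that toy arithmetic (using the illustrative numbers above, and assuming the fraction is roughly zero in the other 90% of cases):

    # Illustrative numbers only, from the comment above.
    credence = 0.10            # credence in the 1/1,000,000 claim
    fraction_if_true = 1e-6    # fraction of far-future stuff isomorphic to the 21st century
    expected_fraction = credence * fraction_if_true  # 1e-7, i.e. 1/10,000,000
    relative_importance = 1 / expected_fraction      # 1e7: huge, but far below infinity
    print(expected_fraction, relative_importance)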

Comment by steven0461 on Being Wrong Doesn't Mean You're Stupid and Bad (Probably) · 2019-07-01T20:52:15.965Z · score: 13 (5 votes) · LW · GW

If you don't just learn what someone's opinion is, but also how they arrived at it and how confidently they hold it, that can be much stronger evidence that they're stupid and bad. Arguably over half the arguments one encounters in the wild could never be made in good faith.

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T20:11:33.739Z · score: 3 (2 votes) · LW · GW

How much should I worry about the unilateralist's curse when making arguments that it seems like some people should have already thought of and that they might have avoided making because they anticipated side effects that I don't understand?

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T20:06:55.701Z · score: 2 (1 votes) · LW · GW
based on the details of the estimates that doesn't look to me like it's just bad luck

For example:

  • There's a question about whether the S&P 500 will end the year higher than it began. When the question closed, the index had increased from 2500 to 2750. The index has increased most years historically. But the Metaculus estimate was about 50%.
  • On this question, at the time of closing, 538's estimate was 99+% and the Metaculus estimate was 66%. I don't think Metaculus had significantly different information than 538.

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T19:57:43.606Z · score: 3 (2 votes) · LW · GW

A naive argument says the influence of our actions on the far future is ~infinity times as intrinsically important as the influence of our actions on the 21st century because the far future contains ~infinity times as much stuff. One limit to this argument is that if 1/1,000,000 of the far future stuff is isomorphic to the 21st century (e.g. simulations), then having an influence on the far future is "only" a million times as important as having the exact same influence on the 21st century. (Of course, the far future is a very different place so our influence will actually be of a very different nature.) Has anyone tried to get a better abstract understanding of this point or tried to quantify how much it matters in practice?

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T19:11:32.633Z · score: 7 (3 votes) · LW · GW

Newcomb's Problem sometimes assumes Omega is right 99% of the time. What is that conditional on? If it's just a base rate (Omega is right about 99% of people), what happens when you condition on having particular thoughts and modeling the problem on a particular level? (Maybe there exists a two-boxing lesion and you can become confident you don't have it.) If it's 99% conditional on anything you might think, e.g. because Omega has a full model of you but gets hit by a cosmic ray 1% of the time, isn't it clearer to just assume Omega gets it 100% right? Is this explained somewhere?

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T19:07:59.869Z · score: 2 (1 votes) · LW · GW

I think one could greatly outperform the best publicly available forecasts through collaboration between 1) some people good at arguing and looking for info and 2) someone good at evaluating arguments and aggregating evidence. Maybe just a forum thread where a moderator keeps a percentage estimate updated in the top post.

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T17:01:35.959Z · score: 5 (3 votes) · LW · GW

I would normally trust it more, but it's recently been doing way worse than the Metaculus crowd median (average log score 0.157 vs 0.117 over the sample of 20 yes/no questions that have resolved for me), and based on the details of the estimates that doesn't look to me like it's just bad luck. It does better on the whole set of questions, but I think still not much better than the median; I can't find the analysis page at the moment.
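
For clarity, here's the statistic I mean, sketched under the assumption that these figures are average log losses (mean −ln of the probability assigned to the realized outcome; lower is better):

    import math

    # Average log loss over a set of yes/no questions; lower is better.
    # (Assumed to be the convention behind the 0.157 vs 0.117 figures above.)
    def avg_log_loss(probs, outcomes):
        return sum(-math.log(p if o else 1 - p)
                   for p, o in zip(probs, outcomes)) / len(probs)

    print(avg_log_loss([0.8, 0.6, 0.9], [True, False, True]))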

Comment by steven0461 on steven0461's Shortform Feed · 2019-06-30T02:46:07.865Z · score: 9 (5 votes) · LW · GW

Considering how much people talk about superforecasters, how come there aren't more public sources of superforecasts? There are prediction markets and sites like ElectionBettingOdds that make it easier to read their odds as probabilities, but only for limited questions. There's Metaculus, but it only shows a crowd median (with a histogram of predictions) and, in some cases, the result of an aggregation algorithm that I don't trust very much. There's PredictionBook, but it's not obvious how to extract a good single probability estimate from it. Both prediction markets and Metaculus are competitive and disincentivize public cooperation. What else is there if I want to know something like the probability of war with Iran?

Comment by steven0461 on Writing children's picture books · 2019-06-27T22:35:21.874Z · score: 9 (4 votes) · LW · GW
how much of the current population would end up underwater if they didn’t move

(and if they didn't adapt in other ways, like by building sea walls)

Comment by steven0461 on Writing children's picture books · 2019-06-27T20:18:33.986Z · score: 9 (2 votes) · LW · GW
I think I’ve heard that, with substantial mitigation effort, the temperature difference might be 2 degrees Celsius from now until the end of the century.

Usually people mean from pre-industrial times, not from now. 2 degrees from pre-industrial times means about 1 degree from now.

Comment by steven0461 on Should rationality be a movement? · 2019-06-21T19:24:57.750Z · score: 20 (11 votes) · LW · GW
the development of a new 'mental martial art' of systematically correct reasoning

Unpopular opinion: Rationality is less about martial arts moves than about adopting an attitude of intellectual good faith and consistently valuing impartial truth-seeking above everything else that usually influences belief selection. Motivating people (including oneself) to adopt such an attitude can be tricky, but the attitude itself is simple. Inventing new techniques is good but not necessary.

Comment by steven0461 on Is "physical nondeterminism" a meaningful concept? · 2019-06-19T15:29:19.648Z · score: 5 (3 votes) · LW · GW

What does it mean for a bit to pop into existence? As I see it, if I measure a particle's spin at time t, then it's either timelessly the case that the result is "up" or timelessly the case that the result is "down". Maybe this is an issue of A Theory versus B Theory?

Comment by steven0461 on Discourse Norms: Moderators Must Not Bully · 2019-06-18T22:19:54.220Z · score: 3 (3 votes) · LW · GW

(I agree with this and made the uncle comment before seeing it. Also, my experience wasn't like that most of the time; I think it was mainly that way toward the end of LW 1.0.)

Comment by steven0461 on Discourse Norms: Moderators Must Not Bully · 2019-06-18T22:04:55.626Z · score: 14 (5 votes) · LW · GW

This seems like an argument for the hypothesis that nitpicking is net bad, but not for mr-hire's hypothesis in the great-grandparent comment that nitpicking caused LW 1.0 to have a lot of mediocre content as a second-order effect.

Comment by steven0461 on Is the "business cycle" an actual economic principle? · 2019-06-18T21:38:32.164Z · score: 12 (7 votes) · LW · GW

It's not the gambler's fallacy if recessions are caused by something that builds up over time (but is reset during recessions), like a mismatch between two different variables. In that case, more time having passed means there's probably more of that thing, which means there's more force toward a recession. I have no idea if this is what's actually happening, though.
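
Here's a minimal toy simulation of that mechanism, with all parameters invented purely for illustration:

    import random

    # Toy model: a mismatch accumulates each year and is reset by recessions,
    # so the hazard of a recession grows with time since the last one.
    # (Illustrative only; not a claim about how real economies work.)
    def simulate(years=200, buildup=0.01, seed=0):
        random.seed(seed)
        imbalance, recession_years = 0.0, []
        for year in range(years):
            imbalance += buildup               # pressure builds over time
            if random.random() < imbalance:    # recession more likely as it grows
                recession_years.append(year)
                imbalance = 0.0                # recession resets the buildup
        return recession_years

    print(simulate())  # unlike a memoryless process, long gaps become increasingly unlikely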

Comment by steven0461 on Discourse Norms: Moderators Must Not Bully · 2019-06-18T21:08:01.794Z · score: 5 (4 votes) · LW · GW

Only if nitpicking (or the resulting lower posting volume, or something like that) demotivates good posters more strongly than it demotivates mediocre posters. If this is true, it requires an explanation. My naive guess would be it demotivates mediocre posters more strongly because they're wrong more often.

Comment by steven0461 on In physical eschatology, is Aestivation a sound strategy? · 2019-06-18T19:18:59.016Z · score: 3 (2 votes) · LW · GW

Unimaginably large amounts of theory can often compensate for small amounts of missing empirical data. I can imagine the possibility that all of our current observations truly underdetermine facts about the universe's future large-scale evolution, but it wouldn't be my default guess.

For what it's worth, my intuition agrees that any superintelligence, even if using an aestivation strategy, would leave behind some sort of easily visible side effects, and that there aren't actually any aestivating aliens out there.

Comment by steven0461 on In physical eschatology, is Aestivation a sound strategy? · 2019-06-18T17:32:48.129Z · score: 5 (3 votes) · LW · GW

The computing resources in one star system are already huge and it's not clear to me that you need more than that to be certain for all practical purposes about both the fate of the universe and how best to control it.

Comment by steven0461 on In physical eschatology, is Aestivation a sound strategy? · 2019-06-18T17:13:30.634Z · score: 4 (2 votes) · LW · GW

That doesn't sound like it would work in UDT or similar decision theories. Maybe in Heat Death world there's one me and a thousand Boltzmann brains with other observations (as per the linked post), and in Big Rip world there's only the one me. If I'm standing outside the universe trying to decide what response to the observation that I'm me would have the best consequences, why shouldn't I just ignore the Boltzmann brains? (This is just re-arguing the controversy of how anthropics works, I guess, but considered by itself this argument seems strong to me.)