guesses:

1. In most cases, children on net detract from other major projects, for common-sense time/attention/optionality management reasons (as well as because they sometimes commit people to a worldview of relatively slow change).
2. Whether to have children isn't each other's business, and pressure against doing normal human things like this is net socially harmful (conservatives in particular are alienated by a culture of childlessness, though maybe that's net strategically useful).
3. People conflate 2 with not-1 on an emotional level and feel 1 is false because 2 is true.
I agree, of course, that a bad prediction can perform better than a good prediction by luck. That means if you were already sufficiently sure your prediction was good, you can continue to believe it was good after it performs badly. But your belief that the prediction was good then comes from your model of the sources of the competing predictions prior to observing the result (e.g. "PredictIt probably only predicted a higher Trump probability because Trump Trump Trump") instead of from the result itself. The result itself still reflects badly on your prediction. Your prediction may not have been worse, but it performed worse, and that is (perhaps insufficient) Bayesian evidence that it actually was worse. If Nate Silver is claiming something like "sure, our prediction of voter % performed badly compared to PredictIt's implicit prediction of voter %, but we already strongly believed it was good, and therefore still believe it was good, though with less confidence", then I'm fine with that. But that wasn't my impression.
Deviating from the naive view implicitly assumes that confidently predicting a narrow win was too hard to be plausible
I agree I'm making an assumption like "the difference in probability between a 6.5% average poll error and a 5.5% average poll error isn't huge", but I can't conceive of any reason to expect a sudden cliff there instead of a smooth bell curve.
Most closely contested states went to Biden, so vote share is more in Trump's favor than you'd expect based on knowing only who won each state, and PredictIt generally predicted more votes for Trump, so I think PredictIt comes out a lot better than 538 and the Economist.
Data points come in one by one, so it's only natural to ask how each data point affects our estimates of how well different models are doing, separately from how much we trust different models in advance. A lot of the arguments that were made by people who disagreed with Silver were Trump-specific, anyway, making the long-term record less relevant.
It's like taking one of Scott Alexander's 90% bets that went wrong and asking, "do you admit that, if we only consider this particular bet, you would have done better assigning 60% instead?"
If we were observing the results of his bets one by one, and Scott said it was 90% likely and a lot of other people said it was 60% likely, and then it didn't happen, I would totally be happy to say that Scott's model took a hit.
I agree that, if the only two things you consider are (a) the probabilities for a Biden win in 2020, 65% and 89%, and (b) the margin of the win in 2020, then betting markets are a clear winner.
My impression from Silver's internet writings is he hasn't admitted this, but maybe I'm wrong. I haven't seen him admit it and his claim that "we did a good job" suggests he's unwilling to. Betting markets are the clear winner if you look at Silver's predictions about how wrong polls would be, too. That was always the main point of contention. The line he's taking is "we said the polls might be this wrong and that Biden could still win", but obviously it's worse to say that the polls might be that wrong than to say that the polls probably would be that wrong (in that direction), as the markets implicitly did.
Looking at states still throws away information. Trump lost by slightly over a 0.6% margin in the states that he'd have needed to win. The polls were off by slightly under a 6% margin. If those numbers are correct, I don't see how your conclusion about the relative predictive power of 538 and betting markets can be very different from what your conclusion would be if Trump had narrowly won. Obviously if something almost happens, that's normally going to favor a model that assigned 35% to it happening over a model that assigned 10% to it happening. Both Nate Silver and Metaculus users seem to me to be in denial about this.
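To make the near-miss point concrete, here is a toy Bayes-factor calculation. The predictive distributions (normal, with made-up means and a shared 5-point standard deviation) are purely illustrative, chosen only so that the implied win probabilities land near the 10% and 35% figures above:

```python
from math import erf, exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma) at x."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def win_prob(mu, sigma):
    """P(margin > 0) under N(mu, sigma), via the normal CDF."""
    return 0.5 * (1 + erf(mu / (sigma * sqrt(2))))

# Hypothetical predictive distributions for Trump's margin (in points)
# in the tipping-point states, chosen only so the implied win
# probabilities are roughly 10% (538-like) and 35% (market-like).
sigma = 5.0
mu_538 = -6.4   # implies ~10% chance that margin > 0
mu_mkt = -1.9   # implies ~35% chance that margin > 0

observed = -0.6  # Trump lost the tipping-point margin by about 0.6 points

# Likelihood ratio: how strongly the observed near-miss favors the
# model that assigned more probability to a Trump win.
bf = normal_pdf(observed, mu_mkt, sigma) / normal_pdf(observed, mu_538, sigma)
print(round(win_prob(mu_538, sigma), 2), round(win_prob(mu_mkt, sigma), 2))
print(round(bf, 2))
```

Under these made-up assumptions, the observed narrow loss is roughly twice as likely under the 35% model as under the 10% model, i.e. a Bayes factor of about 2:1 in its favor even though the 10% model "called" the binary outcome equally well.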
I don't think there's any shortcut. We'll have to first become rational and honest, and then demonstrate that we're rational and honest by talking about many different uncertainties and disagreements in a rational and honest manner.
It's been a while since I looked into it, but my impression was something like "general relativity allows it if you use some sort of exotic matter in a way that isn't clearly possible but isn't clearly crazy". I could imagine that intelligent agents could create such conditions even if nature can't. The Internet Encyclopedia of Philosophy has a decent overview of time travel in general relativity.
Time travel. As I understand it, you don't need to hugely stretch general relativity for closed timelike curves to become possible. If adding a closed timelike curve to the universe adds an extra constraint on the initial conditions of the universe, and makes most possibilities inconsistent, does that morally amount to probably destroying the world? Does it create weird hyper-optimization toward consistency?
I'm pretty sure we can leave this problem to future beings with extremely advanced technology, and more likely than not there are physical reasons why it's not an issue, but I think about it from time to time.
Mostly I only start paying attention to people's opinions on these things once they've demonstrated that they can reason seriously about weird futures, and I don't think I know of any person who's demonstrated this who thinks risk is under, say, 10%. (edit: though I wonder if Robin Hanson counts)
I don't see how the usual rationale for not negotiating with terrorists applies to the food critics case. It's not like your readers are threatening food critics as a punishment to you, with the intent to get you to stop writing. Becoming the kind of agent that stops writing in response to such behavior doesn't create any additional incentives for others to become the kind of agent that is provoked by your writing.
Similarly, it seems to me "don't negotiate with terrorists" doesn't apply in cases where your opponent is harming you, but 1) is non-strategic and 2) was not modified to become non-strategic by an agent with the aim of causing you to give in to them because they're non-strategic. (In cases where you can tell the difference and others know you can tell the difference.)
Even someone who scores terribly on most objective metrics because of e.g. miscalibration can still be insightful if you know how and when to take their claims with a grain of salt. I think making calls on who is a good thinker is always going to require some good judgment, though not as much good judgment as it would take to form an opinion on the issues directly. My sense is that there are greater returns to be had from aggregating and doing AI/statistics on such judgment calls (and visualizing the results) than from trying to replace the judgment calls with objective metrics.
Yes, maybe I should have used 40% instead of 50%. I've seen Paul Christiano say 10-20% elsewhere. Shah and Ord are part of whom I meant by "other researchers". I'm not sure which of these estimates are conditional on superintelligence being invented. To the extent that they're not, and to the extent that people think superintelligence may not be invented, that means they understate the conditional probability that I'm using here. I think lowish estimates of disaster risks might be more visible than high estimates because of something like social desirability, but who knows.
According to an analysis featured in the recent IPCC special report on 1.5C, reducing all human emissions of greenhouse gases and aerosols to zero immediately would result in a modest short-term bump in global temperatures of around 0.15C as Earth-cooling aerosols disappear, followed by a decline. Around 20 years after emissions went to zero, global temperatures would fall back down below today’s levels and then cool by around 0.25C by 2100.
I.e., if we're at +1.2C today, the maximum would be +1.35C.
For most people, climate change is pretty much the only world-scale issue they've heard of. That makes it very important (in relative terms).
Suppose climate change were like air pollution: greenhouse gas emissions in New York made it hotter in New York but not in Shanghai, and greenhouse gas emissions in Shanghai made it hotter in Shanghai but not in New York. I don't see how that would make it less important.
I mostly agree with Vladimir's comments. My wording may have been over-dramatic. I've been fascinated with these topics and have thought and read a lot about them, and my conclusions have been mostly in the direction of not feeling as much concern, but I think if a narrative like that became legibly a "rationalist meme" like how the many worlds interpretation of quantum mechanics is a "rationalist meme", it could be strategically quite harmful, and at any rate I don't care about it as a subject of activism. On the other hand, I don't want people to be wrong. I've been going back and forth on whether to write a Megapost, but I also have the thing where writing multiple sentences is like pulling teeth; let me know if you have a solution to that one.
I'd be interested to hear what size of delay you used, and what your reasoning for it was.
I didn't think very hard about it and just eyeballed the graph. Probably a majority of "negligible on this scale" and a minority of "years or (less likely) decades" if we've defined AGI too loosely and the first AGI isn't a huge deal, or things go slowly for some other reason.
Was your main input into this parameter your perceptions of what other people would believe about this parameter?
Yes, but only because those other people seem to make reasonable arguments, so that's kind of like believing it because of the arguments instead of the people. Some vague model of the world is probably also involved, like "avoiding AI x-risk seems like a really hard problem but it's probably doable with enough effort and increasingly many people are taking it very seriously".
If so, I'd be interested to hear whose beliefs you perceive yourself to be deferring to here.
MIRI people and Wei Dai for pessimism (though I'm not sure it's their view that it's worse than 50/50), Paul Christiano and other researchers for optimism.
For my prediction (which I forgot to save as a linkable snapshot before refreshing, oops) roughly what I did was take my distribution for AGI timing (which ended up quite close to the thread average), add an uncertain but probably short delay for a major x-risk factor (probably superintelligence) to appear as a result, weight it by the probability that it turns out badly instead of well (averaging to about 50% because of what seems like a wide range of opinions among reasonable well-informed people, but decreasing over time to represent an increasing chance that we'll know what we're doing), and assume that non-AI risks are pretty unlikely to be existential and don't affect the final picture very much. To an extent, AGI can stand in for highly advanced technology in general.
If I start with a prior where the 2030s and the 2090s are equally likely, it feels kind of wrong to say I have the 7-to-1 evidence for the former that I'd need for this distribution. On the other hand, if I made the same argument for the 2190s and the 2290s, I'd quickly end up with an unreasonable distribution. So I don't know.
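For reference, the arithmetic here is just Bayes' rule in odds form: starting from an even prior between the two decades, ending up with 7x as much probability on the 2030s as on the 2090s requires a 7-to-1 likelihood ratio.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
    return prior_odds * likelihood_ratio

# Even prior between "AGI in the 2030s" and "AGI in the 2090s",
# combined with 7-to-1 evidence for the former.
odds = posterior_odds(1.0, 7.0)
p_2030s = odds / (1 + odds)
print(p_2030s)  # 0.875, i.e. the 7x mass ratio between the two decades
```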
some predictable counterpoints: maybe we won because we were cautious; we could have won harder; many relevant thinkers still pooh-pooh the problem; it's not just the basic problem statement that's important, but potentially many other ideas that aren't yet popular; picking battles isn't lying; arguing about sensitive subjects is fun and I don't think people are very tempted to find excuses to avoid it; there are other things that are potentially the most important in the world that could suffer from bad optics; I'm not against systematically truthseeking discussions of sensitive subjects, just if it's in public in a way that's associated with the rationalism brand
To the extent that it differs from others' predictions, probably the most important factor is that I think even if AGI is hard, there are a number of ways in which human civilization could become capable of doing almost arbitrarily hard things, like through human intelligence enhancement or sufficiently transformative narrow AI. I think that means the question is less about how hard AGI is and more about general futurism than most people think. It's moderately hard for me to imagine how business as usual could go on for the rest of the century, but who knows.
I don't think anyone understands the phrase "rationalist community" as implying a claim that its members don't sometimes allow practical considerations to affect which topics they remain silent on. I don't advocate that people leave out good points merely for being inconvenient to the case they're making, optimizing for the audience to believe some claim regardless of the truth of that claim, as suggested by the prosecutor analogy. I advocate that people leave out good points for being relatively unimportant and predictably causing (part of) the audience to be harmfully irrational. I.e., if you saw someone other than the defendant commit the murder, then say that, but don't start talking about how ugly the judge's children are, even if you think the ugliness of the judge's children slightly helped inspire the real murderer. We can disagree about which discussions are more like talking about whether you saw someone else commit the murder and which discussions are more like talking about how ugly the judge's children are.
I think of my team as being "Team Shared Maps That Reflect The Territory But With a Few Blank Spots, Subject to Cautious Private Discussion, Where Depicting the Territory Would Have Caused the Maps to be Burned". I don't think calling it "Team Seek Power For The Greater Good" is a fair characterization both because the Team is scrupulous not to draw fake stuff on the map and because the Team does not seek power for itself but rather seeks for it to be possible for true ideas to have influence regardless of what persons are associated with the true ideas.
As I see it, we've had this success partly because many of us have been scrupulous about not being needlessly offensive. (Bostrom is a good example here.) The rationalist brand is already weak (e.g. search Twitter for relevant terms), and if LessWrong had actually tried to have forthright discussions of every interesting topic, that might well have been fatal.
I think that negative low-level associations really matter if you're trying to be a mass movement and scale, like a political movement.
Many of the world's smartest, most competent, and most influential people are ideologues. This probably includes whoever ends up developing and controlling advanced technologies. It would be nice to be able to avoid such people dismissing our ideas out of hand. You may not find them impressive or expect them to make intellectual progress on rationality, but for such progress to matter, the ideas have to be taken seriously outside LW at some point. I guess I don't understand the case against caution in this area, so long as the cost is only having to avoid some peripheral topics instead of adopting or promoting false beliefs.
I updated downward somewhat on the sanity of our civilization, but not to an extremely low value or from a high value. That update justifies only a partial update on the sanity of the average human civilization (maybe the problem is specific to our history and culture), which justifies only a partial update on the sanity of the average civilization (maybe the problem is specific to humans), which justifies only a partial update on the sanity of outcomes (maybe achieving high sanity is really easy or hard). So all things considered (aside from your second paragraph) it doesn't seem like it justifies, say, doubling the amount of worry about these things.
Maybe restrict viewing to people with enough Less Wrong karma.
This is much better than nothing, but it would be much better still for a trusted person to hand-pick people who have strongly demonstrated both the ability to avoid posting pointlessly disreputable material and the unwillingness to use such material in reputational attacks.
There's been some discussion of tradeoffs between a group's ability to think together and its safety from reputational attacks. Both of these seem pretty essential to me, so I wish we'd move in the direction of a third option: recognizing public discourse on fraught topics as unavoidably farcical as well as often useless, moving away from the social norm of acting as if a consideration exists if and only if there's a legible Post about it, building common knowledge of rationality and strategic caution among small groups, and in general becoming skilled at being esoteric without being dishonest or going crazy in ways that would have been kept in check by larger audiences. I think people underrate this approach because they understandably want to be thought gladiators flying truth as a flag. I'm more confident of the claim that we should frequently acknowledge the limits of public discourse than the other claims here.
The main part I disagree with is the claim that resource shortages may halt or reverse growth at sub-Dyson-sphere scales. I don't know of any (post)human need that seems like it might require something other than matter, energy, and ingenuity to fulfill. There's a huge amount of matter and energy in the solar system and a huge amount of room to get more value out of any fixed amount.
(If "resource" is interpreted broadly enough to include "freedom from the side effects of unaligned superintelligence", then sure.)
Even in private, in today's environment I'd be afraid to talk about some of the object-level things, because I can't be sure you're not a true believer in some of those issues who would try to "cancel" me for my positions or even my uncertainties.
This seems like a problem we could mitigate with the right kinds of information exchange. E.g., I'd probably be willing to make a "no canceling anyone" promise depending on wording. Creating networks of trust around this is part of what I meant by "epistemic prepping" upthread.
I don't know what the reasons are off the top of my head. I'm not saying the probability rise caused most of the stock market fall, just that it has to be taken into account as a nonzero part of why Wei won his 1 in 8 bet.
If the market is genuinely this beatable, it seems important for the rationalist/EA/forecaster cluster to take advantage of future such opportunities in an organized way, even if it just means someone setting up a Facebook group or something.
(edit: I think the evidence, while impressive, is a little weaker than it seems on first glance, because my impression from Metaculus is the probability of the virus becoming widespread has gotten higher in recent days for reasons that look unrelated to your point about what the economic implications of a widespread virus would be.)
I think it makes sense to take an "epistemic prepper" perspective. What precautions could one take in advance to make sure that, if the discourse became dominated by militant flat earth fanatics, round earthers could still reason together, coordinate, and trust each other? What kinds of institutions would have made it easier for a core of sanity to survive through, say, 30s Germany or 60s China? For example, would it make sense to have an agreed-upon epistemic "fire alarm"?
From participating on Metaculus I certainly don't get the sense that there are people who make uncannily good predictions. If you compare the community prediction to the Metaculus prediction, it looks like there's a 0.14 difference in average log score, which I guess means a combination of the best predictors tends to put e^(0.14) or 1.15 times as much probability on the correct answer as the time-weighted community median. (The postdiction is better, but I guess subject to overfitting?) That's substantial, but presumably the combination of the best predictors is better than every individual predictor. The Metaculus prediction also seems to be doing a lot worse than the community prediction on recent questions, so I don't know what to make of that. I suspect that, while some people are obviously better at forecasting than others, the word "superforecasters" has no content outside of "the best forecasters" and is just there to make the field of research sound more exciting.
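A quick sketch of the conversion being done above (assuming the scores are natural-log scores, so a gap in average log score exponentiates to a geometric-mean ratio of probabilities placed on the correct answer):

```python
import math

def prob_ratio(log_score_gap):
    """Probability multiplier implied by a gap in average log scores
    (natural log assumed): exp(gap) is the geometric-mean ratio of the
    probabilities the two predictors placed on the correct answer."""
    return math.exp(log_score_gap)

gap = 0.14  # Metaculus prediction vs. community prediction, from the text
print(round(prob_ratio(gap), 2))  # ~1.15
```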
Would your views on speaking truth to power change if the truth were 2x less expensive than you currently think it is? 10x? 100x?
Maybe not; probably; yes.
Followup question: have you considered performing an experiment to test whether the consequences of speech are as dire as you currently think? I think I have more data than you! (We probably mostly read the same blogs, but I've done field work.)
Most of the consequences I'm worried about are bad effects on the discourse. I don't know what experiment I'd do to figure those out. I agree you have more data than me, but you probably have 2x the personal data instead of 10x the personal data, and most relevant data is about other people because there are more of them. Personal consequences are more amenable to experiment than discourse consequences, but I already have lots of low-risk data here, and high-risk data would carry high risk and not be qualitatively more informative. (Doing an Experiment here doesn't teach you qualitatively different things than watching the experiments that the world constantly does.)
Can you be a little more specific? "Discredited" is a two-place function (discredited to whom).
Discredited to intellectual elites, who are not only imperfectly rational, but get their information via people who are imperfectly rational, who in turn etc.
It almost sounds like you're saying we should tell people they should always speak the truth even though it is not the case that people should always speak the truth, because telling people they should always speak the truth has good consequences. Hm!
I don't like the "speak the truth even if your voice trembles" formulation. It doesn't make it clear that the alternative to speaking the truth, instead of lying, is not speaking. It also suggests an ad hominem theory of why people aren't speaking (fear, presumably of personal consequences) that isn't always true. To me, this whole thing is about picking battles versus not picking battles rather than about truth versus falsehood. Even though if you pick your battles it means a non-random set of falsehoods remains uncorrected, picking battles is still pro-truth.
If we should judge the Platonic math by how it would be interpreted in practice, then we should also judge "speak the truth even if your voice trembles" by how it would be interpreted in practice. I'm worried the outcome would be people saying "since we talk rationally about the Emperor here, let's admit that he's missing one shoe", regardless of whether the emperor is missing one shoe, is fully dressed, or has no clothes at all. All things equal, being less wrong is good, but sometimes being less wrong means being more confident that you're not wrong at all, even though you are wrong at all.
(By the way, I think of my position here as having a lower burden of proof than yours, because the underlying issue is not just who is making the right tradeoffs, but whether making different tradeoffs than you is a good reason to give up on a community altogether.)