Posts

steven0461's Shortform Feed 2019-06-30T02:42:13.858Z
Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet 2018-07-11T02:59:12.278Z
Meetup : San Jose Meetup: Park Day (X) 2016-11-28T02:46:20.651Z
Meetup : San Jose Meetup: Park Day (IX), 3pm 2016-11-01T15:40:19.623Z
Meetup : San Jose Meetup: Park Day (VIII) 2016-09-06T00:47:23.680Z
Meetup : San Jose Meetup: Park Day (VII) 2016-08-15T01:05:00.237Z
Meetup : San Jose Meetup: Park Day (VI) 2016-07-25T02:11:44.237Z
Meetup : San Jose Meetup: Park Day (V) 2016-07-04T18:38:01.992Z
Meetup : San Jose Meetup: Park Day (IV) 2016-06-15T20:29:04.853Z
Meetup : San Jose Meetup: Park Day (III) 2016-05-09T20:10:55.447Z
Meetup : San Jose Meetup: Park Day (II) 2016-04-20T06:23:28.685Z
Meetup : San Jose Meetup: Park Day 2016-03-30T04:39:09.532Z
Meetup : Amsterdam 2013-11-12T09:12:31.710Z
Bayesian Adjustment Does Not Defeat Existential Risk Charity 2013-03-17T08:50:02.096Z
Meetup : Chicago Meetup 2011-09-28T04:29:35.777Z
Meetup : Chicago Meetup 2011-07-07T15:28:57.969Z
PhilPapers survey results now include correlations 2010-11-09T19:15:47.251Z
Chicago Meetup 11/14 2010-11-08T23:30:49.015Z
A Fundamental Question of Group Rationality 2010-10-13T20:32:08.085Z
Chicago/Madison Meetup 2010-07-15T23:30:15.576Z
Swimming in Reasons 2010-04-10T01:24:27.787Z
Disambiguating Doom 2010-03-29T18:14:12.075Z
Taking Occam Seriously 2009-05-29T17:31:52.268Z
Open Thread: May 2009 2009-05-01T16:16:35.156Z
Eliezer Yudkowsky Facts 2009-03-22T20:17:21.220Z
The Wrath of Kahneman 2009-03-09T12:52:41.695Z
Lies and Secrets 2009-03-08T14:43:22.152Z

Comments

Comment by steven0461 on Poll: Which variables are most strategically relevant? · 2021-01-26T22:09:43.963Z · LW · GW

I would add "will relevant people expect AI to have extreme benefits, such as a significant percentage point reduction in other existential risk or a technological solution to aging"

Comment by steven0461 on What is a probability? · 2020-11-13T19:32:35.126Z · LW · GW

I agree, of course, that a bad prediction can perform better than a good prediction by luck. That means if you were already sufficiently sure your prediction was good, you can continue to believe it was good after it performs badly. But your belief that the prediction was good then comes from your model of the sources of the competing predictions prior to observing the result (e.g. "PredictIt probably only predicted a higher Trump probability because Trump Trump Trump") instead of from the result itself. The result itself still reflects badly on your prediction. Your prediction may not have been worse, but it performed worse, and that is (perhaps insufficient) Bayesian evidence that it actually was worse. If Nate Silver is claiming something like "sure, our prediction of voter % performed badly compared to PredictIt's implicit prediction of voter %, but we already strongly believed it was good, and therefore still believe it was good, though with less confidence", then I'm fine with that. But that wasn't my impression.

edit:

Deviating from the naive view implicitly assumes that confidently predicting a narrow win was too hard to be plausible

I agree I'm making an assumption like "the difference in probability between a 6.5% average poll error and a 5.5% average poll error isn't huge", but I can't conceive of any reason to expect a sudden cliff there instead of a smooth bell curve.
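
For concreteness, here's a minimal sketch of the kind of update I mean (all numbers below are hypothetical, not taken from the election): a prior over which of two forecasts came from the better model, multiplied by how much probability each forecast put on what actually happened.

```python
# A minimal sketch, with hypothetical numbers, of the update described above:
# the outcome is Bayesian evidence about which forecast came from the better
# model, but a strong enough prior can survive a bad result.

def posterior_a_is_better(prior_a, p_a, p_b, happened):
    """prior_a: prior probability that forecaster A's model is the better one.
    p_a, p_b: probabilities A and B assigned to the event. happened: outcome."""
    like_a = p_a if happened else 1 - p_a
    like_b = p_b if happened else 1 - p_b
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)

# A said 80%, B said 55%, and the event didn't happen, so A's forecast performed worse.
print(posterior_a_is_better(0.5, 0.80, 0.55, False))  # ~0.31: with an even prior, favor B
print(posterior_a_is_better(0.9, 0.80, 0.55, False))  # ~0.80: a 90% prior in A survives, weakened
```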

Comment by steven0461 on Did anybody calculate the Briers score for per-state election forecasts? · 2020-11-11T19:19:35.400Z · LW · GW

Yes, that looks like a crux. I guess I don't see the need to reason about calibration instead of directly about expected log score.

Comment by steven0461 on Scoring 2020 U.S. Presidential Election Predictions · 2020-11-11T19:14:33.026Z · LW · GW

Most closely contested states went to Biden, so vote share is more in Trump's favor than you'd expect based on knowing only who won each state. PredictIt generally predicted more votes for Trump, so I think PredictIt comes out a lot better than 538 and the Economist.

Comment by steven0461 on Did anybody calculate the Briers score for per-state election forecasts? · 2020-11-10T21:31:58.943Z · LW · GW

Data points come in one by one, so it's only natural to ask how each data point affects our estimates of how well different models are doing, separately from how much we trust different models in advance. A lot of the arguments that were made by people who disagreed with Silver were Trump-specific, anyway, making the long-term record less relevant.

It's like taking one of Scott Alexander's 90% bets that went wrong and asking, "do you admit that, if we only consider this particular bet, you would have done better assigning 60% instead?"

If we were observing the results of his bets one by one, and Scott said it was 90% likely and a lot of other people said it was 60% likely, and then it didn't happen, I would totally be happy to say that Scott's model took a hit.

Comment by steven0461 on Did anybody calculate the Briers score for per-state election forecasts? · 2020-11-10T20:57:21.070Z · LW · GW

I agree that, if the only two things you consider are (a) the probabilities for a Biden win in 2020, 65% and 89%, and (b) the margin of the win in 2020, then betting markets are a clear winner.

My impression from Silver's internet writings is that he hasn't admitted this, but maybe I'm wrong; his claim that "we did a good job" suggests he's unwilling to. Betting markets are the clear winner if you look at Silver's predictions about how wrong the polls would be, too, which was always the main point of contention. The line he's taking is "we said the polls might be this wrong and that Biden could still win", but obviously it's worse to say that the polls might be that wrong than to say that the polls probably would be that wrong (in that direction), as the markets implicitly did.

Comment by steven0461 on Did anybody calculate the Briers score for per-state election forecasts? · 2020-11-10T19:33:58.347Z · LW · GW

Looking at states still throws away information. Trump lost by slightly over a 0.6% margin in the states that he'd have needed to win. The polls were off by slightly under a 6% margin. If those numbers are correct, I don't see how your conclusion about the relative predictive power of 538 and betting markets can be very different from what your conclusion would be if Trump had narrowly won. Obviously if something almost happens, that's normally going to favor a model that assigned 35% to it happening over a model that assigned 10% to it happening. Both Nate Silver and Metaculus users seem to me to be in denial about this.
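
A toy illustration of why the margin matters (the forecast distributions below are hypothetical stand-ins, not the actual 538 or market forecasts): treat each model's tipping-point-margin forecast as a normal distribution and score it against the observed near-miss, rather than only against the binary outcome.

```python
# Toy illustration (hypothetical forecast distributions, not the real 538/market
# numbers): a near-miss margin favors the model that put more probability on a
# narrow result, even if binary win/loss scoring favors the other model.
from scipy.stats import norm

observed_margin = -0.6  # Trump loses the tipping-point margin by ~0.6 points

# Hypothetical margin forecasts: (mean margin, standard deviation), in points.
models = {"confident-Biden": (-8.0, 4.0), "closer-race": (-3.0, 4.0)}

for name, (mu, sd) in models.items():
    p_trump_win = 1 - norm.cdf(0, loc=mu, scale=sd)          # implied binary forecast
    margin_logscore = norm.logpdf(observed_margin, loc=mu, scale=sd)
    print(f"{name}: P(Trump win)={p_trump_win:.2f}, log score on margin={margin_logscore:.2f}")

# "confident-Biden" scores better on the binary outcome (it gave ~98% to what
# happened vs ~77%), but "closer-race" scores much better on the observed margin.
```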

Comment by steven0461 on A Parable of Four Riders · 2020-11-01T18:52:24.342Z · LW · GW

That's their mistake in the case of the fools, but is the claim that they're also making it in the case of the wise men?

Comment by steven0461 on MikkW's Shortform · 2020-11-01T18:50:57.988Z · LW · GW

I don't think there's any shortcut. We'll have to first become rational and honest, and then demonstrate that we're rational and honest by talking about many different uncertainties and disagreements in a rational and honest manner.

Comment by steven0461 on A Parable of Four Riders · 2020-10-31T17:28:16.711Z · LW · GW

Is the claim that the superiors are making the same mistake in judging the wise men that they're making in judging the fools?

Comment by steven0461 on What risks concern you which don't seem to have been seriously considered by the community? · 2020-10-30T21:22:45.713Z · LW · GW

On the other hand, to my knowledge, we haven't thought of any important new technological risks in the past few decades, which is evidence against many such risks existing.

Comment by steven0461 on What risks concern you which don't seem to have been seriously considered by the community? · 2020-10-28T21:14:45.945Z · LW · GW

It's been a while since I looked into it, but my impression was something like "general relativity allows it if you use some sort of exotic matter in a way that isn't clearly possible but isn't clearly crazy". I could imagine that intelligent agents could create such conditions even if nature can't. The Internet Encyclopedia of Philosophy has a decent overview of time travel in general relativity.

Comment by steven0461 on What risks concern you which don't seem to have been seriously considered by the community? · 2020-10-28T19:08:20.310Z · LW · GW

Reducing long-term risks from malevolent actors is relevant here.

Comment by steven0461 on What risks concern you which don't seem to have been seriously considered by the community? · 2020-10-28T19:03:10.950Z · LW · GW

Time travel. As I understand it, you don't need to hugely stretch general relativity for closed timelike curves to become possible. If adding a closed timelike curve to the universe adds an extra constraint on the initial conditions of the universe, and makes most possibilities inconsistent, does that morally amount to probably destroying the world? Does it create weird hyper-optimization toward consistency?

I'm pretty sure we can leave this problem to future beings with extremely advanced technology, and more likely than not there are physical reasons why it's not an issue, but I think about it from time to time.

Comment by steven0461 on steven0461's Shortform Feed · 2020-10-11T00:02:27.521Z · LW · GW

Has anyone tried to make complex arguments in hypertext form using a tool like Twine? It seems like a way to avoid the usual mess of footnotes and disclaimers.

Comment by steven0461 on Forecasting Thread: Existential Risk · 2020-10-08T17:06:56.287Z · LW · GW

Mostly I only start paying attention to people's opinions on these things once they've demonstrated that they can reason seriously about weird futures, and I don't think I know of any person who's demonstrated this who thinks risk is under, say, 10%. (edit: though I wonder if Robin Hanson counts)

Comment by steven0461 on Open Communication in the Days of Malicious Online Actors · 2020-10-08T14:39:20.571Z · LW · GW

I don't see how the usual rationale for not negotiating with terrorists applies to the food critics case. It's not like your readers are threatening food critics as a punishment to you, with the intent to get you to stop writing. Becoming the kind of agent that stops writing in response to such behavior doesn't create any additional incentives for others to become the kind of agent that is provoked by your writing.

Similarly, it seems to me "don't negotiate with terrorists" doesn't apply in cases where your opponent is harming you, but 1) is non-strategic and 2) was not modified to become non-strategic by an agent with the aim of causing you to give in to them because they're non-strategic. (In cases where you can tell the difference and others know you can tell the difference.)

Comment by steven0461 on Can we hold intellectuals to similar public standards as athletes? · 2020-10-08T00:38:29.540Z · LW · GW

Even someone who scores terribly on most objective metrics because of e.g. miscalibration can still be insightful if you know how and when to take their claims with a grain of salt. I think making calls on who is a good thinker is always going to require some good judgment, though not as much as it would take to form an opinion on the issues directly. My sense is that there are greater returns to be had from aggregating and doing AI/statistics on such judgment calls (and visualizing the results) than from trying to replace the judgment calls with objective metrics.

Comment by steven0461 on Can we hold intellectuals to similar public standards as athletes? · 2020-10-08T00:28:59.533Z · LW · GW

Relatedly, the term "superforecasting" is already politicized to death in the UK.

Comment by steven0461 on Forecasting Thread: Existential Risk · 2020-10-07T23:58:42.376Z · LW · GW

Yes, maybe I should have used 40% instead of 50%. I've seen Paul Christiano say 10-20% elsewhere. Shah and Ord are among the people I meant by "other researchers". I'm not sure which of these estimates are conditional on superintelligence being invented. To the extent that they're not, and to the extent that people think superintelligence may not be invented, they understate the conditional probability that I'm using here. I think lowish estimates of disaster risks might be more visible than high estimates because of something like social desirability, but who knows.
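
For example, with hypothetical numbers, and assuming the risk runs only through superintelligence being invented:

```latex
% Hypothetical numbers: an unconditional 15% doom estimate from someone who puts
% 60% on superintelligence being invented implies a higher conditional estimate.
P(\text{doom} \mid \text{SI}) = \frac{P(\text{doom})}{P(\text{SI})} = \frac{0.15}{0.60} = 0.25
```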

Comment by steven0461 on Rationality and Climate Change · 2020-10-07T17:29:08.834Z · LW · GW

By my understanding, even if we stopped all of our carbon output immediately, there'd still be a devastating 2C increase in the average temperature of the earth.

I don't think this is true:

According to an analysis featured in the recent IPCC special report on 1.5C, reducing all human emissions of greenhouse gases and aerosols to zero immediately would result in a modest short-term bump in global temperatures of around 0.15C as Earth-cooling aerosols disappear, followed by a decline. Around 20 years after emissions went to zero, global temperatures would fall back down below today’s levels and then cool by around 0.25C by 2100.

I.e., if we're at +1.2C today, the maximum would be +1.35C.

Comment by steven0461 on Rationality and Climate Change · 2020-10-07T17:16:06.497Z · LW · GW

For most people, climate change is pretty much the only world-scale issue they've heard of. That makes it very important (in relative terms)

Suppose climate change were like air pollution: greenhouse gas emissions in New York made it hotter in New York but not in Shanghai, and greenhouse gas emissions in Shanghai made it hotter in Shanghai but not in New York. I don't see how that would make it less important.

Comment by steven0461 on Rationality and Climate Change · 2020-10-06T17:27:35.304Z · LW · GW

I mostly agree with Vladimir's comments. My wording may have been over-dramatic. I've been fascinated with these topics and have thought and read a lot about them, and my conclusions have mostly been in the direction of feeling less concern. But I think if a narrative like that became legibly a "rationalist meme", the way the many worlds interpretation of quantum mechanics is a "rationalist meme", it could be strategically quite harmful, and in any case I don't care about it as a subject of activism. On the other hand, I don't want people to be wrong. I've been going back and forth on whether to write a Megapost, but I also have the thing where writing multiple sentences is like pulling teeth; let me know if you have a solution to that one.

Comment by steven0461 on Rationality and Climate Change · 2020-10-06T02:04:10.587Z · LW · GW

The link says a lot of things, but the basic claim that greenhouse forcing is logarithmic as a function of concentration is as far as I know completely uncontroversial.
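
The standard simplified expression for CO2 forcing (Myhre et al. 1998, used in IPCC reports) makes the logarithmic dependence explicit:

```latex
% Simplified CO2 radiative-forcing fit (Myhre et al. 1998); C is the CO2
% concentration, C_0 a reference concentration, Delta F the forcing in W/m^2.
\Delta F \approx 5.35 \, \ln\!\left(\frac{C}{C_0}\right) \ \mathrm{W\,m^{-2}}
```

Each doubling of concentration adds the same increment, about 5.35 ln 2 ≈ 3.7 W/m².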

Comment by steven0461 on Rationality and Climate Change · 2020-10-06T01:48:49.810Z · LW · GW

I suspect this is one of those cases where the truth is (moderately) outside the Overton window and forcing people to spell this out has the potential to cause great harm to the rationalist community.

Comment by steven0461 on Babble challenge: 50 ways of sending something to the moon · 2020-10-01T19:06:41.133Z · LW · GW
  1. rocket
  2. catapult
  3. cannon
  4. nuclear propelled spaceship
  5. wait for civilization to advance a lot and then just mail the item
  6. throw the item up into the sky and be okay with failure
  7. note that the earth is the sun's moon, you're on the moon now
  8. scan, destroy, rebuild on the moon
  9. put a note on the item promising a bounty to the first person who takes it to the moon
  10. extremely long tube
  11. time travel to a time when the moon was where you are now
  12. gravity manipulation
  13. climb to the top of mount everest and just keep going
  14. hit the item very hard
  15. genetic engineering to become taller until you reach the moon
  16. teleport spell
  17. have the item split in two in opposite directions, repeat until 1/2^n of the thing reaches the moon, repeat 2^n times
  18. survive the death of the sun and wait for the moon to fall to earth
  19. uplift the item to sentience and motivate it to take itself to the moon
  20. ask the lords of the matrix to edit the item's location property
  21. wait for a particularly powerful volcano
  22. magical rope
  23. wait for a full moon, full enough to reach the earth
  24. drag the moon to earth with a rope
  25. wormhole
  26. kill the item, then use necromancy to resurrect it on the moon
  27. telekinesis
  28. attach the item to the fabric of spacetime and move the earth-moon system down until the moon is where you started
  29. start the problem in a state where the item is already on the moon
  30. giant longbow
  31. giant blowgun
  32. uplift the moon to sentience and motivate it to come get the item
  33. shrink the earth and grow the moon until the moon is the earth and the earth is the moon
  34. ask some aliens to abduct the item
  35. attach the item to a long stick, then keep adding new stick parts to the bottom of the stick
  36. meditate until all is one, including the moon and the earth
  37. laser sail
  38. wait for the item to quantum tunnel to the moon by coincidence
  39. wait for the big crunch, when everything will be in the same place, including the item and the moon
  40. make the item very sturdy and keep shooting it with a gun
  41. find some moon dust that astronauts took to earth and put the item on the moon dust
  42. attain godhood and use omnipotence somehow
  43. emit greenhouse gases until sea level rise takes the item to the moon
  44. very tall elevator
  45. very tall escalator
  46. delegate to all other humans so everyone only has to transport the item a few centimeters
  47. attach the item to a balloon filled with a gas lighter than vacuum
  48. gender reveal party
  49. temporarily move all of earth's mountains to the same location
  50. artificial geyser
Comment by steven0461 on Forecasting Thread: Existential Risk · 2020-09-23T17:40:26.470Z · LW · GW

I'd be interested to hear what size of delay you used, and what your reasoning for that was.

I didn't think very hard about it and just eyeballed the graph. Probably a majority of "negligible on this scale" and a minority of "years or (less likely) decades" if we've defined AGI too loosely and the first AGI isn't a huge deal, or things go slowly for some other reason.

Was your main input into this parameter your perceptions of what other people would believe about this parameter?

Yes, but only because those other people seem to make reasonable arguments, so that's kind of like believing it because of the arguments instead of the people. Some vague model of the world is probably also involved, like "avoiding AI x-risk seems like a really hard problem but it's probably doable with enough effort and increasingly many people are taking it very seriously".

If so, I'd be interested to hear whose beliefs you perceive yourself to be deferring to here.

MIRI people and Wei Dai for pessimism (though I'm not sure it's their view that it's worse than 50/50), Paul Christiano and other researchers for optimism. 

Comment by steven0461 on Forecasting Thread: Existential Risk · 2020-09-22T17:33:09.240Z · LW · GW

For my prediction (which I forgot to save as a linkable snapshot before refreshing, oops), roughly what I did was take my distribution for AGI timing (which ended up quite close to the thread average) and add an uncertain but probably short delay for a major x-risk factor (probably superintelligence) to appear as a result. I then weighted that by the probability that things turn out badly instead of well: about 50% on average, because of what seems like a wide range of opinions among reasonable well-informed people, but decreasing over time to represent an increasing chance that we'll know what we're doing. Finally, I assumed that non-AI risks are pretty unlikely to be existential and don't affect the final picture very much. To an extent, AGI can stand in for highly advanced technology in general.
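
If it helps, here's a toy numeric sketch of that procedure. Every distribution and number below is made up for illustration; the real inputs were eyeballed rather than computed.

```python
# Toy sketch of the procedure described above; all numbers are illustrative.
import numpy as np

years = np.arange(2025, 2201)

# Stand-in AGI-timing distribution (roughly bell-shaped over this century and beyond).
agi = np.exp(-0.5 * ((years - 2060) / 25.0) ** 2)
agi /= agi.sum()

# Uncertain but probably short delay until the major x-risk factor appears.
delay_years = [0, 5, 10, 15]
delay_probs = [0.5, 0.3, 0.15, 0.05]

# Chance the transition turns out badly: ~50% now, declining as we (hopefully) learn more.
p_bad = 0.5 * np.exp(-(years - years[0]) / 150.0)

risk = np.zeros_like(agi)
for d, pd in zip(delay_years, delay_probs):
    shifted = np.zeros_like(agi)
    shifted[d:] = agi[: len(agi) - d]   # push the AGI distribution d years later
    risk += pd * shifted
risk *= p_bad                           # weight each year by the chance of a bad outcome

print("total existential-risk probability in this sketch:", round(risk.sum(), 3))
```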

If I start with a prior where the 2030s and the 2090s are equally likely, it feels kind of wrong to say I have the 7-to-1 evidence for the former that I'd need for this distribution. On the other hand, if I made the same argument for the 2190s and the 2290s, I'd quickly end up with an unreasonable distribution. So I don't know.

Comment by steven0461 on Artificial Intelligence: A Modern Approach (4th edition) on the Alignment Problem · 2020-09-18T20:02:34.886Z · LW · GW

some predictable counterpoints: maybe we won because we were cautious; we could have won harder; many relevant thinkers still pooh-pooh the problem; it's not just the basic problem statement that's important, but potentially many other ideas that aren't yet popular; picking battles isn't lying; arguing about sensitive subjects is fun and I don't think people are very tempted to find excuses to avoid it; there are other things that are potentially the most important in the world that could suffer from bad optics; I'm not against systematically truthseeking discussions of sensitive subjects, just if it's in public in a way that's associated with the rationalism brand

Comment by steven0461 on Forecasting Thread: AI Timelines · 2020-08-24T05:45:21.411Z · LW · GW

Here's my prediction:

To the extent that it differs from others' predictions, probably the most important factor is that I think even if AGI is hard, there are a number of ways in which human civilization could become capable of doing almost arbitrarily hard things, like through human intelligence enhancement or sufficiently transformative narrow AI. I think that means the question is less about how hard AGI is and more about general futurism than most people think. It's moderately hard for me to imagine how business as usual could go on for the rest of the century, but who knows.

Comment by steven0461 on What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse? · 2020-08-10T18:59:03.205Z · LW · GW

"Acausalism" works, but might be confused with the idea that acausal dependence matters at all, or with other philosophical doctrines that deny causality in some sense.

I'm not sure whether being located in a place is a different thing from the place subjunctively depending on your behavior.

Some more ideas: "outofreachism" (closest to "longtermism"), "extrauniversalism", "subjunctive dependentism" (hardest to strawman), "elsewherism", "spooky axiology at a distance"

Comment by steven0461 on My Dating Plan ala Geoffrey Miller · 2020-07-21T21:01:13.632Z · LW · GW

I don't think anyone understands the phrase "rationalist community" as implying a claim that its members don't sometimes allow practical considerations to affect which topics they remain silent on. I don't advocate that people leave out good points merely for being inconvenient to the case they're making, optimizing for the audience to believe some claim regardless of its truth, as suggested by the prosecutor analogy. I advocate that people leave out good points when those points are relatively unimportant and would predictably cause (part of) the audience to be harmfully irrational. I.e., if you saw someone other than the defendant commit the murder, then say that, but don't start talking about how ugly the judge's children are, even if you think the ugliness of the judge's children slightly helped inspire the real murderer. We can disagree about which discussions are more like talking about whether you saw someone else commit the murder and which are more like talking about how ugly the judge's children are.

Comment by steven0461 on My Dating Plan ala Geoffrey Miller · 2020-07-21T17:02:50.284Z · LW · GW

I think of my team as being "Team Shared Maps That Reflect The Territory But With a Few Blank Spots, Subject to Cautious Private Discussion, Where Depicting the Territory Would Have Caused the Maps to be Burned". I don't think calling it "Team Seek Power For The Greater Good" is a fair characterization both because the Team is scrupulous not to draw fake stuff on the map and because the Team does not seek power for itself but rather seeks for it to be possible for true ideas to have influence regardless of what persons are associated with the true ideas.

Comment by steven0461 on My Dating Plan ala Geoffrey Miller · 2020-07-20T16:25:18.497Z · LW · GW

As I see it, we've had this success partly because many of us have been scrupulous about not being needlessly offensive. (Bostrom is a good example here.) The rationalist brand is already weak (e.g. search Twitter for relevant terms), and if LessWrong had actually tried to have forthright discussions of every interesting topic, that might well have been fatal.

Comment by steven0461 on My Dating Plan ala Geoffrey Miller · 2020-07-20T00:59:40.411Z · LW · GW
I think that negative low-level associations really matter if you're trying to be a mass movement and scale, like a political movement.

Many of the world's smartest, most competent, and most influential people are ideologues. This probably includes whoever ends up developing and controlling advanced technologies. It would be nice to be able to avoid such people dismissing our ideas out of hand. You may not find them impressive or expect them to make intellectual progress on rationality, but for such progress to matter, the ideas have to be taken seriously outside LW at some point. I guess I don't understand the case against caution in this area, so long as the cost is only having to avoid some peripheral topics instead of adopting or promoting false beliefs.

Comment by steven0461 on Cryonics without freezers: resurrection possibilities in a Big World · 2020-07-06T17:57:01.678Z · LW · GW

I updated downward somewhat on the sanity of our civilization, but not to an extremely low value or from a high value. That update justifies only a partial update on the sanity of the average human civilization (maybe the problem is specific to our history and culture), which justifies only a partial update on the sanity of the average civilization (maybe the problem is specific to humans), which justifies only a partial update on the sanity of outcomes (maybe achieving high sanity is really easy or hard). So all things considered (aside from your second paragraph) it doesn't seem like it justifies, say, doubling the amount of worry about these things.

Comment by steven0461 on [META] Building a rationalist communication system to avoid censorship · 2020-06-24T17:46:13.791Z · LW · GW
Maybe restrict viewing to people with enough less wrong karma.

This is much better than nothing, but it would be much better still for a trusted person to hand-pick people who have strongly demonstrated both the ability to avoid posting pointlessly disreputable material and the unwillingness to use such material in reputational attacks.

Comment by steven0461 on [META] Building a rationalist communication system to avoid censorship · 2020-06-24T17:35:31.586Z · LW · GW

I wonder what would happen if a forum had a GPT bot making half the posts, for plausible deniability. (It would probably make things worse. I'm not sure.)

Comment by steven0461 on steven0461's Shortform Feed · 2020-06-24T17:24:05.996Z · LW · GW

There's been some discussion of tradeoffs between a group's ability to think together and its safety from reputational attacks. Both of these seem pretty essential to me, so I wish we'd move in the direction of a third option: recognizing public discourse on fraught topics as unavoidably farcical as well as often useless, moving away from the social norm of acting as if a consideration exists if and only if there's a legible Post about it, building common knowledge of rationality and strategic caution among small groups, and in general becoming skilled at being esoteric without being dishonest or going crazy in ways that would have been kept in check by larger audiences. I think people underrate this approach because they understandably want to be thought gladiators flying truth as a flag. I'm more confident of the claim that we should frequently acknowledge the limits of public discourse than the other claims here.

Comment by steven0461 on Superexponential Historic Growth, by David Roodman · 2020-06-17T00:10:43.523Z · LW · GW

The main part I disagree with is the claim that resource shortages may halt or reverse growth at sub-Dyson-sphere scales. I don't know of any (post)human need that seems like it might require something other than matter, energy, and ingenuity to fulfill. There's a huge amount of matter and energy in the solar system and a huge amount of room to get more value out of any fixed amount.

(If "resource" is interpreted broadly enough to include "freedom from the side effects of unaligned superintelligence", then sure.)

Comment by steven0461 on Have epistemic conditions always been this bad? · 2020-03-02T21:22:41.536Z · LW · GW
Even in private, in today's environment I'd be afraid to talk about some of the object-level things because I can't be sure you're not a true believer in some of those issues and try to "cancel" me for my positions or even my uncertainties.

This seems like a problem we could mitigate with the right kinds of information exchange. E.g., I'd probably be willing to make a "no canceling anyone" promise depending on wording. Creating networks of trust around this is part of what I meant by "epistemic prepping" upthread.

Comment by steven0461 on Open & Welcome Thread - February 2020 · 2020-02-28T01:57:28.737Z · LW · GW

I don't know what the reasons are off the top of my head. I'm not saying the probability rise caused most of the stock market fall, just that it has to be taken into account as a nonzero part of why Wei won his 1 in 8 bet.

Comment by steven0461 on Open & Welcome Thread - February 2020 · 2020-02-28T00:37:34.958Z · LW · GW

If the market is genuinely this beatable, it seems important for the rationalist/EA/forecaster cluster to take advantage of future such opportunities in an organized way, even if it just means someone setting up a Facebook group or something.

(edit: I think the evidence, while impressive, is a little weaker than it seems on first glance, because my impression from Metaculus is the probability of the virus becoming widespread has gotten higher in recent days for reasons that look unrelated to your point about what the economic implications of a widespread virus would be.)

Comment by steven0461 on Have epistemic conditions always been this bad? · 2020-02-18T19:18:38.789Z · LW · GW

Probably it makes more sense to prepare for scenarios where ideological fanaticism is widespread but isn't wielding government power.

Comment by steven0461 on Have epistemic conditions always been this bad? · 2020-02-17T21:35:40.366Z · LW · GW

I think it makes sense to take an "epistemic prepper" perspective. What precautions could one take in advance to make sure that, if the discourse became dominated by militant flat earth fanatics, round earthers could still reason together, coordinate, and trust each other? What kinds of institutions would have made it easier for a core of sanity to survive through, say, 30s Germany or 60s China? For example, would it make sense to have an agreed-upon epistemic "fire alarm"?

Comment by steven0461 on Preliminary thoughts on moral weight · 2020-01-13T20:07:50.718Z · LW · GW

As usual, this makes me wish for UberFact or some other way of tracking opinion clusters.

Comment by steven0461 on Are "superforecasters" a real phenomenon? · 2020-01-11T19:16:19.068Z · LW · GW

From participating on Metaculus I certainly don't get the sense that there are people who make uncannily good predictions. If you compare the community prediction to the Metaculus prediction, it looks like there's a 0.14 difference in average log score, which I guess means a combination of the best predictors tends to put e^(0.14) or 1.15 times as much probability on the correct answer as the time-weighted community median. (The postdiction is better, but I guess subject to overfitting?) That's substantial, but presumably the combination of the best predictors is better than every individual predictor. The Metaculus prediction also seems to be doing a lot worse than the community prediction on recent questions, so I don't know what to make of that. I suspect that, while some people are obviously better at forecasting than others, the word "superforecasters" has no content outside of "the best forecasters" and is just there to make the field of research sound more exciting.
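
Spelling out the arithmetic (the 60% below is just a hypothetical community probability):

```python
import math

# A 0.14 gap in average log score corresponds to a multiplicative factor on the
# probability assigned to the correct answer:
gap = 0.14
print(math.exp(gap))  # ~1.15

# E.g., for a single binary question where the community puts 60% on the right
# answer, matching the gap means putting about 1.15 * 60% = 69% on it:
p_community = 0.60
p_better = p_community * math.exp(gap)
print(p_better, math.log(p_better) - math.log(p_community))  # ~0.69, 0.14
```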

Comment by steven0461 on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-06T20:56:37.176Z · LW · GW
Would your views on speaking truth to power change if the truth were 2x less expensive as you currently think it is? 10x? 100x?

Maybe not; probably; yes.

Followup question: have you considered performing an experiment to test whether the consequences of speech are as dire as you currently think? I think I have more data than you! (We probably mostly read the same blogs, but I've done field work.)

Most of the consequences I'm worried about are bad effects on the discourse. I don't know what experiment I'd do to figure those out. I agree you have more data than me, but you probably have 2x the personal data instead of 10x the personal data, and most relevant data is about other people because there are more of them. Personal consequences are more amenable to experiment than discourse consequences, but I already have lots of low-risk data here, and high-risk data would carry high risk and not be qualitatively more informative. (Doing an Experiment here doesn't teach you qualitatively different things than watching the experiments that the world constantly does.)

Can you be a little more specific? "Discredited" is a two-place function (discredited to whom).

Discredited to intellectual elites, who are not only imperfectly rational, but get their information via people who are imperfectly rational, who in turn etc.

"Speak the truth, even if your voice trembles" isn't a literal executable decision procedure—if you programmed your AI that way, it might get stabbed. But a culture that has "Speak the truth, even if your voice trembles" as a slogan might—just might be able to do science or better—to get the goddamned right answereven when the local analogue of the Pope doesn't like it.

It almost sounds like you're saying we should tell people they should always speak the truth even though it is not the case that people should always speak the truth, because telling people they should always speak the truth has good consequences. Hm!

I don't like the "speak the truth even if your voice trembles" formulation. It doesn't make it clear that the alternative to speaking the truth, instead of lying, is not speaking. It also suggests an ad hominem theory of why people aren't speaking (fear, presumably of personal consequences) that isn't always true. To me, this whole thing is about picking battles versus not picking battles rather than about truth versus falsehood. Even though if you pick your battles it means a non-random set of falsehoods remains uncorrected, picking battles is still pro-truth.

If we should judge the Platonic math by how it would be interpreted in practice, then we should also judge "speak the truth even if your voice trembles" by how it would be interpreted in practice. I'm worried the outcome would be people saying "since we talk rationally about the Emperor here, let's admit that he's missing one shoe", regardless of whether the emperor is missing one shoe, is fully dressed, or has no clothes at all. All things equal, being less wrong is good, but sometimes being less wrong means being more confident that you're not wrong at all, even though you are wrong at all.

(By the way, I think of my position here as having a lower burden of proof than yours, because the underlying issue is not just who is making the right tradeoffs, but whether making different tradeoffs than you is a good reason to give up on a community altogether.)

Comment by steven0461 on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-05T18:20:21.593Z · LW · GW

Would your views on speaking truth to power change if the truth were 2x as offensive as you currently think it is? 10x? 100x? (If so, are you sure that's not why you don't think the truth is more offensive than you currently think it is?) Immaterial souls are stabbed all the time in the sense that their opinions are discredited.

Comment by steven0461 on Since figuring out human values is hard, what about, say, monkey values? · 2020-01-01T23:25:25.465Z · LW · GW

Given that animals don't act like expected utility maximizers, what do you mean when you talk about their values? For humans, you can ground a definition of "true values" in philosophical reflection (and reflection about how that reflection relates to their true values, and so on), but non-human animals can't do philosophy.