steven0461's Shortform Feed 2019-06-30T02:42:13.858Z
Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet 2018-07-11T02:59:12.278Z
Meetup : San Jose Meetup: Park Day (X) 2016-11-28T02:46:20.651Z
Meetup : San Jose Meetup: Park Day (IX), 3pm 2016-11-01T15:40:19.623Z
Meetup : San Jose Meetup: Park Day (VIII) 2016-09-06T00:47:23.680Z
Meetup : San Jose Meetup: Park Day (VII) 2016-08-15T01:05:00.237Z
Meetup : San Jose Meetup: Park Day (VI) 2016-07-25T02:11:44.237Z
Meetup : San Jose Meetup: Park Day (V) 2016-07-04T18:38:01.992Z
Meetup : San Jose Meetup: Park Day (IV) 2016-06-15T20:29:04.853Z
Meetup : San Jose Meetup: Park Day (III) 2016-05-09T20:10:55.447Z
Meetup : San Jose Meetup: Park Day (II) 2016-04-20T06:23:28.685Z
Meetup : San Jose Meetup: Park Day 2016-03-30T04:39:09.532Z
Meetup : Amsterdam 2013-11-12T09:12:31.710Z
Bayesian Adjustment Does Not Defeat Existential Risk Charity 2013-03-17T08:50:02.096Z
Meetup : Chicago Meetup 2011-09-28T04:29:35.777Z
Meetup : Chicago Meetup 2011-07-07T15:28:57.969Z
PhilPapers survey results now include correlations 2010-11-09T19:15:47.251Z
Chicago Meetup 11/14 2010-11-08T23:30:49.015Z
A Fundamental Question of Group Rationality 2010-10-13T20:32:08.085Z
Chicago/Madison Meetup 2010-07-15T23:30:15.576Z
Swimming in Reasons 2010-04-10T01:24:27.787Z
Disambiguating Doom 2010-03-29T18:14:12.075Z
Taking Occam Seriously 2009-05-29T17:31:52.268Z
Open Thread: May 2009 2009-05-01T16:16:35.156Z
Eliezer Yudkowsky Facts 2009-03-22T20:17:21.220Z
The Wrath of Kahneman 2009-03-09T12:52:41.695Z
Lies and Secrets 2009-03-08T14:43:22.152Z


Comment by steven0461 on steven0461's Shortform Feed · 2021-07-02T19:32:13.099Z · LW · GW

It's complicated. Searching the article for "structural uncertainty" gives 10 results about ways they've tried to deal with it. I'm not super confident that they've dealt with it adequately.

Comment by steven0461 on steven0461's Shortform Feed · 2021-07-02T19:25:04.992Z · LW · GW

There's a meme in EA that climate change is particularly bad because of a nontrivial probability that sensitivity to doubled CO2 is in the extreme upper tail. As far as I can tell, that's mostly not real. This paper seems like a very thorough Bayesian assessment that gives 4.7 K as a 95% upper bound, with values for temperature rise by 2089 quite tightly constrained (Fig 23). I'd guess this is an overestimate based on conservative choices represented by Figs 11, 14, and 18. The 5.7 K 95% upper bound after robustness tests comes from changing the joint prior over feedbacks to create a uniform prior on sensitivity, which as far as I can tell is unjustified. Maybe someone who's better at rhetoric than me should figure out how to frame all this in a way that predictably doesn't make people flip out. I thought I should post it, though.

For forecasting purposes, I'd recommend this and this as well, relevant to the amount of emissions to expect from nature and humans respectively.

Comment by steven0461 on steven0461's Shortform Feed · 2021-06-18T21:56:01.922Z · LW · GW

Thinking out loud about some arguments about AI takeoff continuity:

If a discontinuous takeoff is more likely to be local to a particular agent or closely related set of agents with particular goals, and a continuous takeoff is more likely to be global, that seems like it incentivizes the first agent capable of creating a takeoff to make sure that takeoff is discontinuous, so that it can reap the benefits of the takeoff being local to that agent. This seems like an argument for expecting a discontinuous takeoff, and an important difference from other allegedly analogous technologies.

I have some trouble understanding the "before there are strongly self-improving AIs there will be moderately self-improving AIs" argument for continuity. Is there any reason to think the moderate self-improvement ability won't be exactly what leads to the strong self-improvement ability? Before there's an avalanche, there's probably a smaller avalanche, but maybe the small avalanche is simply identical to the early part of the large avalanche.

Where have these points been discussed in depth?

Comment by steven0461 on steven0461's Shortform Feed · 2021-06-09T19:33:19.146Z · LW · GW

Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth (William D. Nordhaus)

Has anyone looked at this? Nordhaus claims current trends suggest the singularity is not near, though I wouldn't expect current trends outside AI to be very informative. He does seem to acknowledge x-risk in section Xf, which I don't think I've seen from other top economists.

Comment by steven0461 on Survey on AI existential risk scenarios · 2021-06-08T21:06:23.158Z · LW · GW

Define an existential catastrophe due to AI as an existential catastrophe that could have been avoided had humanity's development, deployment or governance of AI been otherwise. This includes cases where:

AI directly causes the catastrophe.

AI is a significant risk factor in the catastrophe, such that no catastrophe would have occurred without the involvement of AI.

Humanity survives but its suboptimal use of AI means that we fall permanently and drastically short of our full potential.

This technically seems to include cases like: AGI is not developed by 2050, and a nuclear war in the year 2050 causes an existential catastrophe, but if an aligned AGI had been developed by then, it would have prevented the nuclear war. I don't know if respondents interpreted it that way.

Comment by steven0461 on steven0461's Shortform Feed · 2021-06-02T00:56:07.899Z · LW · GW

Sorry, I don't think I understand what you mean. There can still be a process that gets the same answer as the long reflection, but with e.g. less suffering or waste of resources, right?

Comment by steven0461 on "Existential risk from AI" survey results · 2021-06-02T00:34:21.311Z · LW · GW

A few of the answers seem really high. I wonder if anyone interpreted the questions as asking for P(loss of value | insufficient alignment research) and P(loss of value | misalignment) despite Note B.

Comment by steven0461 on steven0461's Shortform Feed · 2021-05-26T21:19:13.515Z · LW · GW

I'd like to register skepticism of the idea of a "long reflection". I'd guess any intelligence that knew how to stabilize the world with respect to processes that affect humanity's reflection about its values in undesirable ways (e.g. existential disasters), without also stabilizing it with respect to processes that affect it in desirable ways, would already understand the value extrapolation problem well enough to take a lot of shortcuts in calculating the final answer compared to doing the experiment in real life. (You might call such a calculation a "Hard Reflection".)

Comment by steven0461 on A Brief Review of Current and Near-Future Methods of Genetic Engineering · 2021-04-13T20:39:45.581Z · LW · GW

Great post, very informative

Step 7 seems like it's already possible given that most research into tissue engineering assumes embryonic stem cells or some other pluripotent stem cells as a starting point.

Typo for "Step 6"?

Comment by steven0461 on Anna and Oliver discuss Children and X-Risk · 2021-02-28T18:51:17.469Z · LW · GW

guesses:

1. In most cases, children on net detract from other major projects for common-sense time/attention/optionality management reasons (as well as because they sometimes commit people to a world view of relatively slow change).
2. Whether to have children isn't each other's business, and pressure against doing normal human things like this is net socially harmful (conservatives in particular are alienated by a culture of childlessness, though maybe that's net strategically useful).
3. People conflate 2 with not-1 on an emotional level and feel 1 is false because 2 is true.

Comment by steven0461 on Poll: Which variables are most strategically relevant? · 2021-01-26T22:09:43.963Z · LW · GW

I would add "will relevant people expect AI to have extreme benefits, such as a significant percentage point reduction in other existential risk or a technological solution to aging"

Comment by steven0461 on Information Charts · 2020-11-13T19:32:35.126Z · LW · GW

I agree, of course, that a bad prediction can perform better than a good prediction by luck. That means if you were already sufficiently sure your prediction was good, you can continue to believe it was good after it performs badly. But your belief that the prediction was good then comes from your model of the sources of the competing predictions prior to observing the result (e.g. "PredictIt probably only predicted a higher Trump probability because Trump Trump Trump") instead of from the result itself. The result itself still reflects badly on your prediction. Your prediction may not have been worse, but it performed worse, and that is (perhaps insufficient) Bayesian evidence that it actually was worse. If Nate Silver is claiming something like "sure, our prediction of voter % performed badly compared to PredictIt's implicit prediction of voter %, but we already strongly believed it was good, and therefore still believe it was good, though with less confidence", then I'm fine with that. But that wasn't my impression.


Deviating from the naive view implicitly assumes that confidently predicting a narrow win was too hard to be plausible

I agree I'm making an assumption like "the difference in probability between a 6.5% average poll error and a 5.5% average poll error isn't huge", but I can't conceive of any reason to expect a sudden cliff there instead of a smooth bell curve.

Comment by steven0461 on Did anybody calculate the Briers score for per-state election forecasts? · 2020-11-11T19:19:35.400Z · LW · GW

Yes, that looks like a crux. I guess I don't see the need to reason about calibration instead of directly about expected log score.
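
To sketch what scoring directly by log score looks like (the forecast and outcome numbers below are illustrative placeholders, not actual 2020 per-state figures):

```python
import math

# Log score: the log of the probability a forecast assigned to what
# actually happened. Summed over states, the difference between two
# models' totals is a log Bayes factor between them.
forecasts = [0.90, 0.70, 0.35]  # hypothetical P(Biden win) in three states
outcomes = [1, 1, 0]            # 1 = Biden won that state

def log_score(p, won):
    # log-probability assigned to the realized outcome
    return math.log(p if won else 1 - p)

total = sum(log_score(p, o) for p, o in zip(forecasts, outcomes))
```

No talk of calibration buckets is needed; the score directly penalizes probability mass placed on things that didn't happen.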

Comment by steven0461 on Scoring 2020 U.S. Presidential Election Predictions · 2020-11-11T19:14:33.026Z · LW · GW

Most closely contested states went to Biden, so vote share is more in Trump's favor than you'd expect based on knowing only who won each state, and PredictIt generally predicted more votes for Trump, so I think PredictIt comes out a lot better than 538 and the Economist.

Comment by steven0461 on Did anybody calculate the Briers score for per-state election forecasts? · 2020-11-10T21:31:58.943Z · LW · GW

Data points come in one by one, so it's only natural to ask how each data point affects our estimates of how well different models are doing, separately from how much we trust different models in advance. A lot of the arguments that were made by people who disagreed with Silver were Trump-specific, anyway, making the long-term record less relevant.

It's like taking one of Scott Alexander's 90% bets that went wrong and asking, "do you admit that, if we only consider this particular bet, you would have done better assigning 60% instead?"

If we were observing the results of his bets one by one, and Scott said it was 90% likely and a lot of other people said it was 60% likely, and then it didn't happen, I would totally be happy to say that Scott's model took a hit.

Comment by steven0461 on Did anybody calculate the Briers score for per-state election forecasts? · 2020-11-10T20:57:21.070Z · LW · GW

I agree that, if the only two things you consider are (a) the probabilities for a Biden win in 2020, 65% and 89%, and (b) the margin of the win in 2020, then betting markets are a clear winner.

My impression from Silver's internet writings is that he hasn't admitted this, but maybe I'm wrong; I haven't seen him admit it, and his claim that "we did a good job" suggests he's unwilling to. Betting markets are also the clear winner if you look at Silver's predictions about how wrong the polls would be, which was always the main point of contention. The line he's taking is "we said the polls might be this wrong and that Biden could still win", but obviously it's worse to say that the polls might be that wrong than to say that the polls probably would be that wrong (in that direction), as the markets implicitly did.

Comment by steven0461 on Did anybody calculate the Briers score for per-state election forecasts? · 2020-11-10T19:33:58.347Z · LW · GW

Looking at states still throws away information. Trump lost by slightly over a 0.6% margin in the states that he'd have needed to win. The polls were off by slightly under a 6% margin. If those numbers are correct, I don't see how your conclusion about the relative predictive power of 538 and betting markets can be very different from what your conclusion would be if Trump had narrowly won. Obviously if something almost happens, that's normally going to favor a model that assigned 35% to it happening over a model that assigned 10% to it happening. Both Nate Silver and Metaculus users seem to me to be in denial about this.
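
One way to see why the near miss matters even though Biden won: score the observed margin itself, rather than the binary outcome, under each model's margin distribution. The distributions below are made up for illustration, not the actual 538 or PredictIt forecasts; only the 0.6-point margin comes from the discussion above.

```python
import math

# The binary outcome ("Biden won") discards the margin. Evaluating each
# model's density at the observed margin keeps the "almost happened"
# information.
def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

observed_margin = 0.6  # Biden's margin (pp) in the tipping-point states
model_a = normal_pdf(observed_margin, mean=1.0, sd=3.0)  # expected a close race
model_b = normal_pdf(observed_margin, mean=5.0, sd=3.0)  # expected a comfortable win

bayes_factor = model_a / model_b  # > 1: the near miss favors model A
```

Even though both models "called" the winner, the model that expected a nail-biter comes out ahead once the margin is scored.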

Comment by steven0461 on A Parable of Four Riders · 2020-11-01T18:52:24.342Z · LW · GW

That's their mistake in the case of the fools, but is the claim that they're also making it in the case of the wise men?

Comment by steven0461 on MikkW's Shortform · 2020-11-01T18:50:57.988Z · LW · GW

I don't think there's any shortcut. We'll have to first become rational and honest, and then demonstrate that we're rational and honest by talking about many different uncertainties and disagreements in a rational and honest manner.

Comment by steven0461 on A Parable of Four Riders · 2020-10-31T17:28:16.711Z · LW · GW

Is the claim that the superiors are making the same mistake in judging the wise men that they're making in judging the fools?

Comment by steven0461 on What risks concern you which don't seem to have been seriously considered by the community? · 2020-10-30T21:22:45.713Z · LW · GW

On the other hand, to my knowledge, we haven't thought of any important new technological risks in the past few decades, which is evidence against many such risks existing.

Comment by steven0461 on What risks concern you which don't seem to have been seriously considered by the community? · 2020-10-28T21:14:45.945Z · LW · GW

It's been a while since I looked into it, but my impression was something like "general relativity allows it if you use some sort of exotic matter in a way that isn't clearly possible but isn't clearly crazy". I could imagine that intelligent agents could create such conditions even if nature can't. The Internet Encyclopedia of Philosophy has a decent overview of time travel in general relativity.

Comment by steven0461 on What risks concern you which don't seem to have been seriously considered by the community? · 2020-10-28T19:08:20.310Z · LW · GW

Reducing long-term risks from malevolent actors is relevant here.

Comment by steven0461 on What risks concern you which don't seem to have been seriously considered by the community? · 2020-10-28T19:03:10.950Z · LW · GW

Time travel. As I understand it, you don't need to hugely stretch general relativity for closed timelike curves to become possible. If adding a closed timelike curve to the universe adds an extra constraint on the initial conditions of the universe, and makes most possibilities inconsistent, does that morally amount to probably destroying the world? Does it create weird hyper-optimization toward consistency?

I'm pretty sure we can leave this problem to future beings with extremely advanced technology, and more likely than not there are physical reasons why it's not an issue, but I think about it from time to time.

Comment by steven0461 on steven0461's Shortform Feed · 2020-10-11T00:02:27.521Z · LW · GW

Has anyone tried to make complex arguments in hypertext form using a tool like Twine? It seems like a way to avoid the usual mess of footnotes and disclaimers.

Comment by steven0461 on Forecasting Thread: Existential Risk · 2020-10-08T17:06:56.287Z · LW · GW

Mostly I only start paying attention to people's opinions on these things once they've demonstrated that they can reason seriously about weird futures, and I don't think I know of any person who's demonstrated this who thinks risk is under, say, 10%. (edit: though I wonder if Robin Hanson counts)

Comment by steven0461 on Open Communication in the Days of Malicious Online Actors · 2020-10-08T14:39:20.571Z · LW · GW

I don't see how the usual rationale for not negotiating with terrorists applies to the food critics case. It's not like your readers are threatening food critics as a punishment to you, with the intent to get you to stop writing. Becoming the kind of agent that stops writing in response to such behavior doesn't create any additional incentives for others to become the kind of agent that is provoked by your writing.

Similarly, it seems to me "don't negotiate with terrorists" doesn't apply in cases where your opponent is harming you, but 1) is non-strategic and 2) was not modified to become non-strategic by an agent with the aim of causing you to give in to them because they're non-strategic. (In cases where you can tell the difference and others know you can tell the difference.)

Comment by steven0461 on Can we hold intellectuals to similar public standards as athletes? · 2020-10-08T00:38:29.540Z · LW · GW

Even someone who scores terribly on most objective metrics because of e.g. miscalibration can still be insightful if you know how and when to take their claims with a grain of salt. I think making calls on who is a good thinker is always going to require some good judgment, though not as much good judgment as it would take to form an opinion on the issues directly. My sense is there's more returns to be had from aggregating and doing AI/statistics on such judgment calls (and visualizing the results) than from trying to replace the judgment calls with objective metrics.

Comment by steven0461 on Can we hold intellectuals to similar public standards as athletes? · 2020-10-08T00:28:59.533Z · LW · GW

Relatedly, the term "superforecasting" is already politicized to death in the UK.

Comment by steven0461 on Forecasting Thread: Existential Risk · 2020-10-07T23:58:42.376Z · LW · GW

Yes, maybe I should have used 40% instead of 50%. I've seen Paul Christiano say 10-20% elsewhere. Shah and Ord are part of whom I meant by "other researchers". I'm not sure which of these estimates are conditional on superintelligence being invented. To the extent that they're not, and to the extent that people think superintelligence may not be invented, that means they understate the conditional probability that I'm using here. I think lowish estimates of disaster risks might be more visible than high estimates because of something like social desirability, but who knows.

Comment by steven0461 on Rationality and Climate Change · 2020-10-07T17:29:08.834Z · LW · GW

By my understanding, even if we stopped all of our carbon output immediately, there'd still be a devastating 2C increase in the average temperature of the earth.

I don't think this is true:

According to an analysis featured in the recent IPCC special report on 1.5C, reducing all human emissions of greenhouse gases and aerosols to zero immediately would result in a modest short-term bump in global temperatures of around 0.15C as Earth-cooling aerosols disappear, followed by a decline. Around 20 years after emissions went to zero, global temperatures would fall back down below today’s levels and then cool by around 0.25C by 2100.

I.e., if we're at +1.2C today, the maximum would be +1.35C.

Comment by steven0461 on Rationality and Climate Change · 2020-10-07T17:16:06.497Z · LW · GW

For most people, climate change is pretty much the only world-scale issue they've heard of. That makes it very important (in relative terms)

Suppose climate change were like air pollution: greenhouse gas emissions in New York made it hotter in New York but not in Shanghai, and greenhouse gas emissions in Shanghai made it hotter in Shanghai but not in New York. I don't see how that would make it less important.

Comment by steven0461 on Rationality and Climate Change · 2020-10-06T17:27:35.304Z · LW · GW

I mostly agree with Vladimir's comments. My wording may have been over-dramatic. I've been fascinated with these topics and have thought and read a lot about them, and my conclusions have been mostly in the direction of not feeling as much concern, but I think if a narrative like that became legibly a "rationalist meme" like how the many worlds interpretation of quantum mechanics is a "rationalist meme", it could be strategically quite harmful, and at any rate I don't care about it as a subject of activism. On the other hand, I don't want people to be wrong. I've been going back and forth on whether to write a Megapost, but I also have the thing where writing multiple sentences is like pulling teeth; let me know if you have a solution to that one.

Comment by steven0461 on Rationality and Climate Change · 2020-10-06T02:04:10.587Z · LW · GW

The link says a lot of things, but the basic claim that greenhouse forcing is logarithmic as a function of concentration is as far as I know completely uncontroversial.
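
For reference, the standard simplified expression (Myhre et al. 1998) is ΔF = 5.35 ln(C/C₀) W/m², which is what "logarithmic forcing" refers to:

```python
import math

# CO2 radiative forcing as a function of concentration relative to a
# baseline. Because the relationship is logarithmic, each successive
# doubling of concentration adds the same forcing.
def co2_forcing(c, c0):
    return 5.35 * math.log(c / c0)  # W/m^2

f_280_to_560 = co2_forcing(560, 280)    # one doubling from preindustrial
f_560_to_1120 = co2_forcing(1120, 560)  # the next doubling
# Both are ~3.7 W/m^2.
```

This is why warming per tonne emitted falls as concentrations rise, holding feedbacks fixed.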

Comment by steven0461 on Rationality and Climate Change · 2020-10-06T01:48:49.810Z · LW · GW

I suspect this is one of those cases where the truth is (moderately) outside the Overton window and forcing people to spell this out has the potential to cause great harm to the rationalist community.

Comment by steven0461 on Babble challenge: 50 ways of sending something to the moon · 2020-10-01T19:06:41.133Z · LW · GW
  1. rocket
  2. catapult
  3. cannon
  4. nuclear propelled spaceship
  5. wait for civilization to advance a lot and then just mail the item
  6. throw the item up into the sky and be okay with failure
  7. note that the earth is the sun's moon, you're on the moon now
  8. scan, destroy, rebuild on the moon
  9. put a note on the item promising a bounty to the first person who takes it to the moon
  10. extremely long tube
  11. time travel to a time when the moon was where you are now
  12. gravity manipulation
  13. climb to the top of mount everest and just keep going
  14. hit the item very hard
  15. genetic engineering to become taller until you reach the moon
  16. teleport spell
  17. have the item split in two in opposite directions, repeat until 1/2^n of the thing reaches the moon, repeat 2^n times
  18. survive the death of the sun and wait for the moon to fall to earth
  19. uplift the item to sentience and motivate it to take itself to the moon
  20. ask the lords of the matrix to edit the item's location property
  21. wait for a particularly powerful volcano
  22. magical rope
  23. wait for a full moon, full enough to reach the earth
  24. drag the moon to earth with a rope
  25. wormhole
  26. kill the item, then use necromancy to resurrect it on the moon
  27. telekinesis
  28. attach the item to the fabric of spacetime and move the earth-moon system down until the moon is where you started
  29. start the problem in a state where the item is already on the moon
  30. giant longbow
  31. giant blowgun
  32. uplift the moon to sentience and motivate it to come get the item
  33. shrink the earth and grow the moon until the moon is the earth and the earth is the moon
  34. ask some aliens to abduct the item
  35. attach the item to a long stick, then keep adding new stick parts to the bottom of the stick
  36. meditate until all is one, including the moon and the earth
  37. laser sail
  38. wait for the item to quantum tunnel to the moon by coincidence
  39. wait for the big crunch, when everything will be in the same place, including the item and the moon
  40. make the item very sturdy and keep shooting it with a gun
  41. find some moon dust that astronauts took to earth and put the item on the moon dust
  42. attain godhood and use omnipotence somehow
  43. emit greenhouse gases until sea level rise takes the item to the moon
  44. very tall elevator
  45. very tall escalator
  46. delegate to all other humans so everyone only has to transport the item a few centimeters
  47. attach the item to a balloon filled with a gas lighter than vacuum
  48. gender reveal party
  49. temporarily move all of earth's mountains to the same location
  50. artificial geyser
Comment by steven0461 on Forecasting Thread: Existential Risk · 2020-09-23T17:40:26.470Z · LW · GW

I'd be interested to hear what size of delay you used, and what your reasoning for that was.

I didn't think very hard about it and just eyeballed the graph. Probably a majority of "negligible on this scale" and a minority of "years or (less likely) decades" if we've defined AGI too loosely and the first AGI isn't a huge deal, or things go slowly for some other reason.

Was your main input into this parameter your perceptions of what other people would believe about this parameter?

Yes, but only because those other people seem to make reasonable arguments, so that's kind of like believing it because of the arguments instead of the people. Some vague model of the world is probably also involved, like "avoiding AI x-risk seems like a really hard problem but it's probably doable with enough effort and increasingly many people are taking it very seriously".

If so, I'd be interested to hear whose beliefs you perceive yourself to be deferring to here.

MIRI people and Wei Dai for pessimism (though I'm not sure it's their view that it's worse than 50/50), Paul Christiano and other researchers for optimism. 

Comment by steven0461 on Forecasting Thread: Existential Risk · 2020-09-22T17:33:09.240Z · LW · GW

For my prediction (which I forgot to save as a linkable snapshot before refreshing, oops) roughly what I did was take my distribution for AGI timing (which ended up quite close to the thread average), add an uncertain but probably short delay for a major x-risk factor (probably superintelligence) to appear as a result, weight it by the probability that it turns out badly instead of well (averaging to about 50% because of what seems like a wide range of opinions among reasonable well-informed people, but decreasing over time to represent an increasing chance that we'll know what we're doing), and assume that non-AI risks are pretty unlikely to be existential and don't affect the final picture very much. To an extent, AGI can stand in for highly advanced technology in general.

If I start with a prior where the 2030s and the 2090s are equally likely, it feels kind of wrong to say I have the 7-to-1 evidence for the former that I'd need for this distribution. On the other hand, if I made the same argument for the 2190s and the 2290s, I'd quickly end up with an unreasonable distribution. So I don't know.
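
The procedure can be sketched as a toy Monte Carlo. All the parameters here are made-up stand-ins, not the actual numbers behind my distribution:

```python
import random

# Sample an AGI date, add a short uncertain delay before a major x-risk
# factor appears, then count the sample as a catastrophe with a
# probability that starts near 50% and declines over time (representing
# an increasing chance that we'll know what we're doing).
def sample_xrisk_year(rng):
    agi_year = 2020 + rng.lognormvariate(3.4, 0.6)  # hypothetical timing distribution
    delay = rng.expovariate(1 / 3.0)                # short delay, mean ~3 years
    t = agi_year + delay
    p_bad = max(0.2, 0.5 - 0.003 * (t - 2020))      # declining chance it goes badly
    return t if rng.random() < p_bad else None      # None = no catastrophe

rng = random.Random(0)
samples = [sample_xrisk_year(rng) for _ in range(10_000)]
p_catastrophe = sum(s is not None for s in samples) / len(samples)
```

The non-None samples then form the distribution over catastrophe dates, and `p_catastrophe` is the overall probability mass assigned to doom.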

Comment by steven0461 on Artificial Intelligence: A Modern Approach (4th edition) on the Alignment Problem · 2020-09-18T20:02:34.886Z · LW · GW

some predictable counterpoints: maybe we won because we were cautious; we could have won harder; many relevant thinkers still pooh-pooh the problem; it's not just the basic problem statement that's important, but potentially many other ideas that aren't yet popular; picking battles isn't lying; arguing about sensitive subjects is fun and I don't think people are very tempted to find excuses to avoid it; there are other things that are potentially the most important in the world that could suffer from bad optics; I'm not against systematically truthseeking discussions of sensitive subjects, just if it's in public in a way that's associated with the rationalism brand

Comment by steven0461 on Forecasting Thread: AI Timelines · 2020-08-24T05:45:21.411Z · LW · GW

Here's my prediction:

To the extent that it differs from others' predictions, probably the most important factor is that I think even if AGI is hard, there are a number of ways in which human civilization could become capable of doing almost arbitrarily hard things, like through human intelligence enhancement or sufficiently transformative narrow AI. I think that means the question is less about how hard AGI is and more about general futurism than most people think. It's moderately hard for me to imagine how business as usual could go on for the rest of the century, but who knows.

Comment by steven0461 on What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse? · 2020-08-10T18:59:03.205Z · LW · GW

"Acausalism" works, but might be confused with the idea that acausal dependence matters at all, or with other philosophical doctrines that deny causality in some sense.

I'm not sure whether being located in a place is a different thing from the place subjunctively depending on your behavior.

Some more ideas: "outofreachism" (closest to "longtermism"), "extrauniversalism", "subjunctive dependentism" (hardest to strawman), "elsewherism", "spooky axiology at a distance"

Comment by steven0461 on My Dating Plan ala Geoffrey Miller · 2020-07-21T21:01:13.632Z · LW · GW

I don't think anyone understands the phrase "rationalist community" as implying a claim that its members don't sometimes allow practical considerations to affect which topics they remain silent on. I don't advocate that people leave out good points merely for being inconvenient to the case they're making, optimizing for the audience to believe some claim regardless of the truth of that claim, as suggested by the prosecutor analogy. I advocate that people leave out good points for being relatively unimportant and predictably causing (part of) the audience to be harmfully irrational. I.e., if you saw someone else than the defendant commit the murder, then say that, but don't start talking about how ugly the judge's children are even if you think the ugliness of the judge's children slightly helped inspire the real murderer. We can disagree about which discussions are more like talking about whether you saw someone else commit the murder and which discussions are more like talking about how ugly the judge's children are.

Comment by steven0461 on My Dating Plan ala Geoffrey Miller · 2020-07-21T17:02:50.284Z · LW · GW

I think of my team as being "Team Shared Maps That Reflect The Territory But With a Few Blank Spots, Subject to Cautious Private Discussion, Where Depicting the Territory Would Have Caused the Maps to be Burned". I don't think calling it "Team Seek Power For The Greater Good" is a fair characterization both because the Team is scrupulous not to draw fake stuff on the map and because the Team does not seek power for itself but rather seeks for it to be possible for true ideas to have influence regardless of what persons are associated with the true ideas.

Comment by steven0461 on My Dating Plan ala Geoffrey Miller · 2020-07-20T16:25:18.497Z · LW · GW

As I see it, we've had this success partly because many of us have been scrupulous about not being needlessly offensive. (Bostrom is a good example here.) The rationalist brand is already weak (e.g. search Twitter for relevant terms), and if LessWrong had actually tried to have forthright discussions of every interesting topic, that might well have been fatal.

Comment by steven0461 on My Dating Plan ala Geoffrey Miller · 2020-07-20T00:59:40.411Z · LW · GW
I think that negative low-level associations really matter if you're trying to be a mass movement and scale, like a political movement.

Many of the world's smartest, most competent, and most influential people are ideologues. This probably includes whoever ends up developing and controlling advanced technologies. It would be nice to be able to avoid such people dismissing our ideas out of hand. You may not find them impressive or expect them to make intellectual progress on rationality, but for such progress to matter, the ideas have to be taken seriously outside LW at some point. I guess I don't understand the case against caution in this area, so long as the cost is only having to avoid some peripheral topics instead of adopting or promoting false beliefs.

Comment by steven0461 on Cryonics without freezers: resurrection possibilities in a Big World · 2020-07-06T17:57:01.678Z · LW · GW

I updated downward somewhat on the sanity of our civilization, but not to an extremely low value or from a high value. That update justifies only a partial update on the sanity of the average human civilization (maybe the problem is specific to our history and culture), which justifies only a partial update on the sanity of the average civilization (maybe the problem is specific to humans), which justifies only a partial update on the sanity of outcomes (maybe achieving high sanity is really easy or hard). So all things considered (aside from your second paragraph) it doesn't seem like it justifies, say, doubling the amount of worry about these things.

Comment by steven0461 on [META] Building a rationalist communication system to avoid censorship · 2020-06-24T17:46:13.791Z · LW · GW
Maybe restrict viewing to people with enough less wrong karma.

This is much better than nothing, but it would be much better still for a trusted person to hand-pick people who have strongly demonstrated both the ability to avoid posting pointlessly disreputable material and the unwillingness to use such material in reputational attacks.

Comment by steven0461 on [META] Building a rationalist communication system to avoid censorship · 2020-06-24T17:35:31.586Z · LW · GW

I wonder what would happen if a forum had a GPT bot making half the posts, for plausible deniability. (It would probably make things worse. I'm not sure.)

Comment by steven0461 on steven0461's Shortform Feed · 2020-06-24T17:24:05.996Z · LW · GW

There's been some discussion of tradeoffs between a group's ability to think together and its safety from reputational attacks. Both of these seem pretty essential to me, so I wish we'd move in the direction of a third option: recognizing public discourse on fraught topics as unavoidably farcical as well as often useless, moving away from the social norm of acting as if a consideration exists if and only if there's a legible Post about it, building common knowledge of rationality and strategic caution among small groups, and in general becoming skilled at being esoteric without being dishonest or going crazy in ways that would have been kept in check by larger audiences. I think people underrate this approach because they understandably want to be thought gladiators flying truth as a flag. I'm more confident of the claim that we should frequently acknowledge the limits of public discourse than the other claims here.

Comment by steven0461 on Superexponential Historic Growth, by David Roodman · 2020-06-17T00:10:43.523Z · LW · GW

The main part I disagree with is the claim that resource shortages may halt or reverse growth at sub-Dyson-sphere scales. I don't know of any (post)human need that seems like it might require something else than matter, energy, and ingenuity to fulfill. There's a huge amount of matter and energy in the solar system and a huge amount of room to get more value out of any fixed amount.

(If "resource" is interpreted broadly enough to include "freedom from the side effects of unaligned superintelligence", then sure.)