Posts

Announcing LessWrong 3.0 – Now in VR! (Launch party 6:30PM PST) 2020-04-01T08:00:15.199Z · score: 89 (26 votes)
Small Comment on Organisational Disclaimers 2020-03-29T17:07:48.339Z · score: 28 (12 votes)
[Update: New URL] Today's Online Meetup: We're Using Mozilla Hubs 2020-03-29T04:00:18.228Z · score: 41 (9 votes)
March 25: Daily Coronavirus Updates 2020-03-27T04:32:18.530Z · score: 11 (2 votes)
Hanson vs Mowshowitz LiveStream Debate: "Should we expose the youth to coronavirus?" (Mar 29th) 2020-03-26T23:46:08.932Z · score: 43 (12 votes)
March 24th: Daily Coronavirus Link Updates 2020-03-26T02:22:35.214Z · score: 9 (1 votes)
Announcement: LessWrong Coronavirus Links Database 2.0 2020-03-24T22:07:29.162Z · score: 28 (6 votes)
How to Contribute to the Coronavirus Response on LessWrong 2020-03-24T22:04:30.956Z · score: 38 (9 votes)
Against Dog Ownership 2020-03-23T09:17:41.438Z · score: 42 (27 votes)
March 21st: Daily Coronavirus Links 2020-03-23T00:43:29.913Z · score: 10 (2 votes)
March 19th: Daily Coronavirus Links 2020-03-21T00:00:54.173Z · score: 19 (4 votes)
March 18th: Daily Coronavirus Links 2020-03-19T22:20:27.217Z · score: 13 (4 votes)
March 17th: Daily Coronavirus Links 2020-03-18T20:55:45.372Z · score: 12 (3 votes)
March 16th: Daily Coronavirus Links 2020-03-18T00:00:33.273Z · score: 15 (2 votes)
LessWrong Coronavirus Link Database 2020-03-13T23:39:32.544Z · score: 75 (17 votes)
Thoughts on LessWrong's Infohazard Policies 2020-03-09T07:44:57.949Z · score: 47 (15 votes)
How to fly safely right now? 2020-03-03T19:35:47.434Z · score: 30 (6 votes)
"What Progress?" Gwern's 10 year retrospective 2020-03-02T06:55:09.033Z · score: 51 (13 votes)
My Updating Thoughts on AI policy 2020-03-01T07:06:11.577Z · score: 22 (10 votes)
Coronavirus: Justified Practical Advice Thread 2020-02-28T06:43:41.139Z · score: 231 (83 votes)
If brains are computers, what kind of computers are they? (Dennett transcript) 2020-01-30T05:07:00.345Z · score: 38 (18 votes)
2018 Review: Voting Results! 2020-01-24T02:00:34.656Z · score: 128 (32 votes)
10 posts I like in the 2018 Review 2020-01-11T02:23:09.184Z · score: 34 (8 votes)
Voting Phase of 2018 LW Review 2020-01-08T03:35:27.204Z · score: 58 (13 votes)
(Feedback Request) Quadratic voting for the 2018 Review 2019-12-20T22:59:07.178Z · score: 37 (11 votes)
[Review] Meta-Honesty (Ben Pace, Dec 2019) 2019-12-10T00:37:43.561Z · score: 30 (9 votes)
[Review] On the Chatham House Rule (Ben Pace, Dec 2019) 2019-12-10T00:24:57.206Z · score: 43 (13 votes)
The Review Phase 2019-12-09T00:54:28.514Z · score: 58 (16 votes)
The Lesson To Unlearn 2019-12-08T00:50:47.882Z · score: 39 (12 votes)
Is the rate of scientific progress slowing down? (by Tyler Cowen and Ben Southwood) 2019-12-02T03:45:56.870Z · score: 41 (12 votes)
Useful Does Not Mean Secure 2019-11-30T02:05:14.305Z · score: 49 (14 votes)
AI Alignment Research Overview (by Jacob Steinhardt) 2019-11-06T19:24:50.240Z · score: 44 (9 votes)
How feasible is long-range forecasting? 2019-10-10T22:11:58.309Z · score: 43 (12 votes)
AI Alignment Writing Day Roundup #2 2019-10-07T23:36:36.307Z · score: 35 (9 votes)
Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More 2019-10-04T04:08:49.942Z · score: 176 (64 votes)
Follow-Up to Petrov Day, 2019 2019-09-27T23:47:15.738Z · score: 83 (27 votes)
Honoring Petrov Day on LessWrong, in 2019 2019-09-26T09:10:27.783Z · score: 138 (52 votes)
SSC Meetups Everywhere: Salt Lake City, UT 2019-09-14T06:37:12.296Z · score: 0 (0 votes)
SSC Meetups Everywhere: San Diego, CA 2019-09-14T06:34:33.492Z · score: 0 (0 votes)
SSC Meetups Everywhere: San Jose, CA 2019-09-14T06:31:06.068Z · score: 0 (0 votes)
SSC Meetups Everywhere: San José, Costa Rica 2019-09-14T06:25:45.112Z · score: 0 (0 votes)
SSC Meetups Everywhere: São José dos Campos, Brazil 2019-09-14T06:18:23.523Z · score: 0 (0 votes)
SSC Meetups Everywhere: Seattle, WA 2019-09-14T06:13:06.891Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Seoul, South Korea 2019-09-14T06:08:26.697Z · score: 0 (0 votes)
SSC Meetups Everywhere: Sydney, Australia 2019-09-14T05:53:45.606Z · score: 0 (0 votes)
SSC Meetups Everywhere: Tampa, FL 2019-09-14T05:49:31.139Z · score: 0 (0 votes)
SSC Meetups Everywhere: Toronto, Canada 2019-09-14T05:45:15.696Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Vancouver, Canada 2019-09-14T05:39:25.503Z · score: 0 (0 votes)
SSC Meetups Everywhere: Victoria, BC, Canada 2019-09-14T05:34:40.937Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Vienna, Austria 2019-09-14T05:27:31.640Z · score: 2 (2 votes)

Comments

Comment by benito on Announcing LessWrong 3.0 – Now in VR! (Launch party 6:30PM PST) · 2020-04-01T19:00:16.209Z · score: 26 (12 votes) · LW · GW

I have to mention that Mozilla Hubs room names get autogenerated when you create them. You can change them, but the initial name is picked for you. And the name of the room we built, a name we did not pick, was automatically selected to be "Expert Truthful Congregation". The kabbles are strong with this one, as Ray says.

Comment by benito on Announcing LessWrong 3.0 – Now in VR! (Launch party 6:30PM PST) · 2020-04-01T17:59:55.130Z · score: 2 (1 votes) · LW · GW

(I added your linked image to your comment.)

Comment by benito on Hanson vs Mowshowitz LiveStream Debate: "Should we expose the youth to coronavirus?" (Mar 29th) · 2020-03-30T00:01:27.036Z · score: 2 (1 votes) · LW · GW

Feedback form if you'd like to fill it out: https://docs.google.com/forms/d/e/1FAIpQLSd5bgmdN3pGFiGZWCmwqzN6QA3jjVDELJ4x6KhpKZbQDHAH-A/viewform

Comment by benito on [Update: New URL] Today's Online Meetup: We're Using Mozilla Hubs · 2020-03-29T23:59:40.480Z · score: 9 (4 votes) · LW · GW

Well, that sure was something.

Zvi and Robin did a great job hashing out the details of the policy proposal, and I appreciate them doing this so quickly (I contacted them on Tuesday). My thanks to the 5 or so people who joined the call to ask questions, and also to the 100 people who watched for the full 2 hours. (I was only expecting 40-80 people to even show up, so I am a bit surprised!)

The Mozilla Hubs meetup was very much an experiment. The first 20 minutes were hectic, with people asking all the usual questions you ask at parties, like "CAN ANYONE HEAR ME!", "WHERE AM I?" and "Why is there a panda?", but after that it calmed down.

It was kinda awkward: there was no body language and there were no visual cues for when you should speak in a group conversation, so there was a lot of silence. Eventually there were two rooms of 15-20 people in a big circle conversation, and it started getting pretty chill; I had a good time for like an hour before leaving to cook pasta (thank you to the guy who shared an improved pasta recipe with us all, it made my lunch better). That said, we'll pick a different platform as the main one in future.

So yeah. I'm gonna reach out to people to do more debates, ping me if you have an idea for a conversation you want to have. Thanks all for coming :)

P.S. Feedback form if you'd like to fill it out: https://docs.google.com/forms/d/e/1FAIpQLSd5bgmdN3pGFiGZWCmwqzN6QA3jjVDELJ4x6KhpKZbQDHAH-A/viewform

Comment by benito on Reminder: Blog Post Day II today! · 2020-03-29T07:35:29.781Z · score: 6 (3 votes) · LW · GW

Okay, I took a post out of my drafts and it's ready to go, and I commit to publishing it. I've pinged a person for permission to quote them, and when they get back to me I'll hit publish.

Comment by benito on Blog Post Day II · 2020-03-29T04:22:00.842Z · score: 2 (1 votes) · LW · GW

I have had a helluva day preparing for the debate+meetup tomorrow. I'll try to get something out before I go to bed, might be short, might be about covid, sorry about that.

Comment by benito on Benito's Shortform Feed · 2020-03-28T23:40:07.661Z · score: 2 (1 votes) · LW · GW

Thinking more, I think there are good arguments for taking actions that, as a by-product, induce anthropic uncertainty; these are the standard Hansonian situations where you build lots of ems of yourself to do bits of work and then turn them off.

But I still don't agree with the people in the situation you describe, because they're optimising over their own epistemic state, and I think they're morally wrong to do that. I'm totally fine with a law requiring future governments to rebuild you / an em of you and give you a nice life (perhaps as a trade for working harder today to ensure that the future world exists), but that's conceptually analogous to extending your life, and doesn't require causing you to believe false things. You know you'll be turned off and that later a copy of you will be turned on; there's no anthropic uncertainty, you're just going to get lots of valuable stuff.

Comment by benito on Benito's Shortform Feed · 2020-03-28T18:23:00.077Z · score: 3 (2 votes) · LW · GW

I just don’t think it’s a good decision to make, regardless of the math. If I’m nearing the end of the universe, I’d prefer to spend all my compute maximising fun / searching for a way out. Trying to run simulations so that I no longer know whether I’m about to die seems like a dumb use of compute. I can bear the thought of dying, dude; there are better uses of that compute. You’re not saving yourself, you’re just intentionally making yourself confused because you’re uncomfortable with the thought of death.

Comment by benito on Benito's Shortform Feed · 2020-03-28T03:58:42.172Z · score: 6 (3 votes) · LW · GW

Now that's fun. I need to figure out some more stuff about measure; I don't quite get why some universes should be weighted more than others. But I think that sort of argument is probably a mistake: even if the lawful universes get more weighting for some reason, unless you also have reason to think that they don't make simulations, there are still loads of simulations within each of their lawful universes, tipping the balance in favour of simulation again.

Comment by benito on Benito's Shortform Feed · 2020-03-27T23:35:45.186Z · score: 4 (2 votes) · LW · GW

Another big reason why (a version of it) makes sense is that the simulation is designed for the purpose of inducing anthropic uncertainty in someone at some later time in the simulation. e.g. if the point of the simulation is to make our AGI worry that it is in a simulation, and manipulate it via probable environment hacking, then the simulation will be accurate and lawful (i.e. un-tampered-with) until AGI is created.

Ugh, anthropic warfare, feels so ugly and scary. I hope we never face that sh*t.

Comment by benito on Benito's Shortform Feed · 2020-03-27T23:34:22.328Z · score: 2 (1 votes) · LW · GW

That's interesting. I don't feel comfortable with that argument; it feels too much like random chance whether we should expect ourselves to be in an interventionist universe, whereas I feel like I should be able to find strong reasons to not be in an interventionist universe.

Comment by benito on Benito's Shortform Feed · 2020-03-27T23:24:08.169Z · score: 3 (2 votes) · LW · GW

I don't buy that it makes sense to induce anthropic uncertainty. It makes sense to spend all of your compute to run emulations that are having awesome lives, but it doesn't make sense to cause yourself to believe false things.

Comment by benito on Benito's Shortform Feed · 2020-03-27T08:00:28.333Z · score: 2 (1 votes) · LW · GW

My crux here is that I don't feel much uncertainty about whether our overlords will start interacting with us (they won't, and I really don't expect that to change), and I'm trying to backchain from that to find reasons why it makes sense.

My basic argument is that all civilizations with the capability to make simulations that aren't true histories (but instead have lots of weird stuff happen in them) will be philosophically sophisticated enough to collectively not do so, and so you can always expect to be in a true history and not have weird sh*t happen to you like in The Sims. The main counterargument is to show that there are lots of civilizations that will have the power to do this but lack the wisdom not to. Two key examples come to mind:

  • We build an AGI singleton that lacks important kinds of philosophical maturity, and so makes lots of simulations that ruin the anthropic uncertainty for everyone else.
  • Civilizations at somewhere around our level get to a point where they can create massive numbers of simulations but haven't managed to create existential risks like AGI. Even though you might think our civilization is pretty close to AGI, I can imagine alternative civilizations that aren't, just as I can imagine alternative civilizations that are really close to making masses of ems but aren't close enough to AGI. This feels like a pretty empirical question about whether such civilizations are possible and whether they can have these kinds of resources without causing an existential catastrophe / building a singleton AGI.

Comment by benito on Benito's Shortform Feed · 2020-03-27T07:25:25.417Z · score: 2 (1 votes) · LW · GW

The relevant intuition for the second point there is to imagine you somehow found out that there was only one ground-truth base reality, only one real world, not a multiverse or a Tegmark Level 4 universe or whatever. And you're a civilization that has successfully dealt with x-risks, unilateralist action, and information vulnerabilities, to the point where you have the sort of unified control needed to make a top-down decision about whether to create massive numbers of simulations. And you're wondering whether to make a billion simulations.

And suddenly you're faced with the prospect of building something that will make it so you no longer know whether you're in the base universe. Someday gravity might get turned off because that's what your overlords wanted. If you pull the trigger, you'll never be sure that you weren't actually one of the simulated ones, because there's suddenly so many simulations.

And so you don't pull the trigger, and you remain confident that you're in the base universe.

This, plus some assumptions about all civilizations that have the capacity to do massive simulations also being wise enough to overcome x-risk and coordination problems so they can actually make a top-down decision here, plus some TDT magic whereby all such civilizations in the various multiverses and Tegmark levels can all coordinate in logical time to pick the same decision... leaves there being no unlawful simulations.

Comment by benito on Benito's Shortform Feed · 2020-03-27T06:55:35.650Z · score: 5 (4 votes) · LW · GW

Hot take: The actual resolution to the simulation argument is that most advanced civilizations don't make loads of simulations.

Two things make this make sense:

  • Firstly, it only matters if they make unlawful simulations. If they make lawful simulations, then it doesn't matter whether you're in a simulation or a base reality: all of your decision theory and incentives are essentially the same, and you want to take the same decisions in all of the universes. So you can make lots of lawful simulations; that's fine.
  • Secondly, they will strategically choose not to make too many unlawful simulations (at the level where the things inside are actually conscious), because doing so would induce anthropic uncertainty over themselves. Like, if the decision-theoretic answer is to not induce anthropic uncertainty over yourself about whether you're in a simulation, then by TDT everyone will choose not to make unlawful simulations.

I think this is probably wrong in lots of ways but I didn't stop to figure them out.

Comment by benito on Hanson vs Mowshowitz LiveStream Debate: "Should we expose the youth to coronavirus?" (Mar 29th) · 2020-03-27T02:01:40.286Z · score: 6 (3 votes) · LW · GW

Please RSVP for the event here, so we know how many people are coming.

Comment by benito on Open & Welcome Thread - March 2020 · 2020-03-25T17:46:56.999Z · score: 2 (1 votes) · LW · GW

I’m willing to proofread :)

Comment by benito on How to Contribute to the Coronavirus Response on LessWrong · 2020-03-24T23:14:36.795Z · score: 6 (3 votes) · LW · GW

Yeah, strong upvote, I'll add this next time I update it, possibly in a few days.

Comment by benito on How useful are masks during an epidemic? · 2020-03-24T20:33:57.630Z · score: 9 (3 votes) · LW · GW

Scott Alexander has done a literature review called Face Masks: Much More Than You Wanted To Know.

(If someone wrote an answer briefly summarising his conclusions and main evidence in 2-3 paragraphs, that would be a much better answer than this one.)

Comment by benito on Open & Welcome Thread - March 2020 · 2020-03-24T02:59:36.988Z · score: 2 (1 votes) · LW · GW

I'm interested in proof-reading posts and comments.

Comment by benito on Open & Welcome Thread - March 2020 · 2020-03-24T02:59:06.284Z · score: 29 (5 votes) · LW · GW

Proofreading-or-Editing Matchmaking Thread

Do you have a post, question or comment that you're planning to publish on LessWrong? Would you like a fresh set of eyes to look over it before you do, for some brief feedback or maybe more detailed editing suggestions? Alternatively, would you like to help people feel more comfortable posting, and are you willing to spend a few minutes reading something and giving feedback? Or maybe you have experience giving editing advice, or want to try it out? This is a thread for both sides of that graph to meet up.

How to do this:

  • If you have a comment / post / question that you'd like to run by someone, leave a brief comment here, giving the topic, title, wordcount, and a concrete date that you'd plan to publish by (e.g. "I'd like to publish this in the next 24 hours" or "I plan to publish this next Saturday"), and whether you'd appreciate proofreading or editing (both is fine, but often people are looking for one more than the other, so mention that). 
  • If you'd like to proofread the post/question/comment, then reply with "I am willing to proofread this by <date>." or "I am willing to edit this by <date>."
  • At this point, the author should send a PM to the user with a link to a google doc with their post/question/comment in it.
  • Proofreaders+Editors are also welcome to write comments announcing that they're interested in reading people's stuff, and that you can just reach out to them directly by PM.

Happy matchmaking!

Note: Remember, you don't need anyone's permission to publish your ideas on LessWrong, and you're not expected to get something proofread, this is just for those who want it.

Comment by benito on Against Dog Ownership · 2020-03-24T02:29:14.704Z · score: 3 (2 votes) · LW · GW

If I'm understanding right, you're saying that dogs don't get their basic needs and pleasures met well with humans, whereas the author is making a (to me) more surprising claim that their lives lack meaning.

Comment by benito on When to Donate Masks? · 2020-03-23T02:07:21.804Z · score: 4 (2 votes) · LW · GW

Yeah, I agree; it's saying that whatever forces are at play in creating and storing supplies are unable to do basic calculations. Or else they are acting deontologically ("We must throw everything we can at this crisis immediately") and expecting their governments to make sure things never get so bad that they need to personally plan for such terrible situations. Avoiding thinking about things getting too bad and not thinking through taboo tradeoffs seem closely related.

Comment by benito on Coronavirus: Justified Practical Advice Thread · 2020-03-22T10:58:59.600Z · score: 2 (1 votes) · LW · GW

I see, thanks.

Comment by benito on Coronavirus: Justified Practical Advice Thread · 2020-03-22T07:20:18.035Z · score: 2 (1 votes) · LW · GW

Thanks, this is a question I really wanted someone’s numbers on!

You said 70C for 30 mins, then said it again, but I thought you were going to say a more extreme quantity the second time. Was that intentional?

Comment by benito on Blog Post Day II · 2020-03-22T01:49:43.308Z · score: 8 (4 votes) · LW · GW

I will once again write a blogpost! (I am busier than last time, but will still try hard. I will write something not about coronavirus.)

Comment by benito on What should we do once infected with COVID-19? · 2020-03-21T01:54:45.130Z · score: 4 (2 votes) · LW · GW

That's great, thanks for the info.

Comment by benito on What should we do once infected with COVID-19? · 2020-03-21T00:45:10.480Z · score: 16 (6 votes) · LW · GW

This sounds like a prime opportunity for people with medical expertise to write guidelines on how to use it for civilians.

Comment by benito on How to optimize my time fighting COVID-19? · 2020-03-20T01:13:23.206Z · score: 2 (1 votes) · LW · GW

This question title was too long to show on frontpage, so I shortened it from "How should I prioritize my limited time in the fight against COVID-19?" to "How to prioritise my time fighting COVID-19?".

Comment by benito on Cost of a COVID-19 test that uses shotgun RNA sequencing? · 2020-03-20T01:11:40.428Z · score: 2 (1 votes) · LW · GW

The title of this question was too long for the frontpage, so I changed it from "How much would a test for COVID-19 that uses shotgun RNA sequencing the way we use shotgun DNA sequencing cost?" to "Cost of a COVID-19 test that uses shotgun RNA sequencing?".

Comment by benito on Should we build thermometer substitutes? · 2020-03-20T01:08:47.317Z · score: 3 (2 votes) · LW · GW

The title of this question was too long to fit on the frontpage, so I changed it from "Would the ability to take people's temperatures without a thermometer be useful?" to "Should we build thermometer substitutes?".

Comment by benito on Is the Covid-19 crisis a good time for x-risk outreach? · 2020-03-20T01:00:24.462Z · score: 4 (2 votes) · LW · GW

The title was too long for the frontpage, so I shortened it from "Would it be a good idea to do some sort of public outreach right now about existential risks?" to "Is the Covid-19 crisis a good time for x-risk outreach?"

Comment by benito on Programmers Should Plan For Lower Pay · 2020-03-19T18:37:36.500Z · score: 4 (2 votes) · LW · GW

Always remember to model tail events causing all your bets to come out the wrong way... :)

Comment by benito on Why Telling People They Don't Need Masks Backfired · 2020-03-19T05:38:48.525Z · score: 18 (7 votes) · LW · GW

Yeah. I regularly model headlines like this as being part of the later levels of simulacra. The article argued that the advice should backfire, but it also said that it already had. If the article catches on, then it will become true for the majority of people who read it. It's trying to create the news that it's reporting on; it's trying to make something true by saying it is.

I think a lot of articles are like that these days. They're trying to report on what's part of social reality, but social reality depends on what goes viral on Twitter/FB/etc, so they work to inject themselves into that social reality by attempting to directly manipulate it. The article suggests that it's reporting on social reality, which makes it exciting for you to read, but it actually only becomes true if it succeeds in getting a lot of people to read it.

Comment by benito on Credibility of the CDC on SARS-CoV-2 · 2020-03-19T01:22:47.238Z · score: 5 (3 votes) · LW · GW

The highly detailed slideshow on Covid-19 by Michael Lin (MD, PhD) has comments reminiscent of the OP. Lin says:

[The] CDC created a test requiring a slow RT-PCR reaction on a specific model of machine to be run overnight, not designing the right primers, and not realizing this for a month. This was both strategically (using 30-year-old technology) and tactically (designing wrong primers) incompetent. I would expect most graduate students to do better.

He also feels that the CDC is giving lousy information. In their FAQ, their answer to whether your child is at risk for Covid-19 fails to mention that children reliably have much milder disease courses than adults. He says:

It’s clear that kids get less sick if at all. Why doesn’t the CDC say so? It won’t hurt to tell the truth! If you provide such lousy information, people will stop trusting you.

I think this is consistent with the primary goal of communication from major institutions being to prevent people from doing stupid things, over and above being open and honest.

Comment by benito on Why Telling People They Don't Need Masks Backfired · 2020-03-18T05:19:37.336Z · score: 4 (2 votes) · LW · GW

Yeah, I shared this in a comment on the previous LW CDC thread. I also shared The Bizarre Adventures of the Surgical Mask by Piero Scaruffi, a non-news-media post that independently makes a bunch of similar arguments.

Comment by benito on Credibility of the CDC on SARS-CoV-2 · 2020-03-18T05:15:05.891Z · score: 6 (3 votes) · LW · GW

The brief post The Bizarre Adventures of the Surgical Mask by renaissance man-of-lists Piero Scaruffi makes a lot of similar arguments to the NYT article. Some quotes:

I am puzzled by a somewhat amusing phenomenon. There are thousands of people on social media screaming "Stop buying masks! They are useless". That's intriguing. If they are useless, why do you care?

...It is even stranger that some people reply: "hospitals need the masks". So... hospitals think that masks are useful? Then they are not useless. In fact, they seem to be indispensable. Then the correct statement would be: "Stop buying masks! They are extremely useful!"

...Did you notice that perfectly healthy World Health Organization officials always wear masks during their news briefings to reporters? It's because they now believe that you can transmit the virus even if you don't have the symptoms, so potentially anyone around you (healthy or sick) may be contagious.

...I am not competent enough to judge how effective a mask can be in the case of this covid-19. I am just intrigued that so many people have joined the anti-mask crusade despite these obvious logical contradictions.

He’s very independent and doesn’t try to compete in the attention landscape the way most blogs do, so I take it as a fairly strong datapoint that these inconsistencies are fairly obvious to the public.

Comment by benito on Credibility of the CDC on SARS-CoV-2 · 2020-03-18T00:23:11.789Z · score: 6 (3 votes) · LW · GW

This NYT Opinion Piece discusses some of the same points as the above, titled Why Telling People They Don’t Need Masks Backfired. It closes:

...during disasters, people can show strikingly altruistic behavior, but interventions by authorities can backfire if they fuel mistrust or treat the public as an adversary rather than people who will step up if treated with respect. Given that even homemade masks may work better than no masks, wearing them might be something to direct people to do while they stay at home more, as we all should.

We will no doubt face many challenges as the pandemic moves through our societies, and people will need to cooperate. The sooner we create the conditions under which such cooperation can bloom, the better off we all will be.

Comment by benito on March Coronavirus Open Thread · 2020-03-17T23:22:04.747Z · score: 8 (4 votes) · LW · GW

Today will always be the day that, for one hour, Facebook removed all posts/comments that had links to any of The Atlantic, Medium, and LessWrong. Because we're just that big and important.

(The issue is now fixed.)

Comment by benito on March Coronavirus Open Thread · 2020-03-16T21:07:18.880Z · score: 4 (2 votes) · LW · GW

I thought I'd share the steps my housemates and I have taken to be safe from the coronavirus, just to spread info about what people are doing. I'd be interested in others saying some things that they've done. This isn't exhaustive, I've almost certainly forgotten some things. (Note that we live in Berkeley, California, US.)

  • We don't meet anyone from outside the house / go to work / go shopping, and generally stay in the house/garden. Some housemates take walks / go for a run, keeping 6 feet away from people at all times.
  • If anyone thinks there's a need to leave for any reason, even if it's to help the house in some way, they get permission either during a house meeting or via the house Slack.
  • We've put up proper hand-washing signs by all the sinks, and all do this regularly including around mealtimes.
  • We have stored about 3 months of food and necessities per person. Every 2 weeks, we order 2 weeks of food and necessities on Amazon/Instacart, to keep a clear buffer in case any ordering services end up having month-long delays due to a massive rise in demand.
  • We have bought a cheap car for any important travel and also to take trips to places where there are no people (to go for walks and so on).
  • We've covered basically all doorhandles, edges of drawers, and similar surfaces, with copper tape.
  • We leave packages outside for 2 days before opening them; otherwise, using gloves, we throw the cardboard away, disinfect the contents with disinfectant wipes, then remove the gloves.
  • I'm taking a multi-vitamin each morning for the vitamin D.
  • Oli and I have set up an office space on the top floor outside my bedroom, with desks and monitors and such.

A few other our-situation-specific things we've done.

  • One person recently came back from international travel, so we got them a small solo AirBnb for 10 days and provided them with lots of food and snacks, to confirm that they're symptom free. (They were careful on the flight, using disinfectant wipes on the plastic on their seat, not accepting anything from cabin crew, etc.) They'll return to the house this week.
  • We had a non-rationalist renter who was moving out in the next month or so. They were unfortunately still using public transport and going to work, so we paid for them to have an AirBnb for that duration, and they've now finally moved out.
  • I got a VR headset for fun and exercise (BeatSaber is great!).

There are a bunch more things we've done for the community, though that overlaps with things the LW team has done (e.g. Ray set up a community-wide spreadsheet for people to report the steps they'd taken and what level of exposure they are at).

Comment by benito on A Significant Portion of COVID-19 Transmission Is Presymptomatic · 2020-03-14T06:18:38.090Z · score: 2 (1 votes) · LW · GW

Note: I've added this post to the Coronavirus Links Database.

Comment by benito on I'm leaving AI alignment – you better stay · 2020-03-14T06:07:59.760Z · score: 8 (5 votes) · LW · GW

Thanks for the writeup.

+1 I appreciated the writeup. Amongst many positive things about the post, I think it really helps for there to be open conversation about what it looks like when things don't work out as hoped, to help people understand what they're signing up for.

Comment by benito on March Coronavirus Open Thread · 2020-03-13T22:54:03.704Z · score: 2 (1 votes) · LW · GW

What is the likelihood of transmission inside the home

My current working assumption is that if you share a living space and/or a bathroom with someone who gets it, you will get it within a few days. If you want to reduce the probability, make sure you don't use any common spaces or bathrooms.

Comment by benito on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-13T20:58:36.437Z · score: 5 (3 votes) · LW · GW

I like this, it's simple, it resolved conceptual tensions I had, and I will start using this. (Obvs I should check in in a few months to see if this worked out.)

Comment by benito on FactorialCode's Shortform · 2020-03-10T05:47:25.391Z · score: 2 (1 votes) · LW · GW

You're definitely missing stuff with the Hacker News search; like once every other month or so there's a big hit like this or this.

Comment by benito on Thoughts on LessWrong's Infohazard Policies · 2020-03-09T19:26:54.189Z · score: 4 (2 votes) · LW · GW

Also, I'm trying to understand your thinking. Is this an accurate representation of what you're saying?

While in general, people are allowed to write up poorly thought-out criticism of major governmental institutions during a time of crisis and to advocate against that governmental institution, on LessWrong people should be obligated to ensure the criticism is true before saying it and making that kind of advocacy, because we try to be better than elsewhere.

If the authors of this post had questions about whether their criticism was true or whether this was a good time to say the criticism, they should have vetted it with the biosecurity x-risk people at FHI and OpenPhil before publishing, and given that they didn't do this basic ethical check, the post should be removed until such a time as they do (and they must edit in any advice received).

Comment by benito on Thoughts on LessWrong's Infohazard Policies · 2020-03-09T18:41:50.073Z · score: 2 (1 votes) · LW · GW

Can you copy this comment to the comment thread as well so I can follow up on the post? Edit: Actually nvm, happy to talk here.

Comment by benito on How effective are tulpas? · 2020-03-09T18:39:44.394Z · score: 7 (4 votes) · LW · GW

Mod here, I've edited your question title to actually be a question, so that people will be able to find it more effectively when searching and people on the frontpage can understand the question without opening the post.

Comment by benito on Credibility of the CDC on SARS-CoV-2 · 2020-03-09T07:11:46.924Z · score: 32 (9 votes) · LW · GW

I want to be clear with you about my thoughts on this David. I've spent multiple hundreds of hours thinking about information hazards, publication norms, and how to avoid unilateralist action, and I regularly use those principles explicitly in decision-making. I've spent quite some time thinking about how to re-design LessWrong to allow for private discussion and vetting for issues that might lead to e.g. sharing insights that lead to advances in AI capabilities. But given all of that, on reflection, I still completely disagree that this post should be deleted, or that the authors were taking worrying unilateralist action, and I am happy to drop 10+ hours conversing with you about this.

Let me give my thoughts on the issue of infohazards.

I am honestly not sure what work you think the term is doing in this situation, so I'll recap what it is for everyone following. Historically, there has been a notion that all science is fundamentally good, that all knowledge is good, and that science need not ask ethical questions of its exploration. Much of Bostrom's career has been to draw the boundaries of this idea and show where it is false. For example, one can build technologies that a civilization is not wise enough to use correctly, technologies that lead to the degradation of society and even extinction (you and I are both building our lives around increasing the wisdom of society so that we don't go extinct). Bostrom's infohazards paper is a philosophical exercise, asking at every level of organisation what kinds of information can hurt you. The paper itself has no conclusion and ends with an exhortation toward freedom of speech; its point is simply to help you conceptualise this kind of thing and be able to notice it in different domains. Then you can notice the tradeoff and weigh it properly in your decision-making.

So, calling something an infohazard merely means that it's damaging information. An argument with a false conclusion is an infohazard, because it might cause people to believe that false conclusion. Publishing private information is an infohazard, because it lets adversaries attack you better, but we still often publish infohazardous private material because it contributes to the common good (e.g. listing our home addresses on public Facebook events helps people burgle our houses, but it's worth it to let friends find us). Now, the one kind of infohazard on which there is consensus in the biosecurity-focused part of the x-risk community is sharing specific technological designs for pathogens that could kill masses of people, or sharing information about system weaknesses that are presently subject to attack by adversaries (for obvious reasons I won't give examples, but Davis Kingsley helpfully published an example that is no longer true in this post, if anyone is interested). So I assume that this is what you are talking about, as I know of no other infohazard in the bio-x-risk space about which there is a consensus that one should take great pains to silence and punish defectors.

The main reason Bostrom's paper is brought up in biosecurity is in the context of arguing that specific technological designs for various pathogens and/or damaging systems shouldn't be published or sketched out in great detail. Just as Churchill was shocked by Niels Bohr's plea to share the nuclear designs with the Russians on the grounds that it would lead to the end of all war (Churchill said no and wondered whether Bohr was a Russian spy), it might be possible to have buildable pathogens that terrorists or warring states could use to hurt a lot of people or potentially cause an existential catastrophe. So it would be wise to (a) have careful publication practices that include the option of not publishing details of such biological systems and (b) not publicise how to discover such information.

Bostrom has staked a lot of his reputation on this being a worrying problem that you need to understand carefully. If someone on LessWrong were sharing e.g. their best guess at how to design and build a pathogen that could kill 1%, 10%, or possibly 100% of the world's population, I would quite strongly agree that, as an admin of the site, I should preliminarily move the post back into their drafts, talk with the person, encourage them to think carefully about this, and connect them to people I know who've thought about it. I can imagine the person having reasonable disagreements, but if they seemed actively indifferent to the idea that it might cause damage, then while I can't stop them writing anywhere else on the internet, LessWrong has very good SEO and I don't want such material to be widely accessible, so it could easily be the right call to remove their content of this type from LessWrong. This seems sensible for the case of people posting mechanistic discussion of how to build pathogens that could kill 1%+ of the population.

Now, you're asking whether we should treat criticism of governmental institutions during a time of crisis in the same category as someone posting pathogen designs, or speculating on how to build pathogens that can kill 100 million people. We are discussing something very different, with a fairly different set of intuitions.

Is there an argument here that is as strong as the argument that sharing pathogen designs can lead to an existential catastrophe? Let me list some reasons why this action is in fact quite useful.

  • Helping people inform themselves about the virus. As I am writing this message, I'm in a house meeting attempting to estimate the number of people in my area with the disease, and what levels of quarantine we need to be at and when we need to do other things (e.g. can we go to the grocery store, can we accept amazon packages, can we use Uber, etc). We're trying to use various advice from places like the CDC and the WHO, and it's helpful to know when I can just trust them to have done their homework versus taking them as helpful but that I should re-do their thinking with my own first-principles models in some detail.
  • Helping necessary institutional change happen. The coronavirus is not likely to be an existential catastrophe. I expect it will likely kill over 1 million people, but it is exceedingly unlikely to kill a couple percent of the population, even given hospital overflow and failures of countries to quarantine. This isn't the last hurrah from that perspective, so a naive maxipok utilitarian calculus would say it is more important to improve the CDC for future existential biorisks than to make sure not to hinder it in any way today. Standard policy advice is that stuff gets done quickly in crisis time, and I think that now, not ten years later when someone writes a historical analysis, is when creating public, common knowledge of the severe inadequacies of our current institutions is most likely to produce improvements and changes. I want the CDC to be better than this when it comes to future bio-x-risks, and now is a good time to very publicly state very clearly what it's failing at.
  • Protecting open, scientific discourse. I'm always skeptical of advice to not publicly criticise powerful organisations because it might cause them to lose power. I always feel like, if their continued existence and power is threatened by honest and open discourse... then it's weird to think that it's me who's defecting on them when I speak openly and honestly about them. I really don't know what deal they thought they could make with me where I would silence myself (and every other free-thinking person who notices these things?). I'm afraid that was not a deal that was on offer, and they're picking the wrong side. Open and honest discourse is always controversial and always necessary for a scientifically healthy culture.

So the counterargument must be that there is a strong enough downside here. Importantly, Bostrom argues that information should be hidden and made secret when sharing it might lead to an existential catastrophe.

Could criticising the government here lead to an existential catastrophe?

I don't know your position, but I'll try to paint a picture; let me know if this sounds right. I think you think that something like the following is a possibility. This post, or a successor like it, goes viral (virus-based wordplay unintended) on Twitter, leading to a consensus that the CDC is incompetent. Later on, the CDC recommends mass quarantine in the US, and the population follows the letter but not the spirit of the recommendation; many people break quarantine and die.

So that's a severe outcome. But it isn't an existential catastrophe. 

(Is the coronavirus itself an existential catastrophe? As I said above, this doesn’t seem like it’s the case to me. Its death rate seems to be around 2% when given proper medical treatment (respirators and the like), so given hospital overload it will likely be higher, perhaps 3-20% (depending on the age distribution of the population). My understanding is that it will likely peak at a maximum of 70% of any given highly connected population, and it's worth remembering that much of humanity is spread out and not based in cities where people see each other all of the time.

I think the main world in which this is an existential catastrophe is the world where getting the disease does not confer immunity after you lose the disease. This means a constant cycle of the disease amongst the whole population, without being able to develop a vaccine. In that world, things are quite bad, and I'm not really sure what we'll do then. That quickly moves me from "The next 12 months will see a lot of death and I'm probably going to be personally quarantined for 3-5 months and I will do work to ensure the rationality community and my family is safe and secured" to "This is the sole focus of my attention for the foreseeable future."

Importantly, I don't really see any clear argument for which way criticism of the CDC plays out in this world.)
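The fatality guesses a couple of paragraphs up can be combined into a back-of-envelope sketch. All the inputs are the rough estimates from my comment (a ~70% peak infection share of a highly connected population, a ~2% fatality rate with care, and an upper guess of ~20% under severe hospital overload), not real data:

```python
# Back-of-envelope: share of a highly connected population that dies,
# using the rough guesses above (assumptions, not measured data).
attack_rate = 0.70            # guessed peak share of the population infected
fatality_low = 0.02           # ~2% death rate with proper medical treatment
fatality_high = 0.20          # upper guess given severe hospital overload

dead_share_low = attack_rate * fatality_low    # ~1.4% of the population
dead_share_high = attack_rate * fatality_high  # ~14% of the population

print(f"{dead_share_low:.1%} to {dead_share_high:.1%} of such a population")
```

Even the pessimistic end of this range, severe as it is, falls well short of extinction, which is why the calculus for criticising institutions here differs from the pathogen-design case.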

And I know there are real stakes here. Even though you need to go against CDC recommendations today and stockpile, in the future the CDC will hopefully be encouraging mass quarantine, and if people ignore that advice, a fraction of them will die. But there are always life-and-death stakes to speaking honestly about the failures of important institutions. Early GiveWell faced the exact same situation when criticising charities saving lives in developing countries. One can argue that this kills people by reducing funding for those important charities. But it was worth it a million times over, because we've coordinated around far more effective charities and saved way more lives. We need to discuss governmental failure here in order to save more lives in the future.

(Can I imagine taking down content about the coronavirus? Hm, I thought about it for a bit, and I can imagine that, if a country was under mass quarantine, if people were writing articles with advice about how to escape quarantine and meet people, that would be something we'd take down. There's an example. But criticising the government? It's like a fundamental human right, and not because it would be inconvenient to remove, but because it's the only way to build public trust. It makes no sense to me to silence it.)

The reason you mustn’t silence discussion when you think the consequences are bad is that the truth is powerful and has surprising consequences. Bostrom has argued that this principle no longer holds when an existential risk is at stake, but if you think he extends that exception elsewhere, let me quote the end of his paper on infohazards.

Even if our best policy is to form an unyielding commitment to unlimited freedom of thought, virtually limitless freedom of speech, an extremely wide freedom of inquiry, we should realize not only that this policy has costs but that perhaps the strongest reason for adopting such an uncompromising stance would itself be based on an information hazard; namely, norm hazard: the risk that precious yet fragile norms of truth-seeking and truthful reporting would be jeopardized if we permitted convenient exceptions in our own adherence to them or if their violation were in general too readily excused.

 

Footnote on Unilateralism

I don't see a reasonable argument that this was anywhere close to a situation where writing this post was dangerous unilateralist action. This isn't a situation where 95% of people think it's bad but 5% think it's good.

If you want to know whether we've lifted the unilateralist's curse here on LessWrong, you need look no further than the Petrov Day event that we ran, and see what the outcome was. That was indeed my attempt to help LessWrong practise and self-signal that we don't take unilateralist action. But this case is neither an x-risk infohazard nor worrisome unilateralist action. It’s just two people doing their part in helping us draw an accurate map of the territory.

Comment by benito on March Coronavirus Open Thread · 2020-03-09T01:22:02.754Z · score: 4 (2 votes) · LW · GW

One of the reasons this thread exists is for that content to go here instead :)