Posts

Sunday August 16, 12pm (PDT) — talks by Ozzie Gooen, habryka, Ben Pace 2020-08-14T18:32:35.378Z · score: 28 (5 votes)
Sunday August 9, 1pm (PDT) — talks by elityre, jacobjacob, Ruby 2020-08-06T22:50:21.550Z · score: 32 (6 votes)
Sunday August 2, 12pm (PDT) — talks by jimrandomh, johnswenthworth, Daniel Filan, Jacobian 2020-07-30T23:55:44.712Z · score: 17 (3 votes)
$1000 bounty for OpenAI to show whether GPT3 was "deliberately" pretending to be stupider than it is 2020-07-21T18:42:44.704Z · score: 50 (21 votes)
Lessons on AI Takeover from the conquistadors 2020-07-17T22:35:32.265Z · score: 53 (13 votes)
Meta-preferences are weird 2020-07-16T23:03:40.226Z · score: 18 (3 votes)
Sunday July 19, 1pm (PDT) — talks by Raemon, ricraz, mr-hire, Jameson Quinn 2020-07-16T20:04:37.974Z · score: 26 (5 votes)
Mazes and Duality 2020-07-14T19:54:42.479Z · score: 49 (14 votes)
Sunday July 12 — talks by Scott Garrabrant, Alexflint, alexei, Stuart_Armstrong 2020-07-08T00:27:57.876Z · score: 19 (4 votes)
Public Positions and Private Guts 2020-06-26T23:00:52.838Z · score: 21 (6 votes)
Missing dog reasoning 2020-06-26T21:30:00.491Z · score: 28 (9 votes)
Sunday June 28 – talks by johnswentworth, Daniel kokotajlo, Charlie Steiner, TurnTrout 2020-06-26T19:13:23.754Z · score: 26 (5 votes)
DontDoxScottAlexander.com - A Petition 2020-06-25T05:44:50.050Z · score: 118 (41 votes)
Sunday June 21st – talks by Abram Demski, alkjash, orthonormal, eukaryote, Vaniver 2020-06-18T20:10:38.978Z · score: 50 (13 votes)
FHI paper on COVID-19 government countermeasures 2020-06-04T21:06:51.287Z · score: 50 (18 votes)
[Job ad] Lead an ambitious COVID-19 forecasting project [Deadline extended: June 10th] 2020-05-27T16:38:04.084Z · score: 56 (16 votes)
Crisis and opportunity during coronavirus 2020-03-12T20:20:55.703Z · score: 70 (31 votes)
[Link] Beyond the hill: thoughts on ontologies for thinking, essay-completeness and forecasting 2020-02-02T12:39:06.563Z · score: 35 (8 votes)
[Part 1] Amplifying generalist research via forecasting – Models of impact and challenges 2019-12-19T15:50:33.412Z · score: 53 (13 votes)
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T15:49:45.901Z · score: 48 (12 votes)
Running Effective Structured Forecasting Sessions 2019-09-06T21:30:25.829Z · score: 21 (5 votes)
How to write good AI forecasting questions + Question Database (Forecasting infrastructure, part 3) 2019-09-03T14:50:59.288Z · score: 31 (12 votes)
AI Forecasting Resolution Council (Forecasting infrastructure, part 2) 2019-08-29T17:35:26.962Z · score: 31 (13 votes)
Could we solve this email mess if we all moved to paid emails? 2019-08-11T16:31:10.698Z · score: 32 (15 votes)
AI Forecasting Dictionary (Forecasting infrastructure, part 1) 2019-08-08T16:10:51.516Z · score: 43 (23 votes)
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:58.633Z · score: 43 (11 votes)
Does improved introspection cause rationalisation to become less noticeable? 2019-07-30T10:03:00.202Z · score: 28 (8 votes)
Prediction as coordination 2019-07-23T06:19:40.038Z · score: 46 (14 votes)
jacobjacob's Shortform Feed 2019-07-23T02:56:35.132Z · score: 18 (3 votes)
When does adding more people reliably make a system better? 2019-07-19T04:21:06.287Z · score: 35 (10 votes)
How can guesstimates work? 2019-07-10T19:33:46.002Z · score: 26 (8 votes)
Can we use ideas from ecosystem management to cultivate a healthy rationality memespace? 2019-06-13T12:38:42.809Z · score: 37 (7 votes)
AI Forecasting online workshop 2019-05-10T14:54:14.560Z · score: 32 (6 votes)
What are CAIS' boldest near/medium-term predictions? 2019-03-28T13:14:32.800Z · score: 35 (10 votes)
Formalising continuous info cascades? [Info-cascade series] 2019-03-13T10:55:46.133Z · score: 17 (4 votes)
How large is the harm from info-cascades? [Info-cascade series] 2019-03-13T10:55:38.872Z · score: 23 (4 votes)
How can we respond to info-cascades? [Info-cascade series] 2019-03-13T10:55:25.685Z · score: 15 (3 votes)
Distribution of info-cascades across fields? [Info-cascade series] 2019-03-13T10:55:17.194Z · score: 15 (3 votes)
Understanding information cascades 2019-03-13T10:55:05.932Z · score: 56 (20 votes)
Unconscious Economics 2019-02-27T12:58:50.320Z · score: 85 (34 votes)
How important is it that LW has an unlimited supply of karma? 2019-02-11T01:41:51.797Z · score: 30 (12 votes)
When should we expect the education bubble to pop? How can we short it? 2019-02-09T21:39:10.918Z · score: 41 (12 votes)
What is a reasonable outside view for the fate of social movements? 2019-01-04T00:21:20.603Z · score: 36 (12 votes)
List of previous prediction market projects 2018-10-22T00:45:01.425Z · score: 33 (9 votes)
Four kinds of problems 2018-08-21T23:01:51.339Z · score: 41 (19 votes)
Brains and backprop: a key timeline crux 2018-03-09T22:13:05.432Z · score: 89 (24 votes)
The Copernican Revolution from the Inside 2017-11-01T10:51:50.127Z · score: 150 (74 votes)

Comments

Comment by jacobjacob on What are good rationality exercises? · 2020-09-30T02:27:27.127Z · score: 2 (1 votes) · LW · GW

There could be ways of making it legal given that we're a non-profit with somewhat academic interests. (By "making" I mean actually changing the law or getting a No-Action Letter.) Most people who gamble online do it for profit, which is where things get tricky.

Comment by jacobjacob on On Destroying the World · 2020-09-29T18:47:31.323Z · score: 14 (4 votes) · LW · GW

I'm genuinely confused about the "pressing the button for entertainment value" part. 

The email contained sentences like: 

Honoring Petrov Day: I am trusting you with the launch codes. [...] On Petrov Day, we celebrate and practice not destroying the world. [...] You've been given the opportunity to not destroy LessWrong. [...] if you enter the launch codes below on LessWrong, [you will remove] a resource thousands of people view every day.

And no sentences playfully inviting button-pressing. 

Maybe I can't unsee the cultural context I already had. But I still imagine that after receiving that email, I'd feel pretty bad/worried about pressing.

Comment by jacobjacob on On Destroying the World · 2020-09-29T18:47:14.576Z · score: 16 (5 votes) · LW · GW

Is there anyone that would have pressed the button if there was guaranteed anonymity, and thus no personal cost? If so, make a second account


If I understand you correctly, that won't work. The identity of the button-presser is not determined by which account pressed the button. It's determined by the launch code string itself -- everyone got a personalised launch code. (Which means that if someone stole and used your personalised code, you'd also get blamed -- but that seems fair.)

Comment by jacobjacob on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T18:25:31.782Z · score: 2 (1 votes) · LW · GW

Let's give Habryka a little more respect, eh?

I feel confused about how you interpreted my comment, and have edited it lightly. For the record, Habryka's comment seems basically right to me; I just wanted to add some nuance. 

Comment by jacobjacob on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T17:42:50.862Z · score: 2 (1 votes) · LW · GW

What exactly do you think "the lesson we need to take away from this" is?

(Feel free to just link if you wrote that elsewhere in this comment section)

Comment by jacobjacob on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T06:50:37.081Z · score: 14 (10 votes) · LW · GW

So, I think it's important that LessWrong admins do not get to unilaterally decide that You Are Now Playing a Game With Your Reputation. 

However, if Chris doesn't want to play, the action available to him is simply to not engage. I don't think he gets to both press the button and change the rules to decide what a button press means to other players. 

Comment by jacobjacob on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T06:46:28.252Z · score: 18 (7 votes) · LW · GW

Well, they did succeed, so for that they get points, but I think it was due more to a very weak defense on the part of the victim than to a very strong effort by petrov_day_admin_account. 

Like, the victim could have noticed things like: 
* The original instructions were sent over email + LessWrong message, but the phishing attempt came only via LessWrong message
* The original message was sent by Ben Pace, the latter by petrov_day_admin_account
* They were sent at different points in time, the latter correlated with the FB post that prompted the phishing attempt

Moreover, the attacker even sent messages to two real LessWrong team members, which would have completely revealed the attempt had those admins not been asleep in a different time zone.

Comment by jacobjacob on jacobjacob's Shortform Feed · 2020-09-27T01:37:06.242Z · score: 2 (1 votes) · LW · GW

(testing the editor)

Comment by jacobjacob on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T23:33:29.319Z · score: 11 (5 votes) · LW · GW

EDIT: I now believe the below contains substantial errors, after reading this message from the attacker. 

Maybe you want to do the sleuthing on your own; if so, don't read below. (It uses LessWrong's spoiler feature.)

I believe the adversary was a person outside of the EA and rationality communities. They had not planned this, and they did not think very hard about who they sent the messages to (and didn't realise Habryka and Raemon were admins). Rather, they saw a spur-of-the-moment opportunity to attack this system after seeing a Facebook post by Chris Leong (which solicited reasons for and against pressing the button). I believe this because they commented on that post of Chris Leong's and said they had sent the message.

Comment by jacobjacob on The rationalist community's location problem · 2020-09-24T06:16:25.160Z · score: 19 (8 votes) · LW · GW

Here are some other considerations. They sort of overlap with yours, but some people might find these frames carve things at a more helpful level of abstraction. 
 

Groundedness: Some people encounter rationality ideas and either go crazy, or do lots of harm to themselves (for example, by working themselves into burnout or depression from a sense of moral guilt). Living locations can be more or less conducive to this. Berkeley seems particularly bad -- it's filled with a pretty trippy aesthetic, and it feels unsafe/unwholesome in terms of various problems with homelessness, crime, etc. Oxford is a lot better. It's small, calm, beautiful, safe, and has a very stable and historic culture. Though it's still not on the Pareto frontier of groundedness. 

Proximity to power (or greatness on some other dimension): Hubs are real. People go to San Francisco to start startups, LA to become actors, London to work in finance, DC to work in think tanks... and so forth. For me this was an almost overwhelming consideration in wanting to live near San Francisco. Nowhere else has such a remarkable diversity of ambitious intellectuals: people like Jonathan Blow, Bret Victor, Peter Thiel (yes, I know he left eventually), Elon Musk, Michael Nielsen, the YC crowd, random people like the guy who wrote Thinking Physics, and many many others... Whenever I did not live here, I'd pay a lot of attention to where interesting people and projects were located. And a ridiculously high number of roads would lead back to SF. 

Comment by jacobjacob on The rationalist community's location problem · 2020-09-24T05:58:34.167Z · score: 2 (1 votes) · LW · GW

Ah, kudos for bringing this discussion to a somewhat centralised place! 

Comment by jacobjacob on What happens if you drink acetone? · 2020-09-17T00:57:34.337Z · score: 6 (4 votes) · LW · GW

dynomight (OP) -- if you write more of it, I'd probably read it (subscribed to your posts). 

Comment by jacobjacob on Is community-collaborative article production possible? · 2020-09-09T17:22:22.009Z · score: 8 (4 votes) · LW · GW

How did this go? Did you get around to piloting this? 

Comment by jacobjacob on Sunday September 6, 12pm (PT) — Casual hanging out with the LessWrong community · 2020-09-06T18:41:12.433Z · score: 2 (1 votes) · LW · GW

Thanks, fixed!

Comment by jacobjacob on Multitudinous outside views · 2020-08-18T06:49:16.731Z · score: 4 (2 votes) · LW · GW

I feel like this post made the point "You can come up with many plausible outside views for a given question". 

But it didn't really give me what I wanted when I clicked the title: discussion of how to choose between outside views (whether those be concrete heuristics or philosophical arguments). 

I'd be very curious to read some of your battle stories or case studies on this from your superforecasting years. 

Comment by jacobjacob on How to learn from conversations · 2020-08-18T04:43:44.463Z · score: 5 (3 votes) · LW · GW

I'm glad this post exists. 

Back at university, I used to find conversations difficult, and regularly failed to make new friends and connections. 

At the same time, I had a close friend who would walk up to senior professors or startup founders after they'd given talks, without having met them before, and after 15 min of conversation end up laughing together and getting invited to various cool opportunities. 

I was in awe. And I was confused. I asked him how he did it. His answer was very similar to some of the advice in this post. 

This was different from what I had been doing, which was roughly "Say something that sounds very complicated and insightful, and you'll impress them". (Ah, my youthful folly!)

I now think one of the core things allowing a good conversation to happen is to actually connect with someone -- coming to understand each other, and having the things you say either build on, or riff off of, each other in genuinely meaningful ways. There are different ways for those meaningful exchanges to happen. Metaphorically, they can be like ping-pong games, dances, musical jams, raising a barn or exploring a jungle. However, when conversations fail, they feel more like a blisterfeld, or two loudspeakers playing two different songs at the same time. 

I think this is also what makes your techniques tick. They're largely about building a particular kind of meaningful connection. (But I don't think they capture all of the art of conversation. For example, they don't cover "jamming" style conversations.)

Comment by jacobjacob on Sunday August 9, 1pm (PDT) — talks by elityre, jacobjacob, Ruby · 2020-08-09T18:44:49.290Z · score: 13 (3 votes) · LW · GW

Thanks for letting us know, lionhearted. You're welcome back another week if you have a talk you feel better about! :)

To all attendees -- the event will go ahead as planned with a replacement speaker (me!). 

Comment by jacobjacob on Forecasting Newsletter: July 2020. · 2020-08-02T02:21:54.340Z · score: 4 (2 votes) · LW · GW

Great work with this newsletter, keep it up. It's by far the best forecasting digest I've seen!

The major friction for me is that some of the formatting makes it feel overwhelming. Maybe use bold headings instead of bullet points for each new entry? Not sure. 

Comment by jacobjacob on "Can you keep this confidential? How do you know?" · 2020-07-31T02:42:47.348Z · score: 11 (3 votes) · LW · GW

This post was presented as a brief talk during the LessWrong curated talks event on July 19.

Here is a transcription of the Q&A session following the talk. 

---

mr-hire: I'm curious if you've thought about what training this at scale would look like. When I was younger, I remember being trained a little bit on how to keep secrets, just with friends and stuff, but what would it look like if you had ways to train more deliberately?

Raemon: I know some people who have actually thought about this a lot and have entire world views and practices oriented around it. For me, I noticed that I wasn’t good at keeping secrets, which was at times a problem for my relationships. So I actively thought about it for a while and increased my ability.

So the short answer is: do lots of thinking and practicing. To answer your question about training at scale, I have two ways that you can teach this.

First, you can teach people trigger-action plans for dealing with secrets. A key piece of such trigger-action plans is the ability to notice when you're in a conversation that might create sensitive information or bear on information you already have that is confidential.

Being able to notice that is much of the battle, and then having follow-up actions like “slow down and think before you say each new thing”, or “deflect the conversation in a new direction”. So that's a skill I came up with on my own. 

Another skill someone else pointed out, for situations where you find out confidential information about an organization, is to have different mental models for storing public and private information and keeping track of them differently. 

And then, when having a conversation, live inside one of the two different models. This doesn't work for me yet. I haven't really tried to do it, but it's a thing that seems to work for at least two people that I know of.

---

johnswentworth: So given that a big part of a secret is the fact that a secret exists, how do you ever trust that someone can keep secrets if they tell you that they have kept secrets before?

Raemon: Well, there technically was a whole second half of this talk. I have multiple blog posts coming that deal with a lot of like, "Hmm, this topic sucks. What do we do?" One of the problems is that people don't even necessarily mean the same thing by secret. Sometimes it means they just don't bring it up. Sometimes, it means they do not reveal any Bayesian information that can possibly inform people that the secret even exists.

And sometimes, it's just like Alice saying, "Bob, just don't tell Charlie. You can tell Dave. Just make sure it doesn't get back to Charlie." So what I try to do, when I notice that we're getting into a sensitive situation (or, ideally, if I have detected that the person I am speaking with has an ongoing, long-term relationship with the person where secrets are likely to come up), is to have a meta-conversation about what secrecy means to them, discussing what the various parameters of secrecy are and which ones are the most important to them, before a specific secret comes up.

Sometimes, the duration of a secret matters and you need to keep it to your grave. Other times it could simply be a controversial thing happening in the next three months and I need you to keep it quiet until it is over. 

Generally, I don’t think it is actually tractable to not give Bayesian information that you have any secrets. I think a slightly better equilibrium is where there's some glomarization of “I can neither confirm nor deny that I have secrets relating to this thing”, and you just always say that whenever this conversation comes up. And then, you get into the meta a bit. A practice that I like to do while having this meta-conversation is to avoid making eye contact. Not looking at each other while having the conversation ensures our micro-expressions aren't betraying any information until we have built up a little bit of trust.

Ben Pace: I am also much more likely to accept a secret if it's on a three-month scale rather than a four-year or permanent time scale. 

---

ricraz: I guess if we take a Hansonian perspective and we say, "Is keeping secrets really actually about keeping secrets...?"

Raemon: Oh my.

ricraz: A lot of people around here have quite high scrupulosity. I am fairly low on this. A lot of the time when I say to somebody, "Please keep it a secret," it's just mostly a social nicety. Maybe it reveals some information about them if they don't keep it secret, and maybe I'd feel a bit annoyed... But it feels much more like it's a standard part of the interaction I'm having with them. Maybe I'm ticking a box or something. 

It feels fairly rare that I'm telling somebody information that it's crucial for them to keep secret. And so, I wonder if actually outside a corporate context like, "Please don't give away our secret product plan," and stuff like that, how relevant is this actually in most social interactions?

Raemon: So another pet peeve is that, most of the time, I think people are doing something like what you just said--keep it on the downlow. Keep R0 of the secret <1 if you can. 

The tricky bit is that's what people need most of the time, but if the secret ends up causing them more damage than they expected, then they're like, "No, you said you would keep it a secret," and then retroactively judge you more harshly than you might have assumed. 

So one of the key things I want is transparency about what level of secret we're talking about. So if people start telling me anything confidential, one of the first things I say is, "FYI, the default thing I am going to offer you is, I will try a little bit to keep R0 of the secret less than one, but I'm not going to try that hard. And if you want more than that, you are asking me a favor and I'm checking if you want to ask me that favor."

And most of the time, they don't actually care about that higher level of secrecy. I do think in the rationalist sphere and the ecosystem, there are a lot of blurry lines between, "Ah, I'm just commenting about my friend," and, "Oh, my comments about my friend actually directly inform whether some other person's going to give that friend a grant," which makes this all a bit trickier. So the main thing I want to get out of this is to have common knowledge of what the default norms are and of what favors you're actually asking of people.

jacobjacob: So I heard you ask, Richard, if this really matters or if it matters only if you know some important intellectual property due to your work? And I think there's a thing where a small secret with someone might be fine, but if you need to keep track of 10 or 100 small secrets across a large number of people, then whenever you say a sentence, you have to run this working-memory process which checks the sentence against all these secrets you have to keep track of, in various weird interactions. And you get to a point where this messes up your ability to have conversations or to think clearly, even if the individual secrets themselves are not super important.

Ben Pace: Yeah. I don't enjoy how much obfuscating some people have to do in conversations with me when we're trying to talk about the same thing, but they can't say anything because it would betray that they know some information that I also know. I was expecting you to say something like “it's also hard to tell if someone's good at keeping secrets, and so, being able to keep secrets about small things is often the only way you can even check. If someone only keeps secrets about massive important things, it's often hard to know about those things, because you're not privy to them”. And so, it helps the person doing the right thing in smaller iterated games. 

---

George Lanetz: When I thought about this problem a while back, I decided it's impossible to really keep my privacy. So I committed to a life that won't require that. From your talks with other people, how often did you find that they actually care about their privacy? And how many people do that if they are not presidents or something like that?

Raemon: I run into social circles that are one or two steps removed from a lot of EA grants, and this comes up a lot there, where there are informal situations that turn out to have fairly strong financial stakes. So I think I've probably interacted with 10 people, plus or minus a few, where some kind of common understanding of what secrecy meant mattered.

Ben Pace: I also have really strong feelings about the things George said. I think Vitalik Buterin has said something relevant here. Namely that the less privacy you have, the more Hansonian things get: the more signaling that you're doing on all of your supposedly “private” conversations. I think the internet has generally done this a lot. People used to be able to have informal bonds of trust, but then suddenly it gets recorded and put online.

And then, a million people see it and you're like, "I can't live my life where a million people are watching all of my interactions, all of my moves, checking whether or not they're good." It would take more time to make this point in full.

Comment by jacobjacob on Sunday August 2, 12pm (PDT) — talks by jimrandomh, johnswenthworth, Daniel Filan, Jacobian · 2020-07-31T02:07:59.527Z · score: 2 (1 votes) · LW · GW

Gah, thanks, I'm too sloppy with these announcements. Will improve!

Comment by jacobjacob on Sunday July 19, 1pm (PDT) — talks by Raemon, ricraz, mr-hire, Jameson Quinn · 2020-07-19T19:13:02.820Z · score: 8 (4 votes) · LW · GW

You'll have to file a request with the LessWrong Customer Service Team, with office hours 9am to 5pm Pacific time, Monday through Thursday, and a lunch break at 12-1pm. 

Processing times for requests like this are usually around 4-5 days. 

Thank you for your patience. 

Comment by jacobjacob on Sunday July 19, 1pm (PDT) — talks by Raemon, ricraz, mr-hire, Jameson Quinn · 2020-07-19T19:02:39.955Z · score: 2 (1 votes) · LW · GW

Woop!

Comment by jacobjacob on Lessons on AI Takeover from the conquistadors · 2020-07-18T03:09:52.494Z · score: 2 (1 votes) · LW · GW

Lol, fixed, thanks. 

Comment by jacobjacob on Sunday July 19, 1pm (PDT) — talks by Raemon, ricraz, mr-hire, Jameson Quinn · 2020-07-16T23:18:24.524Z · score: 2 (1 votes) · LW · GW

Thanks, fixed. 

Comment by jacobjacob on Mazes and Duality · 2020-07-15T00:59:01.280Z · score: 2 (1 votes) · LW · GW

Thanks! David did most of the work. :)

Comment by jacobjacob on Sunday July 12 — talks by Scott Garrabrant, Alexflint, alexei, Stuart_Armstrong · 2020-07-08T19:12:21.008Z · score: 2 (1 votes) · LW · GW

Fixed, thanks!

Comment by jacobjacob on DontDoxScottAlexander.com - A Petition · 2020-06-25T19:39:53.486Z · score: 5 (3 votes) · LW · GW

We can remove duplicates. Thanks for highlighting. 

Comment by jacobjacob on Sunday June 21st – talks by Abram Demski, alkjash, orthonormal, eukaryote, Vaniver · 2020-06-21T17:17:13.764Z · score: 2 (1 votes) · LW · GW

Added! :)

Comment by jacobjacob on Covid-19 6/4: The Eye of the Storm · 2020-06-05T19:57:01.556Z · score: 3 (2 votes) · LW · GW

Any chance you could start including graphs rather than tables in your covid posts? :) 

Comment by jacobjacob on Covid-19: My Current Model · 2020-06-05T19:20:19.039Z · score: 8 (6 votes) · LW · GW

I don't think "too much Covid content" is the major problem here. Rather, the major problem with this essay is that it mostly states Zvi's updates, without sharing the data and reasoning that led him to make those updates. It's not going to convince anyone who doesn't already trust Zvi.  

This is perhaps an acceptable trade-off if people have to move fast and make decisions without being able to build their own models. But it's an emergency measure that's very costly long-term. 

And for the most important decisions, it is especially important that the people who make them build their own models of the situation. 

(For the record: I think Zvi's thinking on Covid is mostly extremely sensible, and I disagree with Sherrinford's comment below. So this is not about whether he's right or not. I'd bet he is, on average.)

Comment by jacobjacob on [Job ad] Lead an ambitious COVID-19 forecasting project [Deadline extended: June 10th] · 2020-05-27T16:48:35.918Z · score: 8 (4 votes) · LW · GW

I'm also posting a bounty for suggesting good candidates: $1000 for successful leads on a new project manager; $100 for leads on a top 5 candidate.

DETAILS

We will pay you $1000 if you:

  • Send us the name of a person…
  • …who we did not already have on our list…
  • …who we contacted because of your recommendation...
  • ...who ends up taking on the role

We will pay you $100 if the person ends up among the top 5 candidates (by our evaluation), but does not take the role (given the other above constraints).

There’s no requirement for you to submit more than just a name. Though, of course, providing intros, references, and so forth, would make it more likely that we could actually evaluate the candidate.

NO bounty will be awarded if you...

  • Mention the person who actually gets hired, but we never see your message
  • Mention a person who does not get hired/become a top 5 candidate
  • Nominate yourself and get hired
  • Nominate someone whom somebody else nominated first (if multiple people nominate the same person, the bounty goes to the first nomination we actually read and act on)

Remaining details will be at our discretion. Feel free to ask questions in comments.

You can private message me here.

Comment by jacobjacob on March Coronavirus Open Thread · 2020-04-02T23:27:19.126Z · score: 10 (5 votes) · LW · GW

In a few weeks, a number of public figures may find themselves doing an awkward about-face from "masks don't work and no one should wear them" to "masks do work and they are mandatory".

I want to record and reward how this prediction seems to be correct: https://www.washingtonpost.com/health/2020/04/02/coronavirus-facemasks-policyreversal/

Comment by jacobjacob on March 24th: Daily Coronavirus Link Updates · 2020-03-26T20:37:24.522Z · score: 2 (1 votes) · LW · GW

We used parameters based on a paper modelling Wuhan, which found that a ~2-day infectious period predicted spread best.

Adding cumulative statistics is in the pipeline; I or one of the devs might get around to it today.

Comment by jacobjacob on How can we estimate how many people are C19 infected in an area? · 2020-03-19T20:43:20.974Z · score: 11 (7 votes) · LW · GW

There's currently a Foretold community attempting to answer this question here, using both general Guesstimate models and human judgement taking into account the nuances of each country. We've hired some superforecasters from Good Judgement who will start working on it in a few days.

Comment by jacobjacob on Crisis and opportunity during coronavirus · 2020-03-17T17:28:46.904Z · score: 13 (3 votes) · LW · GW

After working on a pandemic forecasting dashboard for a week, I should add an additional reason why this is a good opportunity:

Access to resources

Software developers are an incredibly scarce resource, and they command massive salaries compared to many other jobs. But over the last week, I've received numerous offers from devs who are willing to volunteer 15+ hours a week.

Human attention is also scarce and it's hard to contact people. But when our team has reached out to more senior connections or collaborators, we've had a 100% reply rate.

If you're working on important covid-19 projects, there's an incredible number of people willing to help out at prices far below market rate.

Comment by jacobjacob on March Coronavirus Open Thread · 2020-03-13T04:48:48.560Z · score: 3 (2 votes) · LW · GW

If this were the case, it ought to be visible indirectly through its effect on Ohio's healthcare system. I haven't heard of such reports (and I do follow the situation fairly closely), but I haven't looked for them either.

Comment by jacobjacob on [deleted post] 2020-03-09T11:20:16.673Z

I adapted Eli Tyre's model into a spreadsheet where you can calculate the current number of cases in your country (by extrapolating from observed cases using some assumptions about doubling time and confirmation rate).
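
For anyone curious what that kind of extrapolation roughly looks like, here is a minimal sketch in Python of the general shape of the calculation. The parameter values below are illustrative assumptions, not the spreadsheet's actual numbers.

```python
def estimate_true_cases(confirmed_cases: float,
                        confirmation_rate: float = 0.2,       # assumed fraction of infections ever confirmed
                        doubling_time_days: float = 6.0,       # assumed epidemic doubling time
                        confirmation_delay_days: float = 10.0  # assumed lag from infection to confirmation
                        ) -> float:
    """Extrapolate the current number of infections from confirmed cases.

    Confirmed cases both undercount infections (confirmation_rate) and lag
    behind them (confirmation_delay_days), during which the epidemic keeps doubling.
    """
    infections_at_confirmation_time = confirmed_cases / confirmation_rate
    doublings_since_then = confirmation_delay_days / doubling_time_days
    return infections_at_confirmation_time * 2 ** doublings_since_then

print(round(estimate_true_cases(confirmed_cases=1000)))  # ~15,874 under these assumptions
```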

Comment by jacobjacob on Model estimating the number of infected persons in the bay area · 2020-03-09T10:07:05.588Z · score: 3 (2 votes) · LW · GW

I made a new version of your spreadsheet where you can select your location (from the Johns Hopkins list), instead of just looking at the Bay Area.

Comment by jacobjacob on Model estimating the number of infected persons in the bay area · 2020-03-09T08:00:03.165Z · score: 6 (3 votes) · LW · GW

Whereas the local steps are fairly clear, after a quick read I found it moderately confusing what this model was doing at a high level, and think some distillation could be helpful.

Comment by jacobjacob on Coronavirus: Justified Practical Advice Thread · 2020-03-08T12:05:41.514Z · score: 3 (2 votes) · LW · GW

There is a 5% chance of getting critical form of COVID (source: WHO report)

That's a 40-page report and quickly ctrl-f:ing "5 %" didn't find anything to corroborate your claim, so it would be helpful if you could elaborate on that.

Comment by jacobjacob on Blog Post Day (Unofficial) · 2020-02-18T19:10:49.220Z · score: 3 (2 votes) · LW · GW

What time zone will this be in?

There's a >20% chance I'll join. There's a much higher chance I'll show up to write some comments (which can also be an important thing).

I'm happy you're making this happen.

Comment by jacobjacob on [Link] Beyond the hill: thoughts on ontologies for thinking, essay-completeness and forecasting · 2020-02-14T22:24:51.082Z · score: 3 (2 votes) · LW · GW

I think it's useful to be able to translate between different ontologies

This is one thing that is done very well by apps like Airtable and Notion, in terms of allowing you to show the same content in different ontologies (table / kanban board / list / calendar / Pinterest-style mood board).

Similarly, when you’re using Roam for documents, you don’t have to decide upfront “Do I want to have high-level bullet-points for team members, or for projects?“. The ability to embed text blocks in different places means you can change to another ontology quite seamlessly later, while preserving the same content.

Ozzie Gooen pointed out to me that this is perhaps an abuse of terminology, since "the semantic data is the same, and that typically when 'ontology' is used for code environments, it describes what the data means, not how it’s displayed."

In response, I think the interesting thing I'm pointing at is that there is a bit of a continuum between different displays and different semantic data — two “displays” which are easily interchangeable in Roam will not be in Docs or Workflowy, as they lack the “embed bullet-point” functionality, even though superficially they're both just bullet-point lists.

Comment by jacobjacob on Bayes-Up: An App for Sharing Bayesian-MCQ · 2020-02-07T07:14:02.266Z · score: 2 (1 votes) · LW · GW

So far about 30'000 questions have been answered by about 1'300 users since the end of December 2019.

That's a surprisingly high number of people. Curious where they came from?

Comment by jacobjacob on how has this forum changed your life? · 2020-02-02T20:04:13.597Z · score: 16 (5 votes) · LW · GW

If you look at the top 10-20 or so posts, as well as a bunch of niche posts about machine learning and AI, you'll see the sort of discussion we tend to have best on LessWrong. I don't come here to get 'life-improvements' or 'self-help', I come here much more to be part of a small intellectual community that's very curious about human rationality.

I wanted to follow up on this a bit.

TLDR: While LessWrong readers tangentially care a lot about self-improvement, reading forums alone likely won't have a big effect on life success. But that's not really that relevant; the most relevant thing to look at is how much progress the community has made on the technical mathematical and philosophical questions it has focused on most. Unfortunately, that discussion is very hard to have without spending a lot of time doing actual maths and philosophy (though if you wanted to do that, I'm sure there are people who would be really happy to discuss those things).

___

If what you want to achieve is life-improvements, reading a forum seems like a confusing approach.

Things that I expect to work better are:

  • personally tailored 1-on-1 advice (e.g. seeing a sleep psychologist, a therapist, a personal trainer or a life coach)
  • working with great mentors or colleagues and learning from them
  • deliberate practice ― applying techniques for having more productive disagreements when you actually disagree with colleagues, implementing different productivity systems and seeing how well they work for you, regularly turning your beliefs into predictions and bets and checking how well you're actually reasoning
  • taking on projects that step the right distance beyond your comfort zone
  • just changing whatever part of your environment makes things bad for you (changing jobs, moving to another city, leaving a relationship, starting a relationship, changing your degree, buying a new desk chair, ...)

And even then, realistically, self-improvement might be quite slow. (Though the magic comes when you manage to compound such slow improvements over a long time-period.)

There's previously been some discussion here around whether being a LessWrong reader correlates with increased life success (see e.g. this and this).

For the community as a whole, the answer seems to be overwhelmingly positive. In the span of roughly a decade, people who combined ideas about how to reason under uncertainty with impartial altruistic values, and used those to conclude that it would be important to work on issues like AI alignment, have done some very impressive things (as judged by an outside perspective). They've launched billion-dollar foundations, set up research institutes with 30+ employees at some of the world's most prestigious universities, and gotten endorsements from some of the world's richest and most influential people, like Elon Musk and Bill Gates. (NOTE: I'm going to caveat these claims below.)

The effects on individual readers are a more complex issue and the relevant variables are harder to measure. (Personally I think there will be some improvements in something like "the ability to think clearly about hard problems", but that this will largely stem from readers of LessWrong already being selected for being the kinds of people who are good at that.)

Regardless, like Ben hints at, this partly seems like the wrong metric to focus on. This is the caveat.

While they are interested in self-improvement, one of the key things people at LessWrong have been trying to get at is reasoning safely about superintelligences. That means taking a problem that's far in the future, where the stakes are potentially very high, where there is no established field of research, and where thinking about it can feel weird and disorienting... and still trying to reason about it in a way where you get to the truth.

So personally I think the biggest victories are some impressive technical progress in this domain. Like, a bunch of maths and much conceptual philosophy.

I believe this because I have my own thoughts about what seems important to work on and what kinds of thinking make progress on those problems. To share those with someone who hasn't spent much time around LessWrong could take many hours of conversation. And I think often they would remain unconvinced. It's just hard to think and talk about complex issues in any domain. It would be similarly hard for me to understand why a biology PhD student thinks one theory is more important than another relying only on the merits of the theories, without any appeal to what other senior biologists think.

It's a situation where to understand why I think this is important someone might need to do a lot of maths and philosophy... which they probably won't do unless they already think it is important. I don't know how to solve that chicken-egg problem (except for talking to people who were independently curious about that kind of stuff). But my not being able to solve it doesn't change the fact that it's there. And that I did spend hundreds of hours engaging with the relevant content and now do have detailed opinions about it.

So, to conclude... people on LessWrong are trying to make progress on AI and rationality, and one important perspective for thinking about LessWrong is whether people are actually making progress on AI and rationality. I'd encourage you (Jon) to engage with that perspective as an important lens through which to understand LessWrong.

Having said that, I want to note that I'm glad that you seem to want to engage in good faith with people from LessWrong, and I hope you'll have some interesting conversations.

Comment by jacobjacob on The Loudest Alarm Is Probably False · 2020-01-25T21:11:45.882Z · score: 10 (5 votes) · LW · GW

I'd be quite curious about more concrete examples of systems where there is lots of pressure in *the wrong direction*, due to broken alarms. (Be they minds, organisations, or something else.) The OP hints at it with the consulting example, as does habryka in his nomination.

I strongly expect there to be interesting ones, but I have neither observed any nor spent much time looking.

Comment by jacobjacob on 2018 Review: Voting Results! · 2020-01-24T15:56:24.468Z · score: 10 (5 votes) · LW · GW

That seems like weak evidence of karma info-cascades: posts with more karma get more upvotes *simply because* they have more karma, in a way which ultimately doesn't correlate with their "true value" (as measured by the review process).

Potential mediating causes include users being anchored by karma, or more karma attracting a larger share of the userbase's attention (due to various sorting algorithms).
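
(As a toy illustration of that second mechanism, here is a small simulation sketch. The attention model and all parameters are made up for illustration; nothing here is fit to actual LessWrong data.)

```python
import random

def simulate_post(initial_karma: int, n_voters: int = 500, true_value: float = 0.5) -> int:
    """Toy cascade: a voter's chance of even looking at a post grows with its
    current karma; conditional on looking, they upvote with probability equal
    to the post's fixed "true value"."""
    karma = initial_karma
    for _ in range(n_voters):
        p_attention = min(1.0, 0.1 + 0.02 * karma)  # made-up attention model
        if random.random() < p_attention and random.random() < true_value:
            karma += 1
    return karma

random.seed(0)
# Identical true value, different starting karma: the initial gap amplifies itself.
print(sum(simulate_post(initial_karma=1) for _ in range(100)) / 100)
print(sum(simulate_post(initial_karma=10) for _ in range(100)) / 100)
```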

Comment by jacobjacob on Reality-Revealing and Reality-Masking Puzzles · 2020-01-16T22:58:29.050Z · score: 27 (10 votes) · LW · GW

Overall I'm still quite confused, so for my own benefit, I'll try to rephrase the problem here in my own words:

Engaging seriously with CFAR’s content adds lots of things and takes away a lot of things. You can get the affordance to creatively tweak your life and mind to get what you want, or the ability to reason with parts of yourself that were previously just a kludgy mess of something-hard-to-describe. You might lose your contentment with black-box fences and not applying reductionism everywhere, or the voice promising you'll finish your thesis next week if you just try hard enough.

But in general, simply taking out some mental stuff and inserting an equal amount of something else isn't necessarily a sanity-preserving process. This can be true even when the new content is more truth-tracking than what it removed. In a sense people are trying to move between two paradigms -- but often without any meta-level paradigm-shifting skills.

Like, if you feel common-sense reasoning is now nonsense, but you’re not sure how to relate to the singularity/rationality stuff, it's not an adequate response for me to say "do you want to double crux about that?" for the same reason that reading Bible verses isn't adequate advice to a reluctant atheist tentatively hanging around church.

I don’t think all techniques are symmetric, or that there aren't ways of resolving internal conflict which systematically lead to better results, or that you can’t trust your inside view when something superficially pattern matches to a bad pathway.

But I don’t know the answer to the question of “How do you reason, when one of your core reasoning tools is taken away? And when those tools have accumulated years of implicit wisdom, instinctively hill-climbing to protecting what you care about?”

I think sometimes these consequences are noticeable before someone fully undergoes them. For example, after going to CFAR I had close friends who were terrified of rationality techniques, and who were furious when I suggested they make some creative but unorthodox tweaks to their degree, in order to allow more time for interesting side-projects (or, as in Anna's example, finishing your PhD 4 months earlier). In fact, they were furious even at the mere suggestion of the potential existence of such tweaks. Curiously, these very same friends were also quite high-performing and far above average on Big 5 measures of intellect and openness. They surely understood the suggestions.

There can be many explanations of what's going on, and I'm not sure which is right. But one idea is simply that 1) some part of them had something to protect, and 2) some part correctly predicted that reasoning about these things in the way I suggested would inevitably lead to a major upheaval in their life.

I can imagine inside views that might generate discomfort like this.

  • "If AI was a problem, and the world is made of heavy tailed distributions, then only tail-end computer scientists matter and since I'm not one of those I lose my ability to contribute to the world and the things I care about won’t matter."
  • "If I engaged with the creative and principled optimisation processes rationalists apply to things, I would lose the ability to go to my mom for advice when I'm lost and trust her, or just call my childhood friend and rant about everything-and-nothing for 2h when I don't know what to do about a problem."

I don't know how to do paradigm-shifting; or what meta-level skills are required. Writing these words helped me get a clearer sense of the shape of the problem.

(Note: this comment was heavily edited for more clarity following some feedback)

Comment by jacobjacob on jacobjacob's Shortform Feed · 2020-01-14T21:08:48.584Z · score: 12 (3 votes) · LW · GW

I saw an ad for a new kind of pant: stylish as suit pants, but flexible as sweatpants. I didn't have time to order them now. But I saved the link in a new tab in my clothes database -- an Airtable that tracks all the clothes I own.

This crystallised some thoughts about external systems that have been brewing at the back of my mind. In particular, about the gears-level principles that make some of them useful and powerful.

When I say "external", I am pointing to things like spreadsheets, apps, databases, organisations, notebooks, institutions, room layouts... and distinguishing those from minds, thoughts and habits. (Though this distinction isn't exact, as will be clear below, and some of these ideas are at an early stage.)

Externalising systems allows for the following benefits...

1. Gathering answers to unsearchable queries

There are often things I want lists of, which are very hard to Google or research. For example:

  • List of groundbreaking discoveries that seem trivial in hindsight
  • List of different kinds of confusion, identified by their phenomenological qualia
  • List of good-faith arguments which are elaborate and rigorous, though uncertain, and which turned out to be wrong

etc.

Currently there is no search engine (but the human mind) capable of finding many of these answers (if I am expecting a certain level of quality). But for that reason researching the lists is also very hard.

The only way I can build these lists is by accumulating those nuggets of insight over time.

And the way I make that happen is to make sure to have external systems which are ready to capture those insights as they appear.

2. Seizing serendipity

Luck favours the prepared mind.

Consider the following anecdote:

Richard Feynman was fond of giving the following advice on how to be a genius. [As an example, he said that] you have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: "How did he do it? He must be a genius!"

I think this is true far beyond intellectual discovery. In order for the most valuable companies to exist, there must be VCs ready to fund those companies when their founders are toying with the ideas. In order for the best jokes to exist, there must be audiences ready to hear them.

3. Collision of producers and consumers

Wikipedia has a page on "Bayes theorem".

But it doesn't have a page on things like "The particular confusion that many people feel when trying to apply conservation of expected evidence to scenario X".

Why?

One answer is that more detailed pages aren't as useful. But I think that can't be the entire truth. Some of the greatest insights in science take a lot of sentences to explain (or, even if they have catchy conclusions, they depend on sub-steps which are hard to explain).

Rather, the survival of Wikipedia pages depends on both those who want to edit and those who want to read the page being able to find it. It depends on collisions, the emergence of natural Schelling points for accumulating content on a topic. And that's probably something like exponentially harder to accomplish the longer your thing takes to describe and search for.

Collisions don't just have to occur between different contributors. They must also occur across time.

For example, sometimes when I've had 3 different task management systems going, I end up just using a document at the end of the day. Because I can't trust that if I leave a task in any one of the systems, future Jacob will return to that same system to find it.

4. Allowing collaboration

External systems allow multiple people to contribute. This usually requires some formalism (a database, mathematical notation, lexicons, ...), and some sacrifice of flexibility (which grows superlinearly as the number of contributors grows).

5. Defining systems extensionally rather than intensionally

These are terms from analytic philosophy. Roughly, the "intension" of the concept "dog" is a furry, four-legged mammal which evolved to be friendly and cooperative with a human owner. The "extension" of "dog" is simply the set of all dogs: {Goofy, Sunny, Bo, Beast, Clifford, ...}

If you're defining a concept extensionally, you can simply point to examples as soon as you have some fleeting intuitive sense of what you're after, but long before you can articulate explicit necessary and sufficient conditions for the concept.

Similarly, an externalised system can grow organically, before anyone knows what it is going to become.

6. Learning from mistakes

I have a packing list database that I use when I travel. I input some parameters about how long I'll be gone and how warm the location is, and it'll output a checklist for everything I need to bring.

It's got at least 30 items per trip.

One unexpected benefit from this is that whenever I forget something -- sunglasses, plug converters, snacks -- I have a way to ensure I never make that mistake again. I simply add it to my database, and as long as future Jacob uses the database, he'll avoid repeating my error.
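
(If you wanted to replicate something like this outside of Airtable, a minimal sketch might look like the following. The items, thresholds, and rules are invented examples rather than my actual database.)

```python
def packing_list(trip_days: int, warm_destination: bool) -> list:
    """Generate a packing checklist from a few trip parameters.
    Items and rules are illustrative, not the real database."""
    items = ["passport", "phone charger", "toothbrush",
             "sunglasses", "plug converter", "snacks"]  # once-forgotten items, now permanent entries
    items += ["t-shirt"] * min(trip_days, 7)  # cap at a week's worth, then do laundry
    items += ["swimsuit"] if warm_destination else ["warm jacket", "gloves"]
    if trip_days > 3:
        items.append("laptop + charger")
    return items

# Forget something on a trip? Add it once, and every future checklist includes it.
print(packing_list(trip_days=5, warm_destination=True))
```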

This is similar to Ray Dalio's Principles. I recall him suggesting that the act of writing down and reifying his guiding wisdom gave him a way to seize mistakes and turn them into a stronger future self.

This is also true for the GitHub repo of the current project I'm working on. Whenever I visit our site and find a bug, I have a habit of immediately filing an issue, for it to be solved later. There is a pipeline whereby these real-world nuggets of user experience -- hard-won lessons from interacting with the app "in the field", that you couldn't practically have predicted from first principles -- get converted into a better app. So, whenever a new bug is picked up by me or a user, in addition to annoyance, it causes a little flinch of excitement (though the same might not be true for our main developer...). This also relates to the fact that we're dealing in code: any mistake can be fixed in such a way that no future user will experience it.

Comment by jacobjacob on Key Decision Analysis - a fundamental rationality technique · 2020-01-12T20:40:15.490Z · score: 2 (1 votes) · LW · GW

For some reason seeing all this concreteness made me more excited/likely to try this technique.

Comment by jacobjacob on Key Decision Analysis - a fundamental rationality technique · 2020-01-12T09:52:59.852Z · score: 6 (3 votes) · LW · GW

I'm curious, could you share more details about what patterns you observed, and which heuristics you actually seemed to use?