Posts

Crisis and opportunity during coronavirus 2020-03-12T20:20:55.703Z · score: 68 (29 votes)
How do risks differ between locations? (re: Coronavirus) 2020-03-08T13:35:08.169Z · score: 10 (3 votes)
[Link] Beyond the hill: thoughts on ontologies for thinking, essay-completeness and forecasting 2020-02-02T12:39:06.563Z · score: 35 (8 votes)
[Part 1] Amplifying generalist research via forecasting – Models of impact and challenges 2019-12-19T15:50:33.412Z · score: 52 (12 votes)
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T15:49:45.901Z · score: 48 (12 votes)
Running Effective Structured Forecasting Sessions 2019-09-06T21:30:25.829Z · score: 21 (5 votes)
How to write good AI forecasting questions + Question Database (Forecasting infrastructure, part 3) 2019-09-03T14:50:59.288Z · score: 31 (12 votes)
AI Forecasting Resolution Council (Forecasting infrastructure, part 2) 2019-08-29T17:35:26.962Z · score: 31 (13 votes)
Could we solve this email mess if we all moved to paid emails? 2019-08-11T16:31:10.698Z · score: 32 (15 votes)
AI Forecasting Dictionary (Forecasting infrastructure, part 1) 2019-08-08T16:10:51.516Z · score: 41 (22 votes)
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:58.633Z · score: 41 (10 votes)
Does improved introspection cause rationalisation to become less noticeable? 2019-07-30T10:03:00.202Z · score: 28 (8 votes)
Prediction as coordination 2019-07-23T06:19:40.038Z · score: 46 (14 votes)
jacobjacob's Shortform Feed 2019-07-23T02:56:35.132Z · score: 18 (3 votes)
When does adding more people reliably make a system better? 2019-07-19T04:21:06.287Z · score: 35 (10 votes)
How can guesstimates work? 2019-07-10T19:33:46.002Z · score: 26 (8 votes)
Can we use ideas from ecosystem management to cultivate a healthy rationality memespace? 2019-06-13T12:38:42.809Z · score: 37 (7 votes)
AI Forecasting online workshop 2019-05-10T14:54:14.560Z · score: 32 (6 votes)
What are CAIS' boldest near/medium-term predictions? 2019-03-28T13:14:32.800Z · score: 35 (10 votes)
Formalising continuous info cascades? [Info-cascade series] 2019-03-13T10:55:46.133Z · score: 17 (4 votes)
How large is the harm from info-cascades? [Info-cascade series] 2019-03-13T10:55:38.872Z · score: 23 (4 votes)
How can we respond to info-cascades? [Info-cascade series] 2019-03-13T10:55:25.685Z · score: 15 (3 votes)
Distribution of info-cascades across fields? [Info-cascade series] 2019-03-13T10:55:17.194Z · score: 15 (3 votes)
Understanding information cascades 2019-03-13T10:55:05.932Z · score: 55 (19 votes)
Unconscious Economics 2019-02-27T12:58:50.320Z · score: 81 (32 votes)
How important is it that LW has an unlimited supply of karma? 2019-02-11T01:41:51.797Z · score: 30 (12 votes)
When should we expect the education bubble to pop? How can we short it? 2019-02-09T21:39:10.918Z · score: 41 (12 votes)
What is a reasonable outside view for the fate of social movements? 2019-01-04T00:21:20.603Z · score: 36 (12 votes)
List of previous prediction market projects 2018-10-22T00:45:01.425Z · score: 33 (9 votes)
Four kinds of problems 2018-08-21T23:01:51.339Z · score: 41 (19 votes)
Brains and backprop: a key timeline crux 2018-03-09T22:13:05.432Z · score: 89 (24 votes)
The Copernican Revolution from the Inside 2017-11-01T10:51:50.127Z · score: 145 (71 votes)

Comments

Comment by jacobjacob on March 24th: Daily Coronavirus Link Updates · 2020-03-26T20:37:24.522Z · score: 2 (1 votes) · LW · GW

We used parameters based on a paper modelling Wuhan, which found that a ~2-day infectious period best predicted the spread.

Adding cumulative statistics is in the pipeline; I or one of the devs might get around to it today.

Comment by jacobjacob on How can we estimate how many people are C19 infected in an area? · 2020-03-19T20:43:20.974Z · score: 11 (7 votes) · LW · GW

There's currently a Foretold community attempting to answer this question here, using both general Guesstimate models and human judgement taking into account the nuances of each country. We've hired some superforecasters from Good Judgement who will start working on it in a few days.


(Tangential: as part of the Epidemic Forecasting project at FHI we are feeding this data into GLEAM, which is a global SEIR model running on high-performance computers, based on a database of millions of airline and commute connections. The model also tries to factor in seasonality, air traffic reductions, and the effectiveness of various containment measures.)
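
For readers unfamiliar with the model class, here's a minimal single-population SEIR sketch in Python (toy code of mine, not GLEAM; all parameter values are illustrative, except the ~2-day infectious period mentioned above):

```python
def seir_step(S, E, I, R, beta, incubation_days, infectious_days, dt=1.0):
    """One Euler step of a toy single-population SEIR model (not GLEAM)."""
    N = S + E + I + R
    new_exposed    = beta * S * I / N       # S -> E: transmission
    new_infectious = E / incubation_days    # E -> I: incubation ends
    new_recovered  = I / infectious_days    # I -> R: infectious period ends
    S -= new_exposed * dt
    E += (new_exposed - new_infectious) * dt
    I += (new_infectious - new_recovered) * dt
    R += new_recovered * dt
    return S, E, I, R

# Illustrative run: ~2-day infectious period (per the Wuhan paper); everything else made up.
S, E, I, R = 10_000_000, 0, 100, 0
for day in range(120):
    S, E, I, R = seir_step(S, E, I, R, beta=1.0, incubation_days=5, infectious_days=2)
print(f"Infectious after 120 days: {I:,.0f}")
```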

Comment by jacobjacob on Crisis and opportunity during coronavirus · 2020-03-17T17:28:46.904Z · score: 13 (3 votes) · LW · GW

After working on this for a week, with a team building a forecasting dashboard based on global pandemic modelling software, I should add another reason why this is a good opportunity:

Access to resources

Software developers are an incredibly scarce resource, and they command massive salaries compared to many other jobs. But over the last week, I've received numerous offers from devs willing to volunteer 15+ hours a week.

Human attention is also scarce, and it's hard to get people to reply. But when our team reached out to more senior connections or collaborators, we had a 100% reply rate.

If you're working on important covid-19 projects, there's an incredible number of people willing to help out at prices far below market rate.

Comment by jacobjacob on Coronavirus Open Thread · 2020-03-13T04:48:48.560Z · score: 3 (2 votes) · LW · GW

If this were the case, it ought to be visible indirectly through its effect on Ohio's healthcare system. I haven't heard of such reports (and I do follow the situation fairly closely), but I haven't looked for them either.

Comment by jacobjacob on How do risks differ between locations? (re: Coronavirus) · 2020-03-09T11:20:16.673Z · score: 4 (2 votes) · LW · GW

I adapted Eli Tyre's model into a spreadsheet where you can calculate the current number of cases in your country (by extrapolating from observed cases using some assumptions about doubling time and confirmation rate).
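
The core calculation is roughly the following (a sketch with my own function and parameter names; the numbers are placeholders, not the spreadsheet's actual assumptions):

```python
def estimate_true_cases(confirmed, confirmation_rate, doubling_time_days, detection_lag_days):
    """Extrapolate current true infections from confirmed case counts.

    confirmed           -- officially confirmed cases today
    confirmation_rate   -- fraction of true cases that ever get confirmed
    doubling_time_days  -- current doubling time of the epidemic
    detection_lag_days  -- delay from infection to confirmation
    """
    cases_when_detected = confirmed / confirmation_rate            # undo under-ascertainment
    growth_since = 2 ** (detection_lag_days / doubling_time_days)  # growth over the lag
    return cases_when_detected * growth_since

# Placeholder numbers, purely illustrative:
print(estimate_true_cases(confirmed=500, confirmation_rate=0.2,
                          doubling_time_days=5, detection_lag_days=10))
# 500 / 0.2 * 2**(10/5) = 2500 * 4 = 10,000
```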

Comment by jacobjacob on Model estimating the number of infected persons in the bay area · 2020-03-09T10:07:05.588Z · score: 3 (2 votes) · LW · GW

I made a new version of your spreadsheet where you can select your location (from the Johns Hopkins list), instead of just looking at the Bay Area.

Comment by jacobjacob on Model estimating the number of infected persons in the bay area · 2020-03-09T08:00:03.165Z · score: 6 (3 votes) · LW · GW

Whereas the local steps are fairly clear, after a quick read I found it moderately confusing what this model was doing at a high level, and think some distillation could be helpful.

Comment by jacobjacob on Coronavirus: Justified Practical Advice Thread · 2020-03-08T12:05:41.514Z · score: 3 (2 votes) · LW · GW
There is a 5% chance of getting critical form of COVID (source: WHO report)

That's a 40-page report, and quickly ctrl-F'ing for "5%" didn't turn up anything to corroborate your claim, so it would be helpful if you could elaborate on that.

Comment by jacobjacob on Blog Post Day (Unofficial) · 2020-02-18T19:10:49.220Z · score: 3 (2 votes) · LW · GW

What time zone will this be in?

There's a >20% chance I'll join. There's a much higher chance I'll show up to write some comments (which can also be an important thing).

I'm happy you're making this happen.

Comment by jacobjacob on [Link] Beyond the hill: thoughts on ontologies for thinking, essay-completeness and forecasting · 2020-02-14T22:24:51.082Z · score: 3 (2 votes) · LW · GW
I think it's useful to be able to translate between different ontologies

Apps like Airtable and Notion do this very well, letting you show the same content in different ontologies (table / kanban board / list / calendar / pinterest-style mood board).

Similarly, when you’re using Roam for documents, you don’t have to decide upfront “Do I want to have high-level bullet-points for team members, or for projects?“. The ability to embed text blocks in different places means you can change to another ontology quite seamlessly later, while preserving the same content.

Ozzie Gooen pointed out to me that this is perhaps an abuse of terminology, since "the semantic data is the same, and that typically when 'ontology' is used for code environments, it describes what the data means, not how it’s displayed."

In response, I think the interesting thing I'm pointing at is that there is a bit of a continuum between different displays and different semantic data — two “displays” which are easily interchangeable in Roam will not be in Docs or Workflowy, as those lack the “embed bullet-point” functionality, even though superficially they're all just bullet-point lists.

Comment by jacobjacob on Bayes-Up: An App for Sharing Bayesian-MCQ · 2020-02-07T07:14:02.266Z · score: 2 (1 votes) · LW · GW

So far about 30'000 questions have been answered by about 1'300 users since the end of December 2019.

That's a surprisingly high number of people. Curious where they came from?

Comment by jacobjacob on how has this forum changed your life? · 2020-02-02T20:04:13.597Z · score: 16 (5 votes) · LW · GW
If you look at the top 10-20 or so post, as well as a bunch of niche posts about machine learning and AI, you'll see the sort of discussion we tend to have best on LessWrong. I don't come here to get 'life-improvements' or 'self-help', I come here much more to be part of a small intellectual community that's very curious about human rationality.

I wanted to follow up on this a bit.

TLDR: While LessWrong readers care a lot about self-improvement, reading forums alone likely won't have a big effect on life success. But that's not really that relevant; the most relevant thing to look at is how much progress the community has made on the technical, mathematical and philosophical questions it has focused on most. Unfortunately, that discussion is very hard to have without spending a lot of time doing actual maths and philosophy (though if you wanted to do that, I'm sure there are people who would be really happy to discuss those things).

___

If what you wanted to achieve was life-improvements, reading a forum seems like a confusing approach.

Things that I expect to work better are:

  • personally tailored 1-on-1 advice (e.g. seeing a sleep psychologist, a therapist, a personal trainer or a life coach)
  • working with great mentors or colleagues and learning from them
  • deliberate practice ― applying techniques for having more productive disagreements when you actually disagree with colleagues, implementing different productivity systems and seeing how well they work for you, regularly turning your beliefs into predictions and bets and checking how well you're actually reasoning
  • taking on projects that step the right distance beyond your comfort zone
  • just changing whatever part of your environment makes things bad for you (changing jobs, moving to another city, leaving a relationship, starting a relationship, changing your degree, buying a new desk chair, ...)

And even then, realistic expectations for self-improvement might be quite slow. (Though the magic comes when you manage to compound such slow improvements over a long time-period.)

There's previously been some discussion here around whether being a LessWrong reader correlates with increased life success (see e.g. this and this).

For the community as a whole, the answer seems to be overwhelmingly positive. In the span of roughly a decade, people who combined ideas about how to reason under uncertainty with impartial altruistic values, and used those to conclude that it would be important to work on issues like AI alignment, have done some very impressive things (as judged from an outside perspective). They've launched billion-dollar foundations, set up 30+ employee research institutes at some of the world's most prestigious universities, and gotten endorsements from some of the world's richest and most influential people, like Elon Musk and Bill Gates. (NOTE: I'm going to caveat these claims below.)

The effects on individual readers are a more complex issue and the relevant variables are harder to measure. (Personally I think there will be some improvements in something like "the ability to think clearly about hard problems", but that this will largely stem from readers of LessWrong already being selected for being the kinds of people who are good at that.)

Regardless, like Ben hints at, this partly seems like the wrong metric to focus on. This is the caveat.

While interested in self-improvement, one of the key things people on LessWrong have been trying to get at is reasoning safely about superintelligence: to take a problem that's far in the future, where the stakes are potentially very high, where there is no established field of research, and where thinking about it can feel weird and disorienting... and still try to do so in a way where you get to the truth.

So personally I think the biggest victories are some pieces of impressive technical progress in this domain: a bunch of maths, and a lot of conceptual philosophy.

I believe this because I have my own thoughts about what seems important to work on and what kinds of thinking make progress on those problems. To share those with someone who hasn't spent much time around LessWrong could take many hours of conversation. And I think often they would remain unconvinced. It's just hard to think and talk about complex issues in any domain. It would be similarly hard for me to understand why a biology PhD student thinks one theory is more important than another relying only on the merits of the theories, without any appeal to what other senior biologists think.

It's a situation where to understand why I think this is important someone might need to do a lot of maths and philosophy... which they probably won't do unless they already think it is important. I don't know how to solve that chicken-egg problem (except for talking to people who were independently curious about that kind of stuff). But my not being able to solve it doesn't change the fact that it's there. And that I did spend hundreds of hours engaging with the relevant content and now do have detailed opinions about it.

So, to conclude... people on LessWrong are trying to make progress on AI and rationality, and one important perspective for thinking about LessWrong is whether people are actually making progress on AI and rationality. I'd encourage you (Jon) to engage with that perspective as an important lens through which to understand LessWrong.

Having said that, I want to note that I'm glad that you seem to want to engage in good faith with people from LessWrong, and I hope you'll have some interesting conversations.

Comment by jacobjacob on The Loudest Alarm Is Probably False · 2020-01-25T21:11:45.882Z · score: 9 (4 votes) · LW · GW

I'd be quite curious about more concrete examples of systems where there is lots of pressure in *the wrong direction*, due to broken alarms. (Be they minds, organisations, or something else.) The OP hints at it with the consulting example, as does habryka in his nomination.

I strongly expect there to be interesting ones, but I have neither observed any nor spent much time looking.

Comment by jacobjacob on 2018 Review: Voting Results! · 2020-01-24T15:56:24.468Z · score: 10 (5 votes) · LW · GW

That seems like weak evidence of karma info-cascades: posts with more karma get more upvotes *simply because* they have more karma, in a way which ultimately doesn't correlate with their "true value" (as measured by the review process).

Potential mediating causes include users being anchored by karma, or more karma causing a larger share of the attention of the userbase (due to various sorting algorithms).

Comment by jacobjacob on Reality-Revealing and Reality-Masking Puzzles · 2020-01-16T22:58:29.050Z · score: 27 (10 votes) · LW · GW

Overall I'm still quite confused, so for my own benefit, I'll try to rephrase the problem here in my own words:

Engaging seriously with CFAR’s content adds lots of things and takes away a lot of things. You can get the affordance to creatively tweak your life and mind to get what you want, or the ability to reason with parts of yourself that were previously just a kludgy mess of something-hard-to-describe. You might lose your contentment with black-box fences and not applying reductionism everywhere, or the voice promising you'll finish your thesis next week if you just try hard enough.

But in general, simply taking out some mental stuff and inserting an equal amount of something else isn't necessarily a sanity-preserving process. This can be true even when the new content is more truth-tracking than what it removed. In a sense people are trying to move between two paradigms -- but often without any meta-level paradigm-shifting skills.

Like, if you feel common-sense reasoning is now nonsense, but you’re not sure how to relate to the singularity/rationality stuff, it's not an adequate response for me to say "do you want to double crux about that?" for the same reason that reading bible verses isn't adequate advice to a reluctant atheist tentatively hanging around church.

I don’t think all techniques are symmetric, or that there aren't ways of resolving internal conflict which systematically lead to better results, or that you can’t trust your inside view when something superficially pattern matches to a bad pathway.

But I don’t know the answer to the question of “How do you reason, when one of your core reasoning tools is taken away? And when those tools have accumulated years of implicit wisdom, instinctively hill-climbing to protecting what you care about?”

I think sometimes these consequences are noticeable before someone fully undergoes them. For example, after going to CFAR I had close friends who were terrified of rationality techniques, and who were furious when I suggested they make some creative but unorthodox tweaks to their degrees, in order to allow more time for interesting side-projects (or, as in Anna's example, finishing a PhD 4 months earlier). In fact, they were furious even at the mere suggestion of the potential existence of such tweaks. Curiously, these very same friends were also quite high-performing and far above average on Big 5 measures of intellect and openness. They surely understood the suggestions.

There can be many explanations of what's going on, and I'm not sure which is right. But one idea is simply that 1) some part of them had something to protect, and 2) some part correctly predicted that reasoning about these things in the way I suggested would lead to a major and inevitable life up-turning.

I can imagine inside views that might generate discomfort like this.

  • "If AI was a problem, and the world is made of heavy tailed distributions, then only tail-end computer scientists matter and since I'm not one of those I lose my ability to contribute to the world and the things I care about won’t matter."
  • "If I engaged with the creative and principled optimisation processes rationalists apply to things, I would lose the ability to go to my mom for advice when I'm lost and trust her, or just call my childhood friend and rant about everything-and-nothing for 2h when I don't know what to do about a problem."

I don't know how to do paradigm-shifting; or what meta-level skills are required. Writing these words helped me get a clearer sense of the shape of the problem.

(Note: this comment was heavily edited for more clarity following some feedback)

Comment by jacobjacob on jacobjacob's Shortform Feed · 2020-01-14T21:08:48.584Z · score: 12 (3 votes) · LW · GW

I saw an ad for a new kind of pant: stylish as suit pants, but flexible as sweatpants. I didn't have time to order them now. But I saved the link in a new tab in my clothes database -- an Airtable that tracks all the clothes I own.

This crystallised some thoughts about external systems that have been brewing at the back of my mind: in particular, the gears-level principles that make some of them useful and powerful.

When I say "external", I am pointing to things like spreadsheets, apps, databases, organisations, notebooks, institutions, room layouts... and distinguishing those from minds, thoughts and habits. (Though this distinction isn't exact, as will be clear below, and some of these ideas are at an early stage.)

Externalising systems allows the following benefits...

1. Gathering answers to unsearchable queries

There are often things I want lists of, which are very hard to Google or research. For example:

  • List of groundbreaking discoveries that seem trivial in hindsight
  • List of different kinds of confusion, identified by their phenomenological qualia
  • List of good-faith arguments which are elaborate and rigorous, though uncertain, and which turned out to be wrong

etc.

Currently there is no search engine (other than the human mind) capable of finding many of these answers (if I am expecting a certain level of quality). For the same reason, researching the lists directly is also very hard.

The only way I can build these lists is by accumulating those nuggets of insight over time.

And the way I make that happen, is to make sure to have external systems which are ready to capture those insights as they appear.

2. Seizing serendipity

Luck favours the prepared mind.

Consider the following anecdote:

Richard Feynman was fond of giving the following advice on how to be a genius. [As an example, he said that] you have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: "How did he do it? He must be a genius!"

I think this is true far beyond intellectual discovery. In order for the most valuable companies to exist, there must be VCs ready to fund those companies when their founders are toying with the ideas. In order for the best jokes to exist, there must be audiences ready to hear them.

3. Collision of producers and consumers

Wikipedia has a page on "Bayes theorem".

But it doesn't have a page on things like "The particular confusion that many people feel when trying to apply conservation of expected evidence to scenario X".

Why?

One answer is that more detailed pages aren't as useful. But I think that can't be the entire truth. Some of the greatest insights in science take a lot of sentences to explain (or, even if they have catchy conclusions, they depend on sub-steps which are hard to explain).

Rather, the survival of Wikipedia pages depends on both those who want to edit and those who want to read the page being able to find it. It depends on collisions, the emergence of natural Schelling points for accumulating content on a topic. And that's probably something like exponentially harder to accomplish the longer your thing takes to describe and search for.

Collisions don't just have to occur between different contributors. They must also occur across time.

For example, sometimes when I've had 3 different task management systems going, I end up just using a document at the end of the day. Because I can't trust that if I leave a task in any one of the systems, future Jacob will return to that same system to find it.

4. Allowing collaboration

External systems allow multiple people to contribute. This usually requires some formalism (a database, mathematical notation, lexicons, ...), and some sacrifice of flexibility (which grows superlinearly as the number of contributors grows).

5. Defining systems extensionally rather than intensionally

These are terms from analytic philosophy. Roughly, the "intension" of the concept "dog" is a description: a furry, four-legged mammal which evolved to be friendly and cooperative with a human owner. The "extension" of "dog" is simply the set of all dogs: {Goofy, Sunny, Bo, Beast, Clifford, ...}

If you're defining a concept extensionally, you can simply point to examples as soon as you have some fleeting intuitive sense of what you're after, but long before you can articulate explicit necessary and sufficient conditions for the concept.

Similarly, an externalised system can grow organically, before anyone knows what it is going to become.
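
To make the distinction concrete, here's a toy sketch in Python (the names and attributes are my own invented examples):

```python
# Intensional definition: explicit necessary-and-sufficient conditions.
def is_dog_intensional(animal) -> bool:
    return (animal.is_mammal and animal.legs == 4
            and animal.is_furry and animal.is_domesticated)

# Extensional definition: just enumerate the members you've collected so far.
dogs_extensional = {"Goofy", "Sunny", "Bo", "Beast", "Clifford"}

# The extensional set can grow organically, long before you can articulate
# the conditions above:
dogs_extensional.add("Laika")
```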

6. Learning from mistakes

I have a packing-list database that I use when I travel. I input some parameters about how long I'll be gone and how warm the location is, and it'll output a checklist for everything I need to bring.

It's got at least 30 items per trip.

One unexpected benefit from this, is that whenever I forget something -- sunglasses, plug converters, snacks -- I have a way to ensure I never make that mistake again. I simply add it to my database, and as long as future Jacob uses the database, he'll avoid repeating my error.
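
A minimal sketch of what this kind of system looks like in code (my own toy reimplementation, not the actual Airtable; the items and rules are invented):

```python
def packing_list(days: int, warm: bool) -> list[str]:
    """Toy version of the packing database: trip parameters in, checklist out."""
    items = ["passport", "phone charger", "toothbrush"]
    items += ["t-shirt"] * min(days, 7)          # cap at a week's worth, then do laundry
    items += ["sunglasses"] if warm else ["gloves", "warm hat"]
    if days > 3:
        items.append("plug converter")
    return items

# The key move: every forgotten item becomes a one-line permanent fix.
# (Forgot snacks last trip? Add them, and all future trips are covered.)
checklist = packing_list(days=5, warm=True)
checklist.append("snacks")
```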

This is similar to Ray Dalio's Principles. I recall him suggesting that the act of writing down and reifying his guiding wisdom gave him a way to seize mistakes and turn them into a stronger future self.

This is also true for the Github repo of the current project I'm working on. Whenever I visit our site and find a bug, I have a habit of immediately filing an issue, for it to be solved later. There is a pipeline whereby these real-world nuggets of user experience -- hard-won lessons from interacting with the app "in-the-field", that you couldn't practically have predicted from first principles -- get converted into a better app. So, whenever a new bug is picked up by me or a user, in addition to annoyance, it causes a little flinch of excitement (though the same might not be true for our main developer...). This also relates to the fact that we're dealing in code: any mistake can be fixed in such a way that no future user will experience it.

Comment by jacobjacob on Key Decision Analysis - a fundamental rationality technique · 2020-01-12T20:40:15.490Z · score: 2 (1 votes) · LW · GW

For some reason seeing all this concreteness made me more excited/likely to try this technique.

Comment by jacobjacob on Key Decision Analysis - a fundamental rationality technique · 2020-01-12T09:52:59.852Z · score: 6 (3 votes) · LW · GW

I'm curious, could you share more details about what patterns you observed, and which heuristics you actually seemed to use?

Comment by jacobjacob on Voting Phase of 2018 LW Review · 2020-01-11T23:31:31.904Z · score: 15 (7 votes) · LW · GW

I voted in category mode, and am some way through fine-tuning in quadratic mode.

Comment by jacobjacob on What is Life in an Immoral Maze? · 2020-01-07T10:49:24.967Z · score: 2 (1 votes) · LW · GW

It is common for people who quit (based on personal experiences of friends) to have no idea how to actually do real object-level work

I'm quite surprised by this but don't find it entirely implausible.

Concretely, what evidence caused you to believe it? I'm curious about data (anecdotes, studies, experience, ...) rather than models.

Comment by jacobjacob on What were the biggest discoveries / innovations in AI and ML? · 2020-01-06T08:38:10.843Z · score: 27 (8 votes) · LW · GW

Check the section called "derivations" at http://mediangroup.org/insights: it links to a document attempting to list every conceptual breakthrough in AI above a certain significance. There's related discussion of the forecasting implications here: https://ai.metaculus.com/questions/2920/will-the-growth-rate-of-conceptual-ai-insights-remain-linear-in-2019/

Comment by jacobjacob on [Part 1] Amplifying generalist research via forecasting – Models of impact and challenges · 2020-01-05T10:51:07.467Z · score: 3 (2 votes) · LW · GW

Question 3

It seems like Ozzie is answering on a more abstract level than the question was asked. There's a difference between "How valuable will it be to answer question X?" (what Ozzie said) and "How outsourceable is question X?" (what Lawrence's question was related to).

I think that outsourceability would be a sub-property of Tractability.

In more detail, some properties I imagine to affect outsourceability, are whether the question:

1) Requires in-depth domain knowledge/experience

2) Requires substantial back-and-forth between question asker and question answerer to get the intention right

3) Relies on hard-to-communicate intuitions

4) Cannot easily be converted into a quantitative distribution

5) Has independent subcomponents which can be answered separately and don't rely on each other to be answered (related to Lawrence's point about tractability)

Comment by jacobjacob on [Part 1] Amplifying generalist research via forecasting – Models of impact and challenges · 2020-01-05T10:27:47.817Z · score: 2 (1 votes) · LW · GW

I'll try to paraphrase you (as well as extrapolating a bit) to see if I get what you're saying:

Say you want some research done. The most straightforward way to do so is to just hire a researcher. This "freeform" approach affords a lot of flexibility in how you delegate, evaluate, communicate, reward and aggregate the research. You can build up subtle, shared intuitions with your researchers, and invest a lot of effort in your ability to communicate nuanced and difficult instructions. You can also pick highly independent researchers who are able to make many decisions for themselves in terms of what to research, and how to research it.
By using "amplification" schemes and other mechanisms, you're placing significant restrictions on your ability to do all of those things. Hence you better get some great returns to compensate.
But looking through various ways you might get these benefits, they all seem at best... fine.
Hence the worry is that despite all the bells-and-whistles, there's actually no magic happening. This is just like hiring a researcher, but a bit worse. This is only "amplification" in a trivial sense.
As a corollary, if your research needs seem to be met by a handful of in-house researchers, this method wouldn't be very helpful to you.

1) Does this capture your views?

2) I'm curious what you think of the sections: "Mitigating capacity bottlenecks" and "A way for intellectual talent to build and demonstrate their skills"?

In particular, I didn't feel like your comment engaged with A) the scalability of the approach, compared to the freeform approach, and B) that it might be used as a "game" for young researchers to build skills and reputation, which seems way harder to do with the freeform approach.

Comment by jacobjacob on [Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration · 2020-01-05T09:37:18.317Z · score: 2 (1 votes) · LW · GW

Yes.

Curious why you think it's important?

I think that what's important is 1) the opportunity cost of the time, rather than the actual number of minutes, and 2) the fact that Elizabeth's work can be outsourced/parallelised at all, even if it takes others a bit longer than her.

Comment by jacobjacob on [Part 1] Amplifying generalist research via forecasting – Models of impact and challenges · 2020-01-05T09:33:35.760Z · score: 8 (2 votes) · LW · GW

It might interest you that there's quite a nice isomorphism between prediction markets and ordinary forecasting tournaments.

Suppose you have some proper scoring rule $s$ for predictions on outcome $X$. For example, in our experiment we used the logarithmic rule $s(p) = \log(p)$. Now suppose the $n$:th prediction is paid the difference between their score and the score of the previous participant: $s(p_n) - s(p_{n-1})$. Then you basically have a prediction market!

To make this isomorphism work, the prediction market must be run by an automated market maker which buys and sells at certain prices which are predetermined by a particular formula.

To see that, let $C(q)$ be the total cost of buying $q$ shares in some possibility (e.g. Yes or No). If the event happens, your payoff will be $q$ (we're assuming that the shares just pay $1 if the event happens and $0 otherwise). It follows that the cost of buying further shares -- the market price -- is $\partial C / \partial q$.

We require that the market prices can be interpreted as probabilities. This means that the prices for all MECE outcomes must sum to 1, i.e. $\sum_i \partial C / \partial q_i = 1$.

Now we set your profit from buying $x$ shares in the prediction market to be equal to your payout in the forecasting tournament, $s(p_n) - s(p_{n-1})$. Finally, we solve for $C$, which specifies how the automated market maker must make its trades. Different scoring rules will give you different $C$. For example, a logarithmic scoring rule will give: $C(q) = b \log \left( \sum_i e^{q_i / b} \right)$.

For more details, see page 54 in this paper, Section 5.3, "Cost functions and Market Scoring Rules".
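
Here's a toy numeric check of the isomorphism for the logarithmic rule (my own sketch, not code from the experiment; `b` is the market maker's liquidity parameter):

```python
import math

b = 100.0  # liquidity parameter of the automated market maker

def cost(q):
    """Hanson's LMSR cost function: C(q) = b * log(sum_i e^(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def prob_yes(q):
    """Market price dC/dq_0, interpretable as the probability of Yes."""
    return math.exp(q[0] / b) / sum(math.exp(qi / b) for qi in q)

q_old = [0.0, 0.0]                                      # [Yes, No] shares; p_Yes = 0.5
q_new = [q_old[1] + b * math.log(0.8 / 0.2), q_old[1]]  # moves p_Yes to 0.8
x = q_new[0] - q_old[0]                                 # Yes-shares bought

# If Yes happens: payout of $1 per share, minus what the shares cost...
profit_if_yes = x - (cost(q_new) - cost(q_old))

# ...equals the tournament payment s(p_n) - s(p_{n-1}) with s(p) = b * log(p).
print(profit_if_yes, b * (math.log(0.8) - math.log(0.5)))  # both ~47.00
```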

Comment by jacobjacob on [Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration · 2020-01-03T17:53:41.857Z · score: 2 (1 votes) · LW · GW

For the cost-effectiveness modelling, we estimated the time per claim as a lognormal with 90% confidence interval 10 to 70 minutes and mean of 30 minutes. This was based on survey data from participants.

Comment by jacobjacob on 2020's Prediction Thread · 2019-12-31T07:39:06.415Z · score: 11 (6 votes) · LW · GW

As a Schelling point, you can use this Foretold community which I made specifically for this thread.

Comment by jacobjacob on Perfect Competition · 2019-12-29T22:17:22.592Z · score: 7 (4 votes) · LW · GW

User feedback to Zvi: I skimmed the first half of the post and then quit because it seemed to be just reiterating standard Moloch stuff. But this comment excited me a lot and now makes me want to read the post again to get more detailed models behind Jim's summary.

Comment by jacobjacob on Perfect Competition · 2019-12-29T22:15:39.746Z · score: 2 (1 votes) · LW · GW

Given this (which I'd bet Zvi is well aware of), I'm quite confused about what the Disneyland sentence is supposed to mean; and I'd be curious for Zvi to clarify.

Comment by jacobjacob on jacobjacob's Shortform Feed · 2019-12-27T14:37:29.275Z · score: 14 (5 votes) · LW · GW

I made a Foretold notebook for predicting which posts will end up in the Best of 2018 book, following the LessWrong review.

You can submit your own predictions as well.

At some point I might write a longer post explaining why I think having something like "futures markets" on these things can create a more "efficient market" for content.

Comment by jacobjacob on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-22T21:17:18.786Z · score: 4 (2 votes) · LW · GW

I think that it's not just about having an easier time reverse engineering people's values from their votes. It might be deeper. Different rules might cause different equilibria/different proposals to win, etc. However I'm not sure and should probably just read the paper to find out the details.

Comment by jacobjacob on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-22T17:28:56.858Z · score: 6 (3 votes) · LW · GW

If you scale it by a constant $a$, that will happen (as the constant will just stick around in the derivative, and so you'll buy votes until marginal cost = marginal benefit, giving $x = V / 2a$, still proportional to your marginal benefit $V$).

If you were to use something like $x^3$, then each marginal vote would cost $3x^2$, and so you'd buy a number of votes such that $3x^2 = V$, i.e. $x = \sqrt{V/3}$ (where $V$ is your marginal benefit).

Some of the QV papers have uniqueness proofs that quadratic voting is the only voting scheme that satisfies some of their desiderata for optimality. I haven't read it and don't know exactly what it shows.

Comment by jacobjacob on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-22T17:22:11.206Z · score: 4 (2 votes) · LW · GW

Trying to allocate your budget truthfully in accordance with your preferences about posts != trying to game the rules as an unbounded EV-maximiser would.

Comment by jacobjacob on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-22T06:47:06.402Z · score: 13 (3 votes) · LW · GW

Robin Hanson makes a similar point here.

However, I'm not sure what sorts of collusion you're worried about for this round (but I haven't thought much about it)?

My understanding is that collusion in QV looks like:

  • 1. People hijacking what bills get put up for vote in order to bankrupt people who want to veto the bill
  • 2. People splitting their funding contributions across multiple fake identities in order to extract more subsidies
  • 3. People coordinating their votes with others (because rather than me buying x votes it's cheaper that I only buy x-y and "pay for that" by spending money on someone else's preferences)

1 and 2 won't be a problem for the review since you have a set number of voters with known identities, as well as a set number of posts to vote on. So I presume you're worried about vote trading as in 3?

Comment by jacobjacob on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-21T15:46:42.245Z · score: 6 (3 votes) · LW · GW

I don't expect everyone to vote strategically. In fact, I expect most users to act in good-faith and do their best. I still think these things can be a problem.

Comment by jacobjacob on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-21T14:08:34.954Z · score: 5 (3 votes) · LW · GW

I think a solution to this might be: instead of voting on what should be in the book, you decide on some subsidy pile of karma+money, and you use quadratic funding to decide how to allocate that pile to each post (and then give it to the author/eventual co-authors).

You might just include the top posts in this scheme in the book, also making sure to make their scores prominent (and perhaps using scores in other ways to allocate attention inside the book, e.g. how many comments you include).

It seems more plausible to me that under this scheme users' utility would be linear in the number of votes/amount of funding they allocate to posts.

Comment by jacobjacob on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-21T13:58:07.617Z · score: 4 (2 votes) · LW · GW

Don't think that would help -- instead of knowing the actual votes for post $X$, I would have some distribution over the votes cast for $X$, and my intuition is that as long as I have sufficient probability mass above the threshold, it would skew my incentives.

Comment by jacobjacob on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-21T13:48:24.856Z · score: 7 (4 votes) · LW · GW

Nitpick: quadratic voting and quadratic funding are technically different schemes. In the former you vote for specific bills that can either pass or not pass. In the latter you fund projects and your donations are matched according to a particular formula.

However, there is a close correspondence between them. One way to see it is as follows. Quadratic funding can be seen as a vote using the quadratic voting scheme on the following bill:

Voter 0 will distribute $X to a certain project.

and the quadratic funding subsidy formula is the maximum X for which Voter 0 will not pay to stop the bill.

In more detail: to prevent the bill, Voter 0 must buy more votes than all the other voters who voted against them. That is, $x_0 > \sum_i x_i$. If we use the cost-function $c(x) = x^2$, each of the other voters paid $c_i = x_i^2$ for their votes. This means that in order to prevent the bill from passing, Voter 0 must pay $x_0^2 = \left( \sum_i x_i \right)^2 = \left( \sum_i \sqrt{c_i} \right)^2$. But this is exactly the subsidy formula from quadratic funding.
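
A quick numeric check of that correspondence (my own toy example; the contribution amounts are made up):

```python
import math

contributions = [1.0, 4.0, 9.0]                  # c_i: dollars given by the other voters

# Under the cost function c(x) = x^2, a $c_i contribution buys x_i = sqrt(c_i) votes.
votes = [math.sqrt(c) for c in contributions]    # [1, 2, 3]

# To block the bill, Voter 0 must outvote everyone combined, paying (sum x_i)^2...
blocking_cost = sum(votes) ** 2                  # (1 + 2 + 3)^2 = 36

# ...which is the quadratic funding formula (sum sqrt(c_i))^2 for the project.
qf_amount = sum(math.sqrt(c) for c in contributions) ** 2
print(blocking_cost, qf_amount)                  # 36.0 36.0
```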

Comment by jacobjacob on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-21T12:56:28.532Z · score: 24 (6 votes) · LW · GW
  • An issue with the proposal is the failure of the assumption of utility of votes being proportional to number of votes.
    • It seems plausible to me that there's some threshold of votes above which a post will very likely end up in the book. If this is true, then I think my utility in buying more votes for that post would be ~linear up until that threshold, and then flat (ignoring potential down-voters).
      • If this is the case, I'd significantly understate my preference for this post, instead spending my points elsewhere.
        • So, for example, Embedded Agency might be undervalued by this scheme.
      • This point isn't relevant when thinking about quadratic voting for elections with millions of voters, since then it makes more sense to assume that the probability of passing a proposal is linear in the number of votes I can influence.
      • Moreover, this would lead to weird equilibrium dynamics...
        • If I've voted for a post that gets above the threshold, then I want to remove my votes and place them elsewhere. If I don't do this, but other users do, then I am effectively subsidising their preferences.
        • I don't know how this would pan out, and can see it messing things up as everyone tries to model everyone else and be clever.
      • It seems like an open numerical question whether this issue would be relevant for the current round (i.e. whether the utility of most users would be linear in the region of influence they could expect with their votes):
        • Here are some numbers that I wrote down but now don't really know how to take this further. Under Ben's initial scheme, each user can buy at most ~32 votes per post, by spending all their money. There are ~500 voting users. Which gives an upper bound of ~16000 votes for a post. There are ~75 nominated posts, out of which ~25 will end up in the book. If all users distribute their votes uniformly, we'd have about ~3.65 votes per user per post, and ~1800 votes per post. Let's handwave and say that with 7000+ votes a post is as good as guaranteed for the book.
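
A quick sanity check of those numbers (assuming, as the ~32-votes figure implies, a budget of 1000 voting credits per user):

```python
import math

budget = 1000   # assumed voting credits per user (implies max votes ~ sqrt(1000) ~ 31.6)
users, posts = 500, 75

max_votes_one_user = math.sqrt(budget)        # ~31.6 votes, spending everything on one post
upper_bound = users * max_votes_one_user      # ~16,000 votes for a single post

uniform_credits = budget / posts              # ~13.3 credits per post if spread uniformly
votes_per_user = math.sqrt(uniform_credits)   # ~3.65 votes per user per post
votes_per_post = users * votes_per_user       # ~1,800 votes per post

print(round(upper_bound), round(votes_per_user, 2), round(votes_per_post))
```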
Comment by jacobjacob on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-21T12:01:02.976Z · score: 21 (7 votes) · LW · GW

Here's an intuition for why it's important that it's quadratic (based on standard microeconomic reasoning).

By spending your votes, you pay some cost, and get some benefit.

The cost consists in: voting credits, time, attention, energy, reputation costs if you have odd views...

The benefit consists in: higher probability your post-of-choice ends up in the book, which comes with a host of externalities like further influence of your values and epistemics on readers of the book.

As long as cost < benefit, you want to keep voting (otherwise you'd be leaving benefits on the table). You'll do this until the cost of the last vote = benefit of the last vote.

If your total cost of voting is $x^2$, then the cost of each marginal vote is $2x$. By the above reasoning, you'll stop voting when your marginal benefit $V$ = marginal cost = $2x$, i.e. when $x = V/2$.

Hence, your distribution of votes across options will measure how much you value each option being included in the book.

(After I'd written this I found this blog post by Vitalik which explains it even better.)
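
A tiny sketch of that marginal reasoning in code (my own illustration; the benefit values are invented):

```python
# Under total cost x^2, the marginal cost of the x-th vote is 2x, so a voter
# with constant marginal benefit V per vote stops buying at 2x = V, i.e. x = V/2.
def optimal_votes(marginal_benefit: float) -> float:
    return marginal_benefit / 2

# Vote counts therefore come out *proportional* to how much each voter
# values each post being included in the book:
values = {"Post A": 8.0, "Post B": 2.0, "Post C": 1.0}
print({post: optimal_votes(v) for post, v in values.items()})
# {'Post A': 4.0, 'Post B': 1.0, 'Post C': 0.5}
```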

Comment by jacobjacob on (Feedback Request) Quadratic voting for the 2018 Review · 2019-12-21T11:14:00.251Z · score: 5 (2 votes) · LW · GW

I assumed Tenoke was referring to the stated plan in the initial review post:

The details of this are still being fleshed out, but the current plan is:
Users with 1000+ karma rate each post on a 1-10 scale, with 6+ meaning "I'd be happy to see this included in the 'best of 2018'" roundup, and 10 means "this is the best I can imagine"
Comment by jacobjacob on [Part 1] Amplifying generalist research via forecasting – Models of impact and challenges · 2019-12-21T08:11:37.386Z · score: 1 (2 votes) · LW · GW

I'm afraid I don't understand your question, could you clarify?

Comment by jacobjacob on We run the Center for Applied Rationality, AMA · 2019-12-20T10:30:28.854Z · score: 36 (14 votes) · LW · GW

Is there something you find yourselves explaining over and over again in person, and that you wish you could just write up in an AMA once and for all where lots of people will read it, and where you can point people to in future?

Comment by jacobjacob on jacobjacob's Shortform Feed · 2019-12-13T08:24:18.201Z · score: 6 (3 votes) · LW · GW

1. I did think about that when I wrote it, and it's a bit strong. (I set myself a challenge to write and publish this in 15 minutes, so didn't spend any more time optimising the title.) Other recommendations welcome. Thinking about the actual claim though, I find myself quite confident that something in this direction is right. (A larger uncertainty is whether it is the best thing for us to sink resources into, compared to other interventions.)

2. Agree that there seems to be lots of black-box wisdom embedded in the institutions and practices of religions, and could be cool to try to unwrap it and import some good lessons.

I will note though that there's a difference between:

  • the Sunday sermon thing (which to me seems more useful for building common knowledge, community, and a sense of mission and virtue).
  • the gym idea, which is much more about deliberate practice, starting from wherever you're currently at
Comment by jacobjacob on jacobjacob's Shortform Feed · 2019-12-13T08:18:55.928Z · score: 2 (1 votes) · LW · GW

I haven't been to a dojo (except briefly as a kid) so don't have a clear model what it's about.

Not sure how I feel about the part on "you must face off against an opponent, and you run the risk of getting hurt". I think I disagree, and might write up why later.

Comment by jacobjacob on jacobjacob's Shortform Feed · 2019-12-12T18:08:42.097Z · score: 2 (1 votes) · LW · GW

Thanks for describing that! Some questions:

1) What are some examples of what "practicing CFAR techniques" looks like?

2) To what extent are dojos expected to do "new things" vs. repeated practice of a particular thing?

For example, I'd say there's a difference between a gym and a... marathon? match? I think there's more of the latter in the community at the moment: attempting to solve particular bugs using whatever means are necessary.

Comment by jacobjacob on jacobjacob's Shortform Feed · 2019-12-12T16:02:44.653Z · score: 2 (1 votes) · LW · GW

I didn't know about weekly dojos and have never attended any, that sounds very exciting. Tell me more about what happens at the Berlin weekly dojo events?

Also, to clarify, I meant both "pubs" and "gyms" metaphorically -- i.e. lots of what happens on LessWrong is like a pub in the above sense, whereas other things, like the recent exercise prize, is like a gym.

Comment by jacobjacob on jacobjacob's Shortform Feed · 2019-12-12T12:34:36.262Z · score: 25 (9 votes) · LW · GW

Rationality has pubs; we need gyms

Consider the difference between a pub and a gym.

You go to a pub with your rationalist friends to:

  • hang out
  • discuss interesting ideas
  • maybe maths a bit in a notebook someone brought
  • gossip
  • get inspired about the important mission you're all on
  • relax
  • brainstorm ambitious plans to save the future
  • generally have a good time

You go to a gym to:

  • exercise
  • that is, repeat a particular movement over and over, paying attention to the motion as you go, being very deliberate about using it correctly
  • gradually trying new or heavier moves to improve in areas you are weak in
  • maybe talk and socialise -- but that is secondary to your primary focus of becoming stronger
  • in fact, it is common knowledge that the point is to practice, and you will not get socially punished for trying really hard, or stopping a conversation quickly and then just focusing on your own thing in silence, or making weird noises or grunts, or sweating... in fact, this is all expected
  • not necessarily have a good time, but invest in your long-term health, strength and flexibility

One key distinction here is effort.

Going to a pub is low effort. Going to a gym is high effort.

In fact, going to the gym requires such a high effort that most people have a constant nagging guilt about doing it. They proceed to set up accountability systems with others, hire personal trainers, use habit-installer apps, buy gym memberships as commitment devices, use clever hacks to always have their gym bag packed and ready to go, introspect on their feelings of anxiety about it and try to find workarounds or sports which suit them, and so forth...

People know gyms are usually a schlep, yet they also know going there is important, so they accept that they'll have to try really hard to build systems which get them exercising.

However, things seem different for rationality. I've often heard people go "this rationality stuff doesn't seem very effective, people just read some blog posts and go to a workshop or two, but don't really seem more effective than other mortals".

But we wouldn't be surprised if someone said "this fitness stuff doesn't seem very effective, some of my friends just read some physiology bloggers and then went to a 5-day calisthenics bootcamp once, but they're not in good shape at all". Of course they aren't!

I think I want to suggest two improvements:

1) On the margin, we should push more for a cultural norm of deliberate practice in the art of rationality.

It should be natural to get together with your friends once a week and use OpenPhil's calibration app, do Thinking Physics problems, practice CFAR techniques, etc...

2) But primarily: we build gyms.

Gyms are places where hundreds of millions of dollars of research have gone into designing equipment specifically allowing you to exercise certain muscles. There are also changing rooms to help you get ready, norms around how much you talk (or not) to help people focus, personal trainers who can give you advice, saunas and showers to help you relax afterwards...

For rationality, we basically have nothing like this [1]. Each time you want to practice rationality, you basically have to start by inventing your own exercises.

[1] The only example I know of is Kocherga, which seems great. But I don't know a lot about what they're doing, and ideally we should have rationality gyms either online or in every major hub, not just Moscow.

Comment by jacobjacob on ozziegooen's Shortform · 2019-12-11T16:55:38.609Z · score: 5 (3 votes) · LW · GW

In some sense, markets have a particular built-in interpretability: for any trade, someone made that trade, and so there is at least one person who can explain it. And any larger market move is just a combination of such smaller trades.

This is different from things like the huge recommender algorithms running YouTube, where it is not the case that for each recommendation, there is someone who understands that recommendation.

However, the above argument fails in more nuanced cases:

  • Just because for every trade there's someone who can explain it, doesn't mean that there is a particular single person who can explain all trades
  • Some trades might be made by black-box algorithms
  • There can be weird "beauty contest" dynamics where two people do something only because the other person did it
Comment by jacobjacob on Applications of Economic Models to Physiology? · 2019-12-11T15:08:34.473Z · score: 7 (4 votes) · LW · GW

IIRC neuroeconomics is quite different: it studies how humans make and represent economic decisions (e.g. "we've found an fMRI signal in the orbitofrontal cortex that's correlated with the expected value of this decision"), which is different from modelling the internal physiological functions of a body as an entire economy with various supply chains and equilibrium states.