Comment by jacobjacob on How important is it that LW has an unlimited supply of karma? · 2019-02-12T10:37:09.695Z · score: 3 (2 votes) · LW · GW

In the broader economy, it's not the case that "If buying things reduced your income, people stop buying things, and eventually money stops flowing altogether".

So the only way that makes sense to me is if you model content as a public good which no individual user is incentivised to help maintain.

Speculatively, this might be avoided if votes were public: voting would then be a costly signal of one's epistemic values, among other things.

Comment by jacobjacob on How important is it that LW has an unlimited supply of karma? · 2019-02-12T10:32:22.489Z · score: 1 (1 votes) · LW · GW
though I'm not sure how that is calculated from one's karma

I believe it's proportional to the log of your user karma. But I'm not sure.

One can get high karma from a small amount of content, provided a small number of sufficiently high-karma users strong-upvote it.

There is still an incentive gradient towards "least publishable units".

Suppose you have a piece of work worth 18 karma to high-karma user U. However, U's strong upvote is only worth 8 karma.

If you just post one piece of work, you get 8 karma. If you split your work into three pieces, each of which U values at 6 karma, you're better off. U might strong-upvote all of them (they'd rather allocate a little too much karma than way too little), and you get 24 karma.
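To make that arithmetic concrete, here's a minimal sketch in Python (my own toy model; the normal-upvote weight of 2 is a made-up assumption, only the strong-upvote weight of 8 comes from the example):

VOTE_OPTIONS = [0, 2, 8]  # no vote, normal upvote, strong upvote for user U

def karma_awarded(value_to_U):
    # U rounds *up* to the nearest available vote weight: they'd rather
    # allocate a little too much karma than way too little
    large_enough = [v for v in VOTE_OPTIONS if v >= value_to_U]
    return min(large_enough) if large_enough else max(VOTE_OPTIONS)

print(karma_awarded(18))                        # one post worth 18 to U -> 8 karma
print(sum(karma_awarded(6) for _ in range(3)))  # three pieces worth 6 each -> 24 karma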

To extend the metaphor in the original question: maybe if the world economy ran on the equivalent of strong upvotes, there would still be cars around, yet no one could buy airplanes.

Comment by jacobjacob on How important is it that LW has an unlimited supply of karma? · 2019-02-12T10:17:48.260Z · score: 1 (1 votes) · LW · GW

Do you have details on when and why that was removed? Or past posts discussing that system?

How important is it that LW has an unlimited supply of karma?

2019-02-11T01:41:51.797Z · score: 28 (10 votes)
Comment by jacobjacob on The Case for a Bigger Audience · 2019-02-10T00:21:08.240Z · score: 5 (3 votes) · LW · GW

I was going to say 2000 times sounded like way too much. But running the guesstimate, that means on average using "common knowledge" once every other day since it was published, and "out to get you" once every third day, and that does seem consistent with my experience hanging out with you (though of course with a fat tail to the distribution: some concepts get used like 10 times in a single long hangout).

Comment by jacobjacob on When should we expect the education bubble to pop? How can we short it? · 2019-02-10T00:06:57.591Z · score: 20 (9 votes) · LW · GW

Asset bubbles can be Nash equilibria for a while. This is a really important point. If surrounded by irrational agents, it might be rational to play along with the bubble instead of shorting and waiting. "The market can stay irrational longer than you can stay solvent."

For most of 2017, you shouldn't have shorted crypto, even if you knew it would eventually go down. The rising market and the interest on your short would have killed you. It might take big hedge funds with really deep liquidity to ride out the bubble, and even they might not be able to make it if they get in too early. In 2008 none of the investment banks could short things early enough, because no one else was doing it.

The difference between genius (shorting at the peak) and really smart (shorting pre-peak) matters a lot in markets. (There's this scene in the Big Short where some guy covers the cost of his BBB shorts by buying a ton of AAA-rated stuff, assuming that at least those will keep rising.)

So shorting and buying are not symmetric (as you might treat them in a mathematical model, only differing by the sign on the quantity of assets bought). Shorting is much harder and much more dangerous.

In fact, my current model [1] is that this is the very reason financial markets can exhibit bubbles of "irrationality" despite all their beautiful properties of self-correction and efficiency.

[1] For transparency, I basically downloaded this model from davidmanheim.

When should we expect the education bubble to pop? How can we short it?

2019-02-09T21:39:10.918Z · score: 40 (11 votes)
Comment by jacobjacob on X-risks are a tragedies of the commons · 2019-02-09T21:00:03.333Z · score: 5 (3 votes) · LW · GW

In case others haven't seen it, here's a great little matrix summarising the classification of goods on "rivalry" and "excludability" axes.

Comment by jacobjacob on (notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach · 2019-02-05T10:07:42.749Z · score: 3 (2 votes) · LW · GW

Hanson's speed-weighted voting reminds me a bit of quadratic voting.

Comment by jacobjacob on What are some of bizarre theories based on anthropic reasoning? · 2019-02-05T09:40:54.184Z · score: 3 (2 votes) · LW · GW

I presume that, unlike X-risk, s-risks don't remove the vast majority of observer moments.

Comment by jacobjacob on Announcement: AI alignment prize round 4 winners · 2019-01-22T16:56:42.368Z · score: 12 (5 votes) · LW · GW

I disagree with the view that it's bad to spend the first few months awarding prizes to top researchers who would have done the work anyway. This is, _in and of itself_, clearly burning cash, but the point is to change incentives over a longer time-frame.

If you think research output is heavy-tailed, what you should expect to observe is something like this happening for a while, until promising tail-end researchers realise there's a stable stream of value to be had here, and put in the effort required to level up and contribute themselves. It's not implausible to me that this would take more than a year of prizes.

Expecting lots of important counterfactual work that beats the current best work to come out of the woodwork within ~6 months seems to assume that A) making progress on alignment is quite tractable, and B) the ability to do so is fairly widely distributed across people; both to a seemingly unjustified extent.

I personally think prizes should be announced together with precommitments to keep delivering them for a non-trivial amount of time. I believe this because I think changing incentives involves changing expectations, in a way that changes medium-term planning. I expect people to have qualitatively different thoughts if their S1 reliably believes that fleshing out the-kinds-of-thoughts-that-take-6-months-to-flesh-out will be rewarded after those 6 months.

That's expensive, in terms of both money and trust.

Comment by jacobjacob on What are good ML/AI related prediction / calibration questions for 2019? · 2019-01-16T17:56:20.563Z · score: 10 (3 votes) · LW · GW

elityre has done work on this for BERI, suggesting >30 questions.

Regarding the question metatype, Allan Dafoe has offered a set of desiderata in the appendix to his AI governance research agenda.

Comment by jacobjacob on Why is so much discussion happening in private Google Docs? · 2019-01-12T11:17:04.531Z · score: 11 (5 votes) · LW · GW

If true, that sounds like a bug and not a feature of LW.

Comment by jacobjacob on What is a reasonable outside view for the fate of social movements? · 2019-01-10T23:36:20.535Z · score: 3 (2 votes) · LW · GW

So: habryka did say "anyone" in the original description, so he will pay both respondents who completed the bounty according to the original specifications (which thereby excludes gjm). I will only pay Radamantis, as I interpreted him as "claiming" the task with his original comment.

I suggest you pm with payment details.

Comment by jacobjacob on What is a reasonable outside view for the fate of social movements? · 2019-01-10T23:10:03.202Z · score: 1 (1 votes) · LW · GW

I'll PM habryka about what to do with the bounty given that there were two respondents.

Overall I'm excited this data and analysis were generated, and I will sit down to take a look and update this weekend. :)

Comment by jacobjacob on What is a reasonable outside view for the fate of social movements? · 2019-01-07T19:37:29.714Z · score: 2 (2 votes) · LW · GW

What's your "reasonable sounding metric" of success?

Comment by jacobjacob on What is a reasonable outside view for the fate of social movements? · 2019-01-07T02:44:56.319Z · score: 19 (6 votes) · LW · GW

I add $30 to the bounty.

There are 110 items in the list. So 25% is ~28.

I hereby set the random seed as whatever will be the last digit and first two decimals (3 digits total) of the S&P 500 Index price on January 7, 10am GMT-5, as found in the interactive chart by Googling "s&p500".

For example, the value of the seed on 10am January 4 was "797".

[I would have used the NIST public randomness beacon (v2.0) but it appears to be down due to government shutdown :( ].

Instructions for choosing the movements

Let the above-generated seed be n.

Using Python 3:

import random

random.seed(n)  # n is the seed generated above
# sample 28 of the 110 list indices (1..110), without replacement
indices = sorted(random.sample(range(1, 111), 28))
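And, for completeness, a hypothetical helper (my own sketch; the function name and the example price are made up) for turning a quoted price into the 3-digit seed, i.e. the last digit of the integer part followed by the first two decimals:

def seed_from_price(price):
    # e.g. a price quoted as "1234.56" gives the seed 456
    whole, decimals = price.split(".")
    return int(whole[-1] + decimals[:2])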
Comment by jacobjacob on Does anti-malaria charity destroy the local anti-malaria industry? · 2019-01-06T00:41:19.707Z · score: 3 (2 votes) · LW · GW

I'm confused: why doesn't variability cause any trouble in the standard models? It seems that if producers are risk-averse, it results in less production than otherwise.

What is a reasonable outside view for the fate of social movements?

2019-01-04T00:21:20.603Z · score: 36 (12 votes)
Comment by jacobjacob on Reinterpreting "AI and Compute" · 2018-12-27T16:17:15.242Z · score: 1 (1 votes) · LW · GW

I'm confused. Do you mean "worlds" as in "future trajectories of the world" or as in "subcommunities of AI researchers"? And what's a concrete example of gains from trade between worlds?

Comment by jacobjacob on Against Tulip Subsidies · 2018-12-20T05:10:52.381Z · score: 1 (1 votes) · LW · GW

The link to "many rigorous well-controlled studies" is broken.

List of previous prediction market projects

2018-10-22T00:45:01.425Z · score: 33 (9 votes)
Comment by jacobjacob on Some cruxes on impactful alternatives to AI policy work · 2018-10-13T21:49:30.228Z · score: 10 (3 votes) · LW · GW

Suppose your goal is not to maximise an objective, but just to cross some threshold. This is plausibly the situation with existential risk (e.g. "maximise probability of okay outcome"). Then, if you're above the threshold, you want to minimise variance, whereas if you're below it, you want to maximise variance. (See this for a simple example of this strategy applied to a game.) If Richard believes we are currently above the x-risk threshold and Ben believes we are below it, this might be a simple crux.
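As a minimal sketch of that logic (my own toy model in Python, not anyone's actual x-risk estimates): treat the outcome as your current position plus noise whose spread you get to choose, and count how often you end up above the threshold.

import random

def p_success(start, threshold, spread, trials=100_000):
    # estimate P(outcome >= threshold) when outcome = start + uniform noise in [-spread, +spread]
    hits = sum(start + random.uniform(-spread, spread) >= threshold
               for _ in range(trials))
    return hits / trials

for start in (+1.0, -1.0):     # starting above vs below the threshold (at 0)
    for spread in (0.5, 5.0):  # low-variance vs high-variance strategy
        print(start, spread, round(p_success(start, 0.0, spread), 2))

Above the threshold the low-variance strategy wins (certain success vs ~0.6); below it, the high-variance strategy is the only one with any chance at all (~0.4 vs 0).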

Comment by jacobjacob on Some cruxes on impactful alternatives to AI policy work · 2018-10-13T18:24:10.107Z · score: 17 (8 votes) · LW · GW
However, I think the distribution of success is often very different from the distribution of impact, because of replacement effects. If Facebook hadn't become the leading social network, then MySpace would have. If not Google, then Yahoo. If not Newton, then Leibniz (and if Newton, then Leibniz anyway).

I think this is less true for startups than for scientific discoveries, because of bad Nash equilibria stemming from founder effects. The objective which Google is maximising might not be concave. It might have many peaks, and which one you reach might be determined quite arbitrarily. Yet the peaks might have very different consequences when you have a billion users.

For lack of a concrete example... suppose a webapp W uses feature x, and this influences which audience uses the app. Then, once W has scaled and depends on that audience for substantial profit, it can't easily change x. (It might be that changing x to y wouldn't decrease profit, but just not increase it.) Yet, had they initially used y instead of x, they could have grown just as big, but with a different audience. Moreover, because of network effects and returns to scale, it might not be possible for a rival company to build its own webapp that is basically the same thing but with y instead of x.

Comment by jacobjacob on Four kinds of problems · 2018-09-20T18:30:10.125Z · score: 3 (2 votes) · LW · GW

I don't want to base my argument on that video. It's based on the intuitions I developed doing my BA in philosophy at Oxford. I expect to be able to find better examples, but don't have the energy to do that now. This should be read more as "I'm pointing at something that others who have done philosophy might also have experienced", rather than "I'm giving a rigorous defense of the claim that even people outside philosophy might appreciate".

Comment by jacobjacob on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-10T20:50:33.109Z · score: 1 (1 votes) · LW · GW

That still seems too vague to be useful. I don't have the slack to do the work of generating good examples myself at the moment.

Comment by jacobjacob on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-08T23:20:35.268Z · score: 16 (8 votes) · LW · GW

This would have been more readable if you gave concrete examples of each kind of problem. It seems like your claim might be a useful dichotomy, but in its current state it's likely not going to cause me to analyse problems differently or take different actions.

Comment by jacobjacob on Quick Thoughts on Generation and Evaluation of Hypotheses in a Community · 2018-09-07T12:25:50.197Z · score: 16 (6 votes) · LW · GW

Somewhat tangential, but...

You point to the following process:

Generation --> Evaluation --> Acceptance/rejection.

However, generation is often risky, and not everyone has the capacity to absorb that risk. For example, one might not have the exploratory space needed to pursue spontaneous 5-hour reading sprints while working full-time.

Hence, I think much of society looks like this:

Justification --> Evaluation --> Generation --> Evaluation --> Acceptance/rejection.

I think some very important projects never happen because the people who have taken all the inferential steps necessary to understand them are not the same people as those evaluating them, and so there's an information asymmetry.

Here's PG:

That's one of California's hidden advantages: the mild climate means there's lots of marginal space. In cold places that margin gets trimmed off. There's a sharper line between outside and inside, and only projects that are officially sanctioned — by organizations, or parents, or wives, or at least by oneself — get proper indoor space. That raises the activation energy for new ideas. You can't just tinker. You have to justify.

(This is one of the problems I model impact certificates as trying to solve.)

Comment by jacobjacob on Musings on LessWrong Peer Review · 2018-09-06T22:45:15.597Z · score: 5 (3 votes) · LW · GW

Have you written any of the upcoming posts yet? If so, can you link them?

Comment by jacobjacob on History of the Development of Logical Induction · 2018-09-02T00:07:24.914Z · score: 5 (3 votes) · LW · GW

How causally important were Dutch-book theorems in suggesting to you that market behaviour could be used for logical induction? This seems like the most "non sequitur" part of the story: suddenly, what seems like a surprising insight was just there.

I predict somewhere between "very" and "crucially" important.

Four kinds of problems

2018-08-21T23:01:51.339Z · score: 41 (19 votes)
Comment by jacobjacob on Y Couchinator · 2018-08-21T22:01:35.348Z · score: 20 (8 votes) · LW · GW
I am surprised how much free energy I was able to give people [to stay at my place]

That seems like it might be one of the secrets AirBnB was built on.

Comment by jacobjacob on Brains and backprop: a key timeline crux · 2018-03-13T17:12:05.884Z · score: 10 (2 votes) · LW · GW
I don't understand why can't you just have some neurons which represent the former, and some neurons which represent the latter?

Because people thought you needed the same weights to 1) transport the gradients back, and 2) send the activations forward. Having two distinct networks with the same topology and getting their weights to match was known as the "weight transport problem". See Grossberg, S. (1987). Competitive learning: From interactive activation to adaptive resonance. Cognitive Science 11(1): 23–63.
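For intuition, here's a minimal sketch of where the same weights show up in both passes (plain backprop through one linear layer; my own illustration, not taken from that paper):

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))  # forward weights of one layer
x = rng.normal(size=5)

h = W @ x                    # forward pass: activations sent forward through W
grad_h = rng.normal(size=3)  # some error signal arriving from the layer above
grad_x = W.T @ grad_h        # backward pass: gradients transported back through the *same* W (transposed)
# A biologically separate feedback pathway would need its own weights B kept equal to W.T;
# keeping B in sync with W is the weight transport problem.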

Do you have any particular source for dropout being replaced by batch normalisation, or is it an impression from the papers you've been reading?

The latter.

Comment by jacobjacob on Brains and backprop: a key timeline crux · 2018-03-11T12:30:05.310Z · score: 4 (1 votes) · LW · GW

Thanks, I'm glad you found the framing useful.

Comment by jacobjacob on Brains and backprop: a key timeline crux · 2018-03-11T12:27:34.599Z · score: 6 (2 votes) · LW · GW

Yes, there is a poorly understood phenomenon whereby action potentials sometimes travel back through the dendrites preceding them. This is insufficient for ML-style backprop because it rarely happens across more than one layer.

Comment by jacobjacob on Brains and backprop: a key timeline crux · 2018-03-11T00:30:12.791Z · score: 17 (4 votes) · LW · GW

Thanks for taking the time to write that up.

I updated towards a "fox" rather than "hedgehog" view of what intelligence is: you need to get many small things right, rather than one big thing. I'll reply later if I feel like I have a useful reply.

Comment by jacobjacob on Brains and backprop: a key timeline crux · 2018-03-11T00:16:30.157Z · score: 4 (1 votes) · LW · GW

The thing is now in LaTeX! Beautiful!

Comment by jacobjacob on Brains and backprop: a key timeline crux · 2018-03-10T20:41:57.761Z · score: 10 (2 votes) · LW · GW

It seems very implausible to me that the brain would use evolutionary strategies, as it's not clear how it could try a sufficiently large number of parameter settings without any option for parallelisation, or store and then choose among previous configurations.

Comment by jacobjacob on Brains and backprop: a key timeline crux · 2018-03-10T12:09:09.214Z · score: 21 (4 votes) · LW · GW

In order for me to update on this it would be great to have concrete examples of what does and does not constitute "nontrivial theoretical insights" according to you and Paul.

E.g. what was the insight from the 1980s? And what part of the AG(Z) architecture did you initially consider nontrivial?

Comment by jacobjacob on Brains and backprop: a key timeline crux · 2018-03-10T11:57:56.375Z · score: 21 (4 votes) · LW · GW

I'm looking forward to reading that post.

Yes, it seems right that gradient descent is the key crux. But I'm not familiar with any efficient way of doing it that the brain might implement, apart from backprop. Do you have any examples?

Brains and backprop: a key timeline crux

2018-03-09T22:13:05.432Z · score: 88 (23 votes)
Comment by jacobjacob on God Help Us, Let’s Try To Understand Friston On Free Energy · 2018-03-09T17:38:58.775Z · score: 10 (2 votes) · LW · GW

It would be interesting if anyone knows of historical examples where someone had a key insight, but nonetheless fulfilled your "emperor has no clothes" criteria.

Comment by jacobjacob on God Help Us, Let’s Try To Understand Friston On Free Energy · 2018-03-09T17:31:58.675Z · score: 19 (4 votes) · LW · GW

I'm confused so I'll comment a dumb question hoping my cognitive algorithms are sufficiently similar to other LW:ers, such that they'll be thinking but not writing this question.

"If I value apples at 3 units and oranges at 1 unit, I don't want at 75%/25% split. I only want apples, because they're better! (I have no diminishing returns.)"

Where does this reasoning go wrong?

Comment by jacobjacob on Arguments about fast takeoff · 2018-03-07T21:09:34.491Z · score: 16 (4 votes) · LW · GW

I'm confused. Moore's law, GDP growth etc. are linear in log-space. That graph isn't. Why are these spaces treated as identical for the purposes of carrying the same important and confusing intuition? (E.g. I imagine one could find lots of weird growth functions that are linear in some space. Why are they all important?)
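(To spell out what I mean by "linear in log-space": constant-rate exponential growth, which is roughly what Moore's law and GDP growth describe, becomes a straight line after taking logs: $x(t) = x_0 e^{g t}$ implies $\log x(t) = \log x_0 + g t$.)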

Comment by jacobjacob on Kenshō · 2018-01-28T19:15:04.920Z · score: 6 (2 votes) · LW · GW

Tim Ferris says some kind of meditation is one of the most common habits he finds in the people he interviews (regardless of whether it's actually listening to a Headspace episode, or a runner just repeating the very same song throughout the entire 1h run). E.g. this (haven't read, took me 5 sec of googling, but seems fine).

Also Ray Dalio says transcendental meditation is one of the key things that enabled him to cope emotionally with making mistakes and being wrong, and then building principles for never making the same kinds of mistakes again. He writes about that in Principles and talks about it here.

Comment by jacobjacob on A model I use when making plans to reduce AI x-risk · 2018-01-22T22:56:04.449Z · score: 17 (4 votes) · LW · GW

The mere fact that an x-risk hasn't occurred is not evidence that it has been well managed, because that's the only possible state you could observe (if it weren't true, you wouldn't be around). Then again, nuclear war is a GCR rather than a full x-risk, so the anthropics might not be that bad.

On another note, if the nuclear situation is what it looks like when humanity "manages" an x-risk, I think we're in a pretty dire state...

Comment by jacobjacob on Field-Building and Deep Models · 2018-01-14T12:57:35.519Z · score: 9 (2 votes) · LW · GW

I now understand the key question as being "what baseline of inferential distance should we expect all orgs to have reached?". Should they all have internalised deep security mindset? Should they have read the Sequences? Or does it suffice that they've read Superintelligence? Or that they have a record of donating to charity? And so forth.

Ben seems to think this baseline is much higher than Albert does. That's why he is happy to support Paul's agenda: it agrees on most of the non-mainstream moving parts that also go into MIRI's agenda, whereas orgs working on algorithmic bias, say, lack most of those. Now, in order to settle the debate, we can't really push Ben to explain why all the moving parts of his baseline are correct -- that's essentially the voice of Pat Modesto. He might legitimately be able to offer no better explanation than that it's a combined model built through years of thinking, reading, trying and discussing. But this also makes it difficult to settle the disagreement.

Comment by jacobjacob on Field-Building and Deep Models · 2018-01-13T22:56:50.422Z · score: 21 (6 votes) · LW · GW

What a great post! Very readable, concrete, and important. Is it fair to summarize it in the following way?

A market/population/portfolio of organizations solving a big problem must have two properties:

1) There must not be too much variance within the organizations.

This makes sure possible solutions are explored deeply enough. This is especially important if we expect the best solutions to seem bad.

2) There must not be too little variance among the organizations.

This makes sure possible solutions are explored widely enough. This is especially important if we expect the impact of solutions to be heavy-tailed.

Speculating a bit, evolution seems to do it this way. In order to move around there are wings, fins, legs and crawling bodies. But it's not like dog babies randomly get born with locomotive capacities selected from those, or mate with species having other capacities.

The final example you give, of top AI researchers trading models with people in the community, seems a great illustration of this. People build their own deep models, but occasionally bounce them off each other, just to inject the right amount of additional variance.

Comment by jacobjacob on Comments on Power Law Distribution of Individual Impact · 2017-12-30T21:52:14.424Z · score: 13 (3 votes) · LW · GW

Hm this is an update... I'll have to think more about it. (The "added" section actually provided most of the force (~75%) behind my update. It's great that you provided causal reasons for your beliefs.)

Comment by jacobjacob on Comments on Power Law Distribution of Individual Impact · 2017-12-30T21:09:37.777Z · score: 4 (1 votes) · LW · GW

Have you considered/do you know more about RQ?

"Professor Stanovich and colleagues had large samples of subjects (usually several hundred) complete judgment tests like the Linda problem, as well as an I.Q. test. The major finding was that irrationality — or what Professor Stanovich called “dysrationalia” — correlates relatively weakly with I.Q.

[...]

Based on this evidence, Professor Stanovich and colleagues have introduced the concept of the rationality quotient, or R.Q. If an I.Q. test measures something like raw intellectual horsepower (abstract reasoning and verbal ability), a test of R.Q. would measure the propensity for reflective thought — stepping back from your own thinking and correcting its faulty tendencies.

There is also now evidence that rationality, unlike intelligence, can be improved through training. [...]"

https://www.nytimes.com/2016/09/18/opinion/sunday/the-difference-between-rationality-and-intelligence.html

Comment by jacobjacob on Comments on Power Law Distribution of Individual Impact · 2017-12-30T21:02:25.583Z · score: 4 (1 votes) · LW · GW

Really great discussion here, on an important and action-guiding question.

I'm confused about some of the discussion of predicting impact.

If we're dealing with a power law, then most of the variance in impact comes from a handful of samples. So if you're using a metric like "contrarianness + conscientiousness" that corresponds to an exceedingly rare trait, it might look like your predictions are awful, because thousands of CEOs and career executives who are successful by common standards lack that trait. However, as long as you get Musk and a handful of others right, you will have correctly predicted most of the impact, despite missing most of the successful people. What matters is not how many data points you get right, but which ones.

Similarly, were it the case that one or two tail-end individuals (like Warren Buffett) scored within 2 standard deviations of the mean on IQ, that would make IQ a substantially worse metric for predicting who will have the most impact. I haven't found any such individual, but I think doing so would suffice to discredit some of the psychometric study conclusions, as long as they didn't include that particular individual (which they likely didn't).
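A toy simulation of the first point (my own sketch, with made-up parameters, just to show how lopsided a heavy tail can be):

import random

random.seed(0)
# heavy-tailed "impact" scores for 10,000 individuals (Pareto tail, shape 1.1)
impacts = sorted((random.paretovariate(1.1) for _ in range(10_000)), reverse=True)

total = sum(impacts)
print(f"Top 10  (0.1%) share of total impact: {sum(impacts[:10]) / total:.0%}")
print(f"Top 100 (1%)   share of total impact: {sum(impacts[:100]) / total:.0%}")

In runs like this a tiny fraction of individuals accounts for a wildly disproportionate share of the total, so a metric that catches them correctly "predicts" much of the impact even while mislabeling thousands of conventionally successful people.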

Comment by jacobjacob on Unjustified ideas comment thread · 2017-11-28T00:00:00.331Z · score: 15 (4 votes) · LW · GW

LW2.0 is trying to solve the problem of making intellectual progress online. As Oli pointed out above, creativity and original thinking are a bottleneck. However, I believe there are much better ways of supporting this than having threads of the kind "let's just get together and be creative and throw out all our half-baked ideas".

Via metaphor: if we want more black-swan startups, we must increase the variance in startups that get funded. If you had $1 million to try to achieve this, I think you'd be better off providing actual seed funding to just 2-3 projects that are actually creative, thereby changing norms and creating incentives for creativity, as opposed to funding regular meetings for random people to jot down and discuss whatever unfinished ideas are on their mind.

Hence, to support creativity on LW2.0, I think the right thing would be to strongly encourage and support people who spend effort developing contrarian arguments or exploring underexplored areas, and do so in a somewhat rigorous/serious manner, rather than just lowering the effort threshold (and with it the signal-to-noise ratio) of some posts and comments.

This view derives from models I find hard-to-verbalize given the time I have available to write this. Happy to double-crux though.

Comment by jacobjacob on Inadequacy and Modesty · 2017-11-09T21:18:47.735Z · score: 8 (2 votes) · LW · GW
inadequacy should be a function of how much work and money is being spent on a particular subject.

I strongly disagree. Society seems to have no problem squandering money on e.g. irreproducible subfields of psychology or ineffective charity.

Comment by jacobjacob on Bet Payoff 1: OpenPhil/MIRI Grant Increase · 2017-11-09T20:43:45.872Z · score: 9 (2 votes) · LW · GW

I agree with your conclusion, that the important takeaway is to build models of whom to trust when and on what matters.

Nonetheless, I disagree that it requires as much work to decide to trust the academic field of math as to trust MIRI. Whenever you're using the outside view, you need to define a reference class. I've never come across this used as an objection to the outside view. That's probably because there often is one such class more salient than others: "people who have social status within field X". After all, one of the key explanations for the evolution of human intelligence is that of an arms race in social cognition. For example, you see this in studies where people are clueless at solving logic problems, unless you phrase them in terms of detecting cheaters or other breaches of social contracts (see e.g. Cheng & Holyoak, 1985 and Gigerenzer & Hug, 1992). So we should expect humans to easily figure out who has status within a field, but to have a very hard time figuring out who gets the field closer to truth.

Isn't this exactly why modesty is such an appealing and powerful view in the first place? Because choosing the reference class is so easy (not requiring much object-level investigation), and experts are correct sufficiently often that any inside view is mistaken in expectation.

Comment by jacobjacob on Bet Payoff 1: OpenPhil/MIRI Grant Increase · 2017-11-09T20:21:51.041Z · score: 4 (1 votes) · LW · GW

I think I understand this a bit better now, given also Rob's comment on FB.

On the theoretical level, that's a very interesting belief to have, because sometimes it doesn't pay rent in anticipated experience at all. Given that you cannot predict a change in direction, it seems rational to act as if your belief will not change, despite you being very confident it will change.

Your practical example is not a change of belief. It's rather saying: "I now believe I'll increase funding to MIRI, but my credence is still <70%, as the formal decision process usually uncovers many surprises."

Comment by jacobjacob on Bet Payoff 1: OpenPhil/MIRI Grant Increase · 2017-11-09T18:57:04.165Z · score: 9 (2 votes) · LW · GW
I don’t think it would be too surprising if that movement on my end continues.

I'm very confused about the notion of fitting expected updating within a Bayesian framework: phenomena like the fact that a Bayesian agent should expect never to change any particular belief, even though they might have high credence that they'll change some belief; or that a Bayesian agent can recognize a median belief change ≠ 0 but not a mean belief change ≠ 0.
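Here's a minimal numerical illustration of these phenomena (my own toy numbers): prior 1/2, and an experiment with likelihoods 9/10 and 3/10 under the hypothesis and its negation.

from fractions import Fraction as F

prior = F(1, 2)
p_e_given_h, p_e_given_not_h = F(9, 10), F(3, 10)

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h  # P(E) = 3/5
post_if_e = prior * p_e_given_h / p_e                      # P(H|E) = 3/4
post_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)      # P(H|not E) = 1/8

mean_change = p_e * (post_if_e - prior) + (1 - p_e) * (post_if_not_e - prior)
print(mean_change)        # 0: the expected belief change is exactly zero
print(post_if_e - prior)  # 1/4: yet with probability 3/5 the agent updates upwards

So the mean change is zero (conservation of expected evidence), while the median change is +1/4.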

The Copernican Revolution from the Inside

2017-11-01T10:51:50.127Z · score: 140 (66 votes)