Comment by jacobjacob on What are CAIS' boldest near/medium-term predictions? · 2019-04-04T10:13:45.744Z · score: 2 (1 votes) · LW · GW
This is a prediction I make, with "general-seeming" replaced by "more general", and I think of this as a prediction inspired much more by CAIS than by EY/Bostrom.

I notice I'm confused. My model of CAIS predicts that there would be poor returns to building general services compared to specialised ones (though this might be more of a claim about economics than a claim about the nature of intelligence).

Comment by jacobjacob on What are CAIS' boldest near/medium-term predictions? · 2019-03-29T14:01:16.820Z · score: 12 (3 votes) · LW · GW

The following exchange is also relevant:

Raiden:

Robin, or anyone who agrees with Robin:

What evidence can you imagine would convince you that AGI would go FOOM?

jprwg:

While I find Robin's model more convincing than Eliezer's, I'm still pretty uncertain.

That said, two pieces of evidence that would push me somewhat strongly towards the Yudkowskian view:

  • A fairly confident scientific consensus that the human brain is actually simple and homogeneous after all. This could perhaps be the full blank-slate version of Predictive Processing as Scott Alexander discussed recently, or something along similar lines.
  • Long-run data showing AI systems gradually increasing in capability without any increase in complexity. The AGZ example here might be part of an overall trend in that direction, but as a single data point it really doesn't say much.

RobinHanson:

This seems to me a reasonable statement of the kind of evidence that would be most relevant.

Comment by jacobjacob on What are CAIS' boldest near/medium-term predictions? · 2019-03-29T13:11:33.030Z · score: 6 (3 votes) · LW · GW

EY seems to have interpreted AlphaGo Zero as strong evidence for his view in the AI-foom debate, though Hanson disagrees.

EY:

Showing excellent narrow performance *using components that look general* is extremely suggestive [of a future system that can develop lots and lots of different "narrow" expertises, using general components].

Hanson:

It is only broad sets of skills that are suggestive. Being very good at specific tasks is great, but doesn't suggest much about what it will take to be good at a wide range of tasks. [...] The components look MORE general than the specific problem on which they are applied, but the question is: HOW general overall, relative to the standard of achieving human level abilities across a wide scope of tasks.

It's somewhat hard to hash this out as an absolute rather than conditional prediction (e.g. conditional on there being breakthroughs involving some domain-specific hacks, and major labs continuing to work on them, they will somewhat quickly be superseded by breakthroughs with general-seeming architectures).

Maybe EY would be more bullish on Starcraft without imitation learning, or AlphaFold with only 1 or 2 modules (rather than 4/5 or 8/9 depending on how you count).

Comment by jacobjacob on What would you need to be motivated to answer "hard" LW questions? · 2019-03-28T23:27:58.061Z · score: 4 (2 votes) · LW · GW

If people provided this as a service, they might be risk-averse (it might make sense for people to be risk-averse with their runway), which means you'd have to pay more than their hourly rate divided by the chance of winning.

This might not be a problem, as long as the market does the cool thing markets do: allowing you to find someone with a lower opportunity cost than you for doing something.

Comment by jacobjacob on What would you need to be motivated to answer "hard" LW questions? · 2019-03-28T23:24:18.055Z · score: 4 (4 votes) · LW · GW

I think the question, narrowly interpreted as "what would cause me to spend more time on the object-level answering questions on LW" doesn't capture most of the exciting things that happen when you build an economy around something. In particular, that suddenly makes various auxiliary work valuable. Examples:

  • Someone spending a year living off their savings, learning how to summarise comment threads, with the expectation that people will pay well for this ability in the following years
  • A competent literature-reviewer gathering 5 friends to teach them the skill, in order to scale their reviewing capacity to earn more prize money
  • A college student building up a strong forecasting track-record and then being paid enough to do forecasting for a few hours each week that they can pursue their own projects instead of having to work full-time over the summer
  • A college student dropping out to work full-time on answering questions on LessWrong, expecting this to provide a stable funding stream for 2+ years
  • A professional with a stable job and family and a hard time making changes to their life-situation, taking 2 hours/week off from work to do skilled cost-effectiveness analyses, while being fairly compensated
  • Some people starting a “Prize VC” or “Prize market maker”, which attempts to find potential prize winners and connect them with prizes (or vice versa), while taking a cut somehow

I have an upcoming post where I describe in more detail what I think is required to make this work.

Comment by jacobjacob on Unconscious Economies · 2019-03-28T16:45:21.285Z · score: 4 (2 votes) · LW · GW

Thanks for pointing that out, the mention of YouTube might be misleading. Overall this should be read as a first-principles argument, rather than an empirical claim about YouTube in particular.

Comment by jacobjacob on What are CAIS' boldest near/medium-term predictions? · 2019-03-28T16:11:50.551Z · score: 4 (2 votes) · LW · GW

Why are you measuring it in proportion of time-until-agent-AGI and not years? If it takes 2 years from comprehensive services to agent, and most jobs are automatable within 1.5 years, that seems a lot less striking and important than the claim pre-operationalisation.

Comment by jacobjacob on What are CAIS' boldest near/medium-term predictions? · 2019-03-28T13:17:32.439Z · score: 7 (4 votes) · LW · GW

Wei_Dai writes:

A major problem in predicting CAIS safety is to understand the order in which various services are likely to arise, in particular whether risk-reducing services are likely to come before risk-increasing services. This seems to require a lot of work in delineating various kinds of services and how they depend on each other as well as on algorithmic advancements, conceptual insights, computing power, etc. (instead of treating them as largely interchangeable or thinking that safety-relevant services will be there when we need them). Since this analysis seems very hard to do much ahead of time, I think we'll have to put very wide error bars on any predictions of whether CAIS would be safe or unsafe, until very late in the game.
Comment by jacobjacob on What are CAIS' boldest near/medium-term predictions? · 2019-03-28T13:16:50.692Z · score: 7 (4 votes) · LW · GW

Ricraz writes:

I'm broadly sympathetic to the empirical claim that we'll develop AI services which can replace humans at most cognitively difficult jobs significantly before we develop any single superhuman AGI (one unified system that can do nearly all cognitive tasks as well as or better than any human).

I’d be interested in operationalising this further, and hearing takes on how many years “significantly before” entails.

He also adds:

One plausible mechanism is that deep learning continues to succeed on tasks where there's lots of training data, but doesn't learn how to reason in general ways - e.g. it could learn from court documents how to imitate lawyers well enough to replace them in most cases, without being able to understand law in the way humans do. Self-driving cars are another pertinent example. If that pattern repeats across most human professions, we might see massive societal shifts well before AI becomes dangerous in the adversarial way that’s usually discussed in the context of AI safety.

What are CAIS' boldest near/medium-term predictions?

2019-03-28T13:14:32.800Z · score: 32 (9 votes)
Comment by jacobjacob on Do you like bullet points? · 2019-03-26T23:42:18.258Z · score: 5 (3 votes) · LW · GW

Another data-point: I love bullet points and have been sad and confused about how little they're used in writing generally. In fact, when reading dense text, I often invest a few initial minutes in converting it to bullet points just in order to be able to read and understand it better.

Here's PG on a related topic, sharing some of his skepticism for when bullet points are not appropriate: http://paulgraham.com/nthings.html

Comment by jacobjacob on Understanding information cascades · 2019-03-25T22:03:23.949Z · score: 4 (1 votes) · LW · GW

One should be able to think quantitatively about that, e.g. how many questions you need to ask until you find out whether your extremization hurt you. I'm surprised by the suggestion that GJP didn't do enough, unless their extremizations were frequently in the >90% range.
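
Here's a rough Monte Carlo sketch of that question (my own toy model, not GJP's actual aggregation; the under-confidence factor, extremization exponent and question counts are made-up assumptions): assume the crowd aggregate is a shrunk-towards-0.5 version of the true probability, extremize it, and count how many resolved questions it takes for the Brier-score comparison to reliably favour one side.

    import random

    def extremize(p, a=2.5):
        # Push p away from 0.5; a > 1 is the extremization exponent.
        return p**a / (p**a + (1 - p)**a)

    def brier(p, outcome):
        return (p - outcome) ** 2

    def extremizing_win_rate(n_questions, a=2.5, shrink=0.7, trials=2000):
        # Fraction of simulated "tournaments" of n_questions resolved questions
        # in which the extremized aggregate gets a lower total Brier score
        # than the raw (under-confident) aggregate.
        wins = 0
        for _ in range(trials):
            raw_total = ext_total = 0.0
            for _ in range(n_questions):
                q = random.random()                 # true probability of the event
                outcome = 1 if random.random() < q else 0
                p = 0.5 + shrink * (q - 0.5)        # under-confident crowd aggregate
                raw_total += brier(p, outcome)
                ext_total += brier(extremize(p, a), outcome)
            wins += ext_total < raw_total
        return wins / trials

    for n in (10, 50, 200):
        print(n, extremizing_win_rate(n))

When the extremization is only mildly mis-calibrated, the win fraction separates from 50% quite slowly, which is exactly the regime where you'd need a lot of resolved questions before you could tell whether it hurt you.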

Comment by jacobjacob on Understanding information cascades · 2019-03-25T22:00:06.295Z · score: 4 (2 votes) · LW · GW

I did, he said a researcher mentioned it in conversation.

Comment by jacobjacob on Unconscious Economies · 2019-03-25T21:58:37.606Z · score: 3 (2 votes) · LW · GW

Good point, there's selection pressure for things which happen to try harder to be selected for ("click me! I'm a link!"), regardless of whether they are profitable. But this is not the only pressure, and depending on what happens to a thing when it is "selected" (viewed, interviewed, etc.) this pressure can be amplified (as in OP) or countered (as in Vaniver's comment).

Comment by jacobjacob on Understanding information cascades · 2019-03-22T15:10:05.785Z · score: 4 (2 votes) · LW · GW
more recent data suggests that the successes of the extremizing algorithm during the forecasting tournament were a fluke.

Do you have a link to this data?

Comment by jacobjacob on Understanding information cascades · 2019-03-14T15:45:50.640Z · score: 10 (3 votes) · LW · GW

I haven't looked through your links in much detail, but wanted to reply to this:

Overall I would suggest to approach this with some intellectual humility and study existing research more, rather than try to reinvent a large part of network science on LessWrong. (My guess is something like >2000 research years were spent on the topic, often by quite good people.)

I either disagree or am confused. It seems good to use resources to outsource your ability to do literature reviews, distillation or extrapolation, to someone with higher comparative advantage. If the LW question feature can enable that, it will make the market for intellectual progress more efficient; and I wanted to test whether this was so.

I am not trying to reinvent network science, and I'm not that interested in the large amount of theoretical work that has been done. I am trying to 1) apply these insights to very particular problems I face (relating to forecasting and more); and 2) think about this from a cost-effectiveness perspective.

I am very happy to trade money for my time in answering these questions.

(Neither 1) nor 2) seems like something I expect the existing literature to have been very interested in. I believe this for similar reasons to those Holden Karnofsky expresses here.)

Comment by jacobjacob on Understanding information cascades · 2019-03-14T15:20:16.372Z · score: 6 (4 votes) · LW · GW

Seems like a sensible worry, and we did consider some version of it. My reasoning was roughly:

1) The questions feature is quite new, and if it will be very valuable, most use-cases and the proper UI haven't been discovered yet (these can be hard to predict in advance without getting users to play around with different things and then talking to them).

No one has yet attempted to use multiple questions. So it would be valuable for the LW team and the community to experiment with that, despite possible countervailing considerations (any good experiment will have sufficient uncertainty that such considerations will always exist).

2) Questions 1/2, 3 and 4 are quite different, and it seems good to be able to do research on one sub-problem without taking mindshare from everyone working on any subproblem.

Formalising continuous info cascades? [Info-cascade series]

2019-03-13T10:55:46.133Z · score: 17 (4 votes)

How large is the harm from info-cascades? [Info-cascade series]

2019-03-13T10:55:38.872Z · score: 23 (4 votes)

How can we respond to info-cascades? [Info-cascade series]

2019-03-13T10:55:25.685Z · score: 15 (3 votes)

Distribution of info-cascades across fields? [Info-cascade series]

2019-03-13T10:55:17.194Z · score: 15 (3 votes)

Understanding information cascades

2019-03-13T10:55:05.932Z · score: 54 (18 votes)
Comment by jacobjacob on Understanding information cascades · 2019-03-13T10:46:06.236Z · score: 3 (2 votes) · LW · GW

See this post for a good, simple mathematical description of the discrete version of the phenomenon.

Comment by jacobjacob on How large is the harm from info-cascades? [Info-cascade series] · 2019-03-13T10:26:49.855Z · score: 12 (4 votes) · LW · GW

Ben Pace and I (with some help from Niki Shams) made a Guesstimate model of how much information cascades are costing science in terms of wasted grant money. The model is largely based on the excellent paper “How citation distortions create unfounded authority: analysis of a citation network” (Greenberg, 2009), which traces how an uncertain claim in biomedicine is inflated into established knowledge over a period of 15 years and used to justify ~$10 million in grant money from the NIH (we calculated the number ourselves here).

There are many open questions about some of the inputs to our model, as well as how this generalises outside of academia (or even outside of biomedicine). However, we see this as a “Jellybaby” in Douglas Hubbard’s sense -- it’s a first data-point and stab at the problem, which brings us from “no idea how big or small the costs of info-cascades are” to at least “it is plausible though very uncertain that the costs can be on the order of magnitude of billions of dollars, yearly, in academic grant money”.

Comment by jacobjacob on How large is the harm from info-cascades? [Info-cascade series] · 2019-03-13T10:26:32.892Z · score: 7 (4 votes) · LW · GW

This might be an interesting pointer.

In Note-8 in the supplementary materials, Greenberg begins to quantify the problem. He defines an amplification measure for paper P as the number of citation-paths originating at P and terminating at all other papers, except for paths of length 1 flowing directly to primary data papers. The amplification density of a network is the mean amplification across its papers.

Greenberg then finds that, in the particular network analysed, you can achieve amplification density of about 1000 over a 15 year time-frame. This density grows exponentially with a doubling time of very roughly 2 years.
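
To make the definition concrete, here's a minimal sketch of the computation on a hypothetical toy network (my own illustration, not Greenberg's data or code; it assumes an acyclic citation graph):

    # Hypothetical toy citation network: each paper maps to the papers it cites.
    citations = {
        "P": ["A", "B"],
        "A": ["data1"],
        "B": ["A", "data1"],
        "data1": [],          # a primary data paper
    }
    primary_data = {"data1"}

    def amplification(paper):
        # All citation-paths starting at `paper`, minus the length-1 paths
        # that flow directly to primary data papers.
        def paths_from(p):
            # Each cited paper starts one path, plus all of its extensions.
            return sum(1 + paths_from(q) for q in citations[p])
        direct_to_data = sum(1 for q in citations[paper] if q in primary_data)
        return paths_from(paper) - direct_to_data

    # 6 paths from P: P->A, P->A->data1, P->B, P->B->A, P->B->A->data1, P->B->data1
    print(amplification("P"))

    # Amplification density of the network = mean amplification across its papers.
    print(sum(amplification(p) for p in citations) / len(citations))

The toy numbers are arbitrary; the point is just that path counts, and hence amplification, grow combinatorially as the network deepens, which fits the exponential growth Greenberg observes.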

Comment by jacobjacob on Understanding information cascades · 2019-03-13T10:17:00.402Z · score: 18 (4 votes) · LW · GW

Here's a quick bibliography we threw together.

Background:

Previous LessWrong posts referring to info cascades:

And then here are all the LW posts we could find that used the concept (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11). Not sure how relevant they are, but might be useful in orienting around the concept.

Comment by jacobjacob on [deleted post] 2019-03-10T11:24:14.274Z

SPOILER WARNING

Schelling cafe? What Schelling cafe?

One you went to before, maybe?

And who on earth do you talk to? Maybe the guy who sat with us in a cab after EA Global London last fall...

Comment by jacobjacob on Unconscious Economies · 2019-02-27T17:38:20.638Z · score: 17 (6 votes) · LW · GW

I found myself in a situation like: "if this is common knowledge within econ, writing an explanation would signal I'm not part of econ and hence my econ opinions are low status", but decided to go ahead anyway.

It's good you found it helpful. I'm wondering whether equilibria like the above are a mechanism preventing important stuff from being distilled.

Comment by jacobjacob on Unconscious Economies · 2019-02-27T17:34:05.450Z · score: 8 (5 votes) · LW · GW

I really appreciate you citing that.

I should have made it clearer, but for reference, the works I've been exposed to:

  • Hal Varian's undergrad textbook

  • Marginal Revolution University

  • Some amount of listening to Econ Talk, reading Investopedia and Wikipedia articles

  • MSc degree at LSE

Unconscious Economies

2019-02-27T12:58:50.320Z · score: 68 (25 votes)
Comment by jacobjacob on Less Competition, More Meritocracy? · 2019-02-27T11:50:32.523Z · score: 3 (2 votes) · LW · GW

For section III. it would be really helpful to concretely work through what happens in the examples of divorce, nuclear war, government default, etc. What's a plausible thought process of the agents involved?

My current model is something like "my marriage is worse than I find tolerable, so I have nothing to lose. Now that divorce is legal, I might as well gamble my savings in the casino. If I win we could move to a better home and maybe save the relationship; if I lose we'll get divorced."

People who have nothing to lose start taking risks which fill up the merely possibly bad outcomes until they start mattering.

Comment by jacobjacob on How important is it that LW has an unlimited supply of karma? · 2019-02-12T10:37:09.695Z · score: 4 (3 votes) · LW · GW

In the broader economy, it's not the case that "If buying things reduced your income, people stop buying things, and eventually money stops flowing altogether".

So the only way that makes sense to me is if you model content as a public good which no user is incentivised to contribute to maintaining.

Speculatively, this might be avoided if votes were public: because then voting would be a costly signal of one's epistemic values or other things.

Comment by jacobjacob on How important is it that LW has an unlimited supply of karma? · 2019-02-12T10:32:22.489Z · score: 2 (2 votes) · LW · GW
though I'm not sure how that is calculated from one's karma

I believe it's proportional to the log of your user karma. But I'm not sure.

One can get high karma from a small amount of content that a small number of sufficiently high-karma users double-upvote.

There is still an incentive gradient towards "least publishable units".

Suppose you have a piece of work worth 18 karma to high-karma user U. However, U's strong upvote is only worth 8 karma.

If you just post one piece of work, you get 8 karma. If you split your work into three pieces, each of which U values at 6 karma, you're better off. U might strong-upvote all of them (they'd rather allocate a little too much karma than way too little), and you get 24 karma.

To extend the metaphor in the original question: maybe if the world economy ran on the equivalent of strong upvotes, there would still be cars around, yet no one could buy airplanes.

Comment by jacobjacob on How important is it that LW has an unlimited supply of karma? · 2019-02-12T10:17:48.260Z · score: 2 (2 votes) · LW · GW

Do you have details on when and why that was removed? Or past posts discussing that system?

How important is it that LW has an unlimited supply of karma?

2019-02-11T01:41:51.797Z · score: 30 (12 votes)
Comment by jacobjacob on The Case for a Bigger Audience · 2019-02-10T00:21:08.240Z · score: 6 (4 votes) · LW · GW

I was going to say 2000 times sounded like way too much, but running the guesstimate, that means on average using "common knowledge" once every other day since it was published, and "out to get you" once every third day, and that does seem consistent with my experience hanging out with you (though of course with a fat tail to the distribution: using some concepts like 10 times in a single long hangout).

Comment by jacobjacob on When should we expect the education bubble to pop? How can we short it? · 2019-02-10T00:06:57.591Z · score: 20 (9 votes) · LW · GW

Asset bubbles can be Nash equilibria for a while. This is a really important point. If surrounded by irrational agents, it might be rational to play along with the bubble instead of shorting and waiting. "The market can stay irrational longer than you can stay solvent."

For most of 2017, you shouldn't have shorted crypto, even if you knew it would eventually go down. The rising markets and the interest on your short would kill you. It might take big hedge funds with really deep liquidity to ride out the bubble, and even they might not be able to make it if they get in too early. In 2008 none of the investment banks could short things early enough because no one else was doing it.

The difference between genius (shorting at the peak) and really smart (shorting pre-peak) matters a lot in markets. (There's this scene in the Big Short where some guy covers the cost of his BBB shorts by buying a ton of AAA-rated stuff, assuming that at least those will keep rising.)

So shorting and buying are not symmetric (as you might treat them in a mathematical model, only differing by the sign on the quantity of assets bought). Shorting is much harder and much more dangerous.
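
Here's a stylized sketch of that asymmetry (purely illustrative numbers, not a model of any real market): a short that is eventually right can still be wiped out by mark-to-market losses and borrow fees during the run-up.

    # Hypothetical bubble: price runs up for a while, then crashes below the
    # short's entry price.
    prices = [100, 130, 170, 220, 280, 60]
    entry = prices[0]
    collateral = 100        # cash posted against a 1-unit short opened at 100
    borrow_fee = 0.02       # per-period borrow fee on the current asset value

    fees = 0.0
    for t, price in enumerate(prices[1:], start=1):
        fees += borrow_fee * price               # pay to keep the borrow open
        equity = collateral + (entry - price) - fees
        print(f"t={t} price={price} equity={equity:.1f}")
        if equity <= 0:
            print("wiped out before the crash arrives")
            break

Buying has no analogous forced exit: the worst case for an unleveraged buyer is losing what they put in, so they can wait out the irrationality.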

In fact, my current model [1] is that this is the very reason financial markets can exhibit bubbles of "irrationality" despite all their beautiful properties of self-correction and efficiency.

[1] For transparency, I basically downloaded this model from davidmanheim.

When should we expect the education bubble to pop? How can we short it?

2019-02-09T21:39:10.918Z · score: 41 (12 votes)
Comment by jacobjacob on X-risks are a tragedies of the commons · 2019-02-09T21:00:03.333Z · score: 5 (3 votes) · LW · GW

In case others haven't seen it, here's a great little matrix summarising the classification of goods on "rivalry" and "excludability" axes.

Comment by jacobjacob on (notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach · 2019-02-05T10:07:42.749Z · score: 3 (2 votes) · LW · GW

Hanson's speed-weighted voting reminds me a bit of quadratic voting.

Comment by jacobjacob on What are some of bizarre theories based on anthropic reasoning? · 2019-02-05T09:40:54.184Z · score: 3 (2 votes) · LW · GW

I presume that, unlike X-risk, s-risks don't remove the vast majority of observer moments.

Comment by jacobjacob on Announcement: AI alignment prize round 4 winners · 2019-01-22T16:56:42.368Z · score: 12 (5 votes) · LW · GW

I disagree with the view that it's bad to spend the first few months prizing top researchers who would have done the work anyway. This _in and of itself_ is clearly burning cash, yet the point is to change incentives over a longer time-frame.

If you think research output is heavy-tailed, what you should expect to observe is something like this happening for a while, until promising tail-end researchers realise there's a stable stream of value to be had here, and put in the effort required to level up and contribute themselves. It's not implausible to me that this would take >1 year of prizes.

Expecting lots of important counterfactual work that beats the current best work to come out of the woodwork within ~6 months seems to assume that A) making progress on alignment is quite tractable, and B) the ability to do so is fairly widely distributed across people; both to a seemingly unjustified extent.

I personally think prizes should be announced together with precommitments to keep delivering them for a non-trivial amount of time. I believe this because I think changing incentives involves changing expectations, in a way that changes medium-term planning. I expect people to have qualitatively different thoughts if their S1 reliably believes that fleshing out the-kinds-of-thoughts-that-take-6-months-to-flesh-out will be rewarded after those 6 months.

That's expensive, in terms of both money and trust.

Comment by jacobjacob on What are good ML/AI related prediction / calibration questions for 2019? · 2019-01-16T17:56:20.563Z · score: 10 (3 votes) · LW · GW

elityre has done work on this for BERI, suggesting >30 questions.

Regarding the question metatype, Allan Dafoe has offered a set of desiderata in the appendix to his AI governance research agenda.

Comment by jacobjacob on Why is so much discussion happening in private Google Docs? · 2019-01-12T11:17:04.531Z · score: 14 (7 votes) · LW · GW

If true, that sounds like a bug and not a feature of LW.

Comment by jacobjacob on What is a reasonable outside view for the fate of social movements? · 2019-01-10T23:36:20.535Z · score: 3 (2 votes) · LW · GW

So: habryka did say "anyone" in the original description, and so he will pay both respondents who completed the bounty according to original specifications (which thereby excludes gjm). I will only pay Radamantis as I interpreted him as "claiming" the task with his original comment.

I suggest you send a PM with payment details.

Comment by jacobjacob on What is a reasonable outside view for the fate of social movements? · 2019-01-10T23:10:03.202Z · score: 1 (1 votes) · LW · GW

I'll PM habryka about what to do with the bounty given that there were two respondents.

Overall I'm excited this data and analysis were generated, and I will sit down to take a look and update this weekend. :)

Comment by jacobjacob on What is a reasonable outside view for the fate of social movements? · 2019-01-07T19:37:29.714Z · score: 2 (2 votes) · LW · GW

What's your "reasonable sounding metric" of success?

Comment by jacobjacob on What is a reasonable outside view for the fate of social movements? · 2019-01-07T02:44:56.319Z · score: 19 (6 votes) · LW · GW

I add $30 to the bounty.

There are 110 items in the list. So 25% is ~28.

I hereby set the random seed as whatever will be the last digit and first two decimals (3 digits total) of the S&P 500 Index price on January 7, 10am GMT-5, as found in the interactive chart by Googling "s&p500".

For example, the value of the seed on 10am January 4 was "797".

[I would have used the NIST public randomness beacon (v2.0) but it appears to be down due to government shutdown :( ].

Instructions for choosing the movements

Let the above-generated seed be n.

Using Python 3:

import random
random.seed(n)  # n = the 3-digit seed derived from the S&P 500 price above
# Draw 28 distinct indices (25% of the 110 movements), numbered 1..110
indices = sorted(random.sample(range(1, 111), 28))
Comment by jacobjacob on Does anti-malaria charity destroy the local anti-malaria industry? · 2019-01-06T00:41:19.707Z · score: 3 (2 votes) · LW · GW

I'm confused: why doesn't variability cause any trouble in the standard models? It seems that if producers are risk-averse, it results in less production than otherwise.

What is a reasonable outside view for the fate of social movements?

2019-01-04T00:21:20.603Z · score: 36 (12 votes)
Comment by jacobjacob on Reinterpreting "AI and Compute" · 2018-12-27T16:17:15.242Z · score: 2 (2 votes) · LW · GW

I'm confused. Do you mean "worlds" as in "future trajectories of the world" or as in "subcommunities of AI researchers"? And what's a concrete example of gains from trade between worlds?

Comment by jacobjacob on Against Tulip Subsidies · 2018-12-20T05:10:52.381Z · score: 1 (1 votes) · LW · GW

The link to "many rigorous well-controlled studies" is broken.

List of previous prediction market projects

2018-10-22T00:45:01.425Z · score: 33 (9 votes)
Comment by jacobjacob on Some cruxes on impactful alternatives to AI policy work · 2018-10-13T21:49:30.228Z · score: 10 (3 votes) · LW · GW

Suppose your goal is not to maximise an objective, but just to cross some threshold. This is plausibly the situation with existential risk (e.g. "maximise probability of okay outcome"). Then, if you're above the threshold, you want to minimise variance, whereas if you're below it, you want to maximise variance. (See this for a simple example of this strategy applied to a game.) If Richard believes we are currently above the x-risk threshold and Ben believes we are below it, this might be a simple crux.
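
Here's a minimal sketch of that logic (my own illustration, not the linked game; outcomes modelled as Gaussian purely for simplicity):

    import math

    def p_cross(mu, sigma, threshold=0.0):
        # P(X >= threshold) for X ~ Normal(mu, sigma)
        return 0.5 * math.erfc((threshold - mu) / (sigma * math.sqrt(2)))

    for mu in (-1.0, 1.0):               # expected outcome below vs. above the threshold
        for sigma in (0.5, 1.0, 2.0):    # low vs. high variance
            print(f"mu={mu:+.1f} sigma={sigma:.1f} P(cross)={p_cross(mu, sigma):.2f}")

With the mean below the threshold, raising sigma raises the crossing probability (≈0.02 → 0.16 → 0.31); with the mean above it, the same increase lowers it (≈0.98 → 0.84 → 0.69).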

Comment by jacobjacob on Some cruxes on impactful alternatives to AI policy work · 2018-10-13T18:24:10.107Z · score: 17 (8 votes) · LW · GW
However, I think the distribution of success is often very different from the distribution of impact, because of replacement effects. If Facebook hadn't become the leading social network, then MySpace would have. If not Google, then Yahoo. If not Newton, then Leibniz (and if Newton, then Leibniz anyway).

I think this is less true for startups than for scientific discoveries, because of bad Nash equilibria stemming from founder effects. The objective which Google is maximising might not be concave. It might have many peaks, and which one you reach might be determined quite arbitrarily. Yet the peaks might have very different consequences when you have a billion users.

For lack of a concrete example... suppose a webapp W uses feature x, and this influences which audience uses the app. Then, once W has scaled and depends on that audience for substantial profit, it can't easily change x. (It might be that changing x to y wouldn't decrease profit, but just not increase it.) Yet, had they initially used y instead of x, they could have grown just as big, but they would have had a different audience. Moreover, because of network effects and returns to scale, it might not be possible for a rivalling company to build their own webapp which is basically the same thing but with y instead.

Comment by jacobjacob on Four kinds of problems · 2018-09-20T18:30:10.125Z · score: 3 (2 votes) · LW · GW

I don't want to base my argument on that video. It's based on the intuitions for philosophy I developed doing my BA in it at Oxford. I expect to be able to find better examples, but don't have the energy to do that now. This should be read more as "I'm pointing at something that others who have done philosophy might also have experienced", rather than "I'm giving a rigorous defense of the claim that even people outside philosophy might appreciate".

Comment by jacobjacob on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-10T20:50:33.109Z · score: 1 (1 votes) · LW · GW

That still seems too vague to be useful. I don't have the slack to do the work of generating good examples myself at the moment.

Comment by jacobjacob on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-08T23:20:35.268Z · score: 16 (8 votes) · LW · GW

This would have been more readable if you gave concrete examples of each kind of problem. It seems like your claim might be a useful dichotomy, but in its current state it's likely not going to cause me to analyse problems differently or take different actions.

Comment by jacobjacob on Quick Thoughts on Generation and Evaluation of Hypotheses in a Community · 2018-09-07T12:25:50.197Z · score: 16 (6 votes) · LW · GW

Somewhat tangential, but...

You point to the following process:

Generation --> Evaluation --> Acceptance/rejection.

However, generation is often risky, and not everyone has the capacity to absorb that risk. For example, one might not have the exploratory space needed to pursue spontaneous 5-hour reading sprints while working full-time.

Hence, I think much of society looks like this:

Justification --> Evaluation --> Generation --> Evaluation --> Acceptance/rejection.

I think some very important projects never happen because the people who have taken all the inferential steps necessary to understand them are not the same as those evaluating them, and so there's an information asymmetry.

Here's PG:

That's one of California's hidden advantages: the mild climate means there's lots of marginal space. In cold places that margin gets trimmed off. There's a sharper line between outside and inside, and only projects that are officially sanctioned — by organizations, or parents, or wives, or at least by oneself — get proper indoor space. That raises the activation energy for new ideas. You can't just tinker. You have to justify.

(This is one of the problems I model impact certificates as trying to solve.)

Comment by jacobjacob on Musings on LessWrong Peer Review · 2018-09-06T22:45:15.597Z · score: 5 (3 votes) · LW · GW

Have you written any of the upcoming posts yet? If so, can you link them?

Comment by jacobjacob on History of the Development of Logical Induction · 2018-09-02T00:07:24.914Z · score: 5 (3 votes) · LW · GW

How causally important were Dutch-book theorems in suggesting to you that market behaviour could be used for logical induction? This seems like the most "non sequitur" part of the story. Suddenly, what seems like a surprising insight was just there.

I predict somewhere between "very" and "crucially" important.

Four kinds of problems

2018-08-21T23:01:51.339Z · score: 41 (19 votes)

Brains and backprop: a key timeline crux

2018-03-09T22:13:05.432Z · score: 88 (23 votes)

The Copernican Revolution from the Inside

2017-11-01T10:51:50.127Z · score: 140 (66 votes)