Is a near-term, self-sustaining Mars colony impossible? 2020-06-03T22:43:08.501Z
ESRogs's Shortform 2020-04-29T08:03:28.820Z
Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" 2020-01-03T00:33:09.994Z
'Longtermism' definitional discussion on EA Forum 2019-08-02T23:53:03.731Z
Henry Kissinger: AI Could Mean the End of Human History 2018-05-15T20:11:11.136Z
AskReddit: Hard Pills to Swallow 2018-05-14T11:20:37.470Z
Predicting Future Morality 2018-05-06T07:17:16.548Z
AI Safety via Debate 2018-05-05T02:11:25.655Z
FLI awards prize to Arkhipov’s relatives 2017-10-28T19:40:43.928Z
Functional Decision Theory: A New Theory of Instrumental Rationality 2017-10-20T08:09:25.645Z
A Software Agent Illustrating Some Features of an Illusionist Account of Consciousness 2017-10-17T07:42:28.822Z
Neuralink and the Brain’s Magical Future 2017-04-23T07:27:30.817Z
Request for help with economic analysis related to AI forecasting 2016-02-06T01:27:39.810Z
[Link] AlphaGo: Mastering the ancient game of Go with Machine Learning 2016-01-27T21:04:55.183Z
[LINK] Deep Learning Machine Teaches Itself Chess in 72 Hours 2015-09-14T19:38:11.447Z
[Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim 2015-08-19T06:37:21.049Z
[Link] Neural networks trained on expert Go games have just made a major leap 2015-01-02T15:48:16.283Z
[LINK] Attention Schema Theory of Consciousness 2013-08-25T22:30:01.903Z
[LINK] Well-written article on the Future of Humanity Institute and Existential Risk 2013-03-02T12:36:39.402Z
The Center for Sustainable Nanotechnology 2013-02-26T06:55:18.542Z


Comment by esrogs on How likely is it that SARS-CoV-2 originated in a laboratory? · 2021-01-26T00:25:54.735Z · LW · GW

Got it, thanks for the clarification.

Comment by esrogs on Grokking illusionism · 2021-01-26T00:24:55.768Z · LW · GW

Hmm, maybe it's worth distinguishing two things that "mental states" might mean:

  1. intermediate states in the process of executing some cognitive algorithm, which have some data associated with them
  2. phenomenological states of conscious experience

I guess you could believe that a p-zombie could have #1, but not #2.

Comment by esrogs on Grokking illusionism · 2021-01-26T00:16:57.218Z · LW · GW

> Consciousness/subjective experience describes something that is fundamentally non-material.

More non-material than "love" or "three"?

It makes sense to me to think of "three" as being "real" in some sense independently from the existence of any collection of three physical objects, and in that sense having a non-material existence. (And maybe you could say the same thing for abstract concepts like "love".)

And also, three-ness is a pattern that collections of physical things might correspond to.

Do you think of consciousness as being non-material in a similar way? (Where the concept is not fundamentally a material thing, but you can identify it with collections of particles.)

Comment by esrogs on Grokking illusionism · 2021-01-26T00:02:01.158Z · LW · GW

> If you just assume that there's no primitive for consciousness, I would agree that the argument for illusionism is extremely strong since [unconscious matter spontaneously spawning consciousness] is extremely implausible.

How is this implausible at all? All kinds of totally real phenomena are emergent. There's no primitive for temperature, yet it emerges out of the motions of many particles. There's no primitive for wheel, but round things that roll still exist.

Maybe I've misunderstood your point though?

Comment by esrogs on Grokking illusionism · 2021-01-25T23:52:49.172Z · LW · GW

> This is a familiar dialectic in philosophical debates about whether some domain X can be reduced to Y (meta-ethics is a salient comparison to me). The anti-reductionist (A) will argue that our core intuitions/concepts/practices related to X make clear that it cannot be reduced to Y, and that since X must exist (as we intuitively think it does), we should expand our metaphysics to include more than Y. The reductionist (R) will argue that X can in fact be reduced to Y, and that this is compatible with our intuitions/concepts/everyday practices with respect to X, and hence that X exists but it’s nothing over and above Y. The nihilist (N), by contrast, agrees with A that it follows from our intuitions/concepts/practices related to X that it cannot be reduced to Y, but agrees with R that there is in fact nothing over and above Y, and so concludes that there is no X, and that our intuitions/concepts/practices related to X are correspondingly misguided. Here, the disagreement between A vs. R/N is about whether more than Y exists; the disagreement between R vs. A/N is about whether a world of only Y “counts” as a world with X. This latter often begins to seem a matter of terminology; the substantive questions have already been settled.

Is this a well-known phenomenon? I think I've observed this dynamic before and found it very frustrating. It seems like philosophers keep executing the following procedure:

  1. Take a sensible, but perhaps vague, everyday concept (e.g. consciousness, or free will), and give it a precise philosophical definition, but bake in some dubious, anti-reductionist assumptions into the definition.
  2. Discuss the concept in ways that conflate the everyday concept and the precise philosophical one. (Failing to make clear that the philosophical concept may or may not be the best formalization of the folk concept.)
  3. Realize that the anti-reductionist assumptions were false.
  4. Claim that the everyday concept is an illusion.
  5. Generate confusion (along with full employment for philosophers?).

If you'd just said that the precisely defined philosophical concept was a provisional formalization of the everyday concept in the first place, then you wouldn't have to claim that the everyday concept was an illusion once you realize that your formalization was wrong!

Comment by esrogs on Grokking illusionism · 2021-01-25T23:32:10.900Z · LW · GW

> No one ever thought that phenomenal zombies lacked introspective access to their own mental states

I'm surprised by this. I thought p-zombies were thought not to have mental states.

I thought the idea was that they replicated human input-output behavior while having "no one home". Which sounds to me like not having mental states.

If they actually have mental states, then what separates them from the rest of us?

Comment by esrogs on How likely is it that SARS-CoV-2 originated in a laboratory? · 2021-01-25T22:09:19.474Z · LW · GW

This may be a pedantic comment, but I'm a bit confused by how your comment starts:

> I've done over 200 hours of research on this topic and have read basically all the sources the article cites. That said, I don't agree with all of the claims.

The "That said, ..." part seems to imply that what follows is surprising. As though the reader expects you to agree with all the claims. But isn't the default presumption that, if you've done a whole bunch of research into some controversial question, that the evidence is mixed?

In other words, when I hear, "I've done over 200 hours of research ... and have read ... all the sources", I think, "Of course you don't agree with all the claims!" And it kind of throws me off that you seem to expect your readers to think that you would agree with all the claims.

Is the presumption that someone would only spend a whole bunch of hours researching these claims if they thought they were highly likely to be true? Or that only an uncritical, conspiracy theory true believer would put in so much time into looking into it?

Comment by esrogs on The Box Spread Trick: Get rich slightly faster · 2021-01-21T23:21:09.778Z · LW · GW

I used SPX Dec '22, 2700/3000 (S&P was closer to those prices when I entered the position). And smart routing I think. Whatever the default is. I didn't manually choose an exchange.

Comment by esrogs on The Box Spread Trick: Get rich slightly faster · 2021-01-21T17:01:46.080Z · LW · GW

I've been able to get closer to 0.6% on IB. I've done that by entering the order at a favorable price and then manually adjusting it by a small amount once a day until it gets filled. There's probably a better way to do it, but that's what's worked for me.
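(For anyone wanting to check the arithmetic: the rate you're effectively borrowing at can be backed out from the fill price, since selling the box means receiving cash now and owing the spread width at expiration. The fill price below is a hypothetical illustration, not my actual fill.)

```python
# Implied annualized borrowing rate from selling a box spread:
# you receive `price` per share now and owe `width` at expiration.

def box_implied_rate(width, price, years):
    """Annualized rate implied by selling a box at `price` with `years` to expiry."""
    return (width / price) ** (1 / years) - 1

# Hypothetical: a 2700/3000 SPX box (width 300) sold at 296.5,
# roughly two years from expiration.
rate = box_implied_rate(width=300, price=296.5, years=2.0)
print(f"implied borrowing rate: {rate:.3%}")
```

With those illustrative numbers the implied rate comes out just under 0.6% per year, in the ballpark of the fills discussed above.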

Comment by esrogs on Coherent decisions imply consistent utilities · 2021-01-14T21:33:42.180Z · LW · GW

That makes a lot of sense to me. Good points!

Comment by esrogs on Coherent decisions imply consistent utilities · 2021-01-13T20:13:53.944Z · LW · GW

> It seems to me that there has been enough unanswered criticism of the implications of coherence theorems for making predictions about AGI that it would be quite misleading to include this post in the 2019 review.

If the post is the best articulation of a line of reasoning that has been influential in people's thinking about alignment, then even if there are strong arguments against it, I don't see why that means the post is not significant, at least from a historical perspective.

By analogy, I think Searle's Chinese Room argument is wrong and misleading, but I wouldn't argue that it shouldn't be included in a list of important works on philosophy of mind.

Would you (assuming you disagreed with it)? If not, what's the difference here?

(Put another way, I wouldn't think of the review as a collection of "correct" posts, but rather as a collection of posts that were important contributions to our thinking. To me this certainly qualifies as that.)

Comment by esrogs on Coherent decisions imply consistent utilities · 2021-01-13T20:04:10.484Z · LW · GW

> On the review: I don't think this post should be in the Alignment section of the review, without a significant rewrite / addition clarifying why exactly coherence arguments are useful or important for AI alignment.

Assuming that one accepts the arguments against coherence arguments being important for alignment (as I tentatively do), I don't see why that means this shouldn't be included in the Alignment section.

The motivation for this post was its relevance to alignment. People think about it in the context of alignment. If subsequent arguments indicate that it's misguided, I don't see why that means it shouldn't be considered (from a historical perspective) to have been in the alignment stream of work (along with the arguments against it).

(Though, I suppose if there's another category that seems like a more exact match, that seems like a fine reason to put it in that section rather than the Alignment section.)

Does that make sense? Is your concern that people will see this in the Alignment section, and not see the arguments against the connection, and continue to be misled?

Comment by esrogs on ESRogs's Shortform · 2021-01-13T19:33:00.759Z · LW · GW

The workflow I've imagined is something like:

  1. human specifies function in English
  2. AI generates several candidate code functions
  3. AI generates test cases for its candidate functions, and computes their results
  4. AI formally analyzes its candidate functions and looks for simple interesting guarantees it can make about their behavior
  5. AI displays its candidate functions to the user, along with a summary of the test results and any guarantees about the input output behavior, and the user selects the one they want (which they can also edit, as necessary)

In this version, you go straight from English to code, which I think might be easier than from English to formal specification, because we have lots of examples of code with comments. (And I've seen demos of GPT-3 doing it for simple functions.)

I think some (actually useful) version of the above is probably within reach today, or in the very near future.
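As a toy illustration of steps 2, 3, and 5 (with hard-coded candidate functions standing in for the AI-generated ones, and a hard-coded spec standing in for the English description):

```python
# Toy sketch: run candidate implementations of "square a number"
# against generated test cases, and summarize pass rates so the
# user can pick a candidate.

def candidate_a(x):
    # a correct candidate
    return x * x

def candidate_b(x):
    # a plausible-looking but buggy candidate
    return x * 2

candidates = {"candidate_a": candidate_a, "candidate_b": candidate_b}
test_cases = [(0, 0), (1, 1), (3, 9), (-2, 4)]  # (input, expected output)

def summarize(candidates, test_cases):
    """Return {candidate name: fraction of test cases passed}."""
    results = {}
    for name, fn in candidates.items():
        passed = sum(1 for x, want in test_cases if fn(x) == want)
        results[name] = passed / len(test_cases)
    return results

print(summarize(candidates, test_cases))
```

The interesting parts — generating the candidates and test cases from English, and the formal analysis in step 4 — are exactly the parts stubbed out here; the scaffolding around them is the easy bit.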

Comment by esrogs on ESRogs's Shortform · 2021-01-13T18:38:55.948Z · LW · GW

Mostly it just seems significant in the grand scheme of things. Our mathematics is going to become formally verified.

In terms of actual consequences, it's maybe not so important on its own. But putting a couple pieces together (this, Dan Selsam's work, GPT), it seems like we're going to get much better AI-driven automated theorem proving, formal verification, code generation, etc relatively soon.

I'd expect these things to start meaningfully changing how we do programming sometime in the next decade.

Comment by esrogs on ESRogs's Shortform · 2021-01-13T07:04:22.530Z · LW · GW

One of the most important things going on right now, that people aren't paying attention to: Kevin Buzzard is (with others) formalizing the entire undergraduate mathematics curriculum in Lean. (So that all the proofs will be formally verified.)

See one of his talks here: 

Comment by esrogs on Imitative Generalisation (AKA 'Learning the Prior') · 2021-01-13T00:24:19.010Z · LW · GW

FYI it looks like the footnote links are broken. (Linking to "about:blank...")

Comment by esrogs on Science in a High-Dimensional World · 2021-01-12T23:08:24.784Z · LW · GW

Comment by esrogs on Science in a High-Dimensional World · 2021-01-12T23:07:40.382Z · LW · GW

I'm not sure whether it's the standard view in physics, but Sean Carroll has suggested that we should think of locality in space as deriving from entanglement. (With space itself as basically an emergent phenomenon.) And I believe he considers this a driving principle in his quantum gravity work.

Comment by esrogs on Fourth Wave Covid Toy Modeling · 2021-01-10T08:38:55.516Z · LW · GW

> Based on what you've said, Rt never goes below one

You're saying nostalgebraist says Rt never goes below 1?

I interpreted "R is always ~1 with noise/oscillations" to mean that it could go below 1 temporarily. And that seems consistent with the current London data. No?

Comment by esrogs on Fourth Wave Covid Toy Modeling · 2021-01-08T05:29:50.658Z · LW · GW

> So you're saying that you think that a more infectious virus will not increase infections by as high a percentage of otherwise expected infections under conditions with more precautions, versus conditions with less precautions? What's the physical mechanism there?

Wouldn't "the fractal nature of risk taking" cause this? If some people are taking lots of risk, but they comply with actually strict lockdowns, then those lockdowns would work better than might otherwise be expected. No?

Comment by esrogs on ESRogs's Shortform · 2021-01-02T03:19:17.843Z · LW · GW

See also his recent paper, which seems like an important contribution towards using ML for symbolic / logical reasoning: Universal Policies for Software-Defined MDPs.

Comment by esrogs on ESRogs's Shortform · 2021-01-02T03:16:08.402Z · LW · GW

If I've heard him right, it sounds like Dan Selsam (NeuroSAT, IMO Grand Challenge) thinks ML systems will be able to solve IMO geometry problems (though not other kinds of problems) by the next IMO.

(See comments starting around 38:56.)


Comment by esrogs on SpaceX will have massive impact in the next decade · 2021-01-01T04:01:12.698Z · LW · GW

Sounds like you're thinking along the same lines as I was.

Comment by esrogs on SpaceX will have massive impact in the next decade · 2020-12-31T23:54:23.665Z · LW · GW

> When Tungsten rods are dropped from space onto earth they manage to store a lot of kinetic energy because they have a very high boiling point. Dropping tungsten rods from space can release as much energy as nuclear weapons without the nuclear fallout.

Doesn't that energy ultimately come from the propellant used to get the rods to orbit? Wouldn't it be more cost effective to just use the propellant itself as the explosive?

Is the advantage of the rod that it's easier to get it to the target than it would be to get the propellant there?
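(Rough numbers behind the question, using approximate values: kilogram for kilogram, an object moving at orbital speed does carry several times the energy of TNT, but that energy was bought by burning a much larger mass of propellant on the way up.)

```python
# Back-of-the-envelope: kinetic energy per kg of a rod de-orbited
# from low Earth orbit, compared with TNT. Approximate values.

v_leo = 7800.0                      # m/s, rough low-Earth-orbit speed
ke_per_kg = 0.5 * v_leo**2 / 1e6    # MJ per kg of rod
tnt_per_kg = 4.184                  # MJ per kg of TNT

print(f"rod: {ke_per_kg:.1f} MJ/kg, about {ke_per_kg / tnt_per_kg:.1f}x TNT by mass")
```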

Comment by esrogs on My Model of the New COVID Strain and US Response · 2020-12-27T18:34:55.557Z · LW · GW

> Rt_new=1.7 is an entirely different case. Suppressing it would require the sort of lockdown that would yield Rt=0.6 for the old strain, a number that has never been reached by any US state for any amount of time. I see no way in hell that Americans would agree to a lockdown much stricter than any we’ve had so far, especially after they’ve been promised that the worst is behind them.

As mentioned on Twitter, I don't buy this. I think we'd get more infections and deaths, but once hospitals are overwhelmed, society's negative feedback loop will kick in and we'll get R back close to 1.

I believe that lots of individuals could be a lot more cautious than they already are, and I don't think people will stand for hospitals being overwhelmed.

Comment by esrogs on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-22T17:43:09.095Z · LW · GW

> This is commonly said on the basis of his $1b pledge

Wasn't it supposed to be a total of $1b pledged, from a variety of sources, including Reid Hoffman and Peter Thiel, rather than $1b just from Musk?

EDIT: yes, it was.

> Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research are donating to support OpenAI. In total, these funders have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years.

Comment by esrogs on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-22T17:41:43.378Z · LW · GW

> the only entities that are listed as contributing money or loans are Sam Altman, Y Combinator Research, and OpenAI LP

Possible that he funded OpenAI LP? Or was that only created later, and funded by Microsoft and other non-founding investors?

Comment by esrogs on Extrapolating GPT-N performance · 2020-12-20T23:19:10.403Z · LW · GW

Ah, gotcha. Thanks!

Comment by esrogs on Homogeneity vs. heterogeneity in AI takeoff scenarios · 2020-12-20T19:14:02.326Z · LW · GW

Key point for those who don't click through (that I didn't realize at first) -- both types turned out to work and were in fact used. The gun-type "Little Boy" was dropped on Hiroshima, and the implosion-type "Fat Man" was dropped on Nagasaki.

Comment by esrogs on Homogeneity vs. heterogeneity in AI takeoff scenarios · 2020-12-20T18:59:40.700Z · LW · GW

> For those organizations that do choose to compete... I think it is highly likely that they will attempt to build competing systems in basically the exact same way as the first organization did


> It's unlikely for there to exist both aligned and misaligned AI systems at the same time

If the first group sunk some cost into aligning their system, but that wasn't integral to its everyday task performance, wouldn't a second competing group be somewhat likely to skimp on the alignment part?

It seems like this calls into the question the claim that we wouldn't get a mix of aligned and misaligned systems.

Do you expect it to be difficult to disentangle the alignment from the training, such that the path of least resistance for the second group will necessarily include doing a similar amount of alignment?

Comment by esrogs on Extrapolating GPT-N performance · 2020-12-20T17:57:44.930Z · LW · GW

Thanks. Still not sure I understand though:

> It's just a round-about way of saying that the upper end of s-curves (on a linear-log scale) eventually look roughly like power laws (on a linear-linear scale).

Doesn't the upper end of an s-curve plateau to an asymptote (on any scale), which a power law does not (on any scale)?
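(A quick numerical illustration of the apparent tension: an s-curve in log x saturates toward its ceiling, while a power law keeps growing without bound.)

```python
import math

def sigmoid_logx(x, midpoint=100.0):
    """An s-curve in log(x): approaches 1 as x grows, 0 as x -> 0."""
    return 1 / (1 + math.exp(-(math.log(x) - math.log(midpoint))))

def power_law(x, k=0.5):
    """A power law with exponent k: unbounded growth."""
    return x ** k

for x in [1e2, 1e4, 1e6]:
    print(f"x={x:.0e}: s-curve={sigmoid_logx(x):.4f}, power law={power_law(x):.0f}")
```

The s-curve is pinned near 1 over the last two rows while the power law grows a hundredfold, which is the plateau-vs-asymptote distinction the question is getting at.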

Comment by esrogs on Extrapolating GPT-N performance · 2020-12-20T07:47:55.131Z · LW · GW

> Note that, if the network converges towards the irreducible error like a negative exponential (on a plot with reducible error on the y-axis), it would be a straight line on a plot with the logarithm of the reducible error on the y-axis.

Was a little confused by this note. This does not apply to any of the graphs in the post, right? (Since you plot the straight reducible error on the y-axis, and not its logarithm, as I understand.)

Comment by esrogs on Why quantitative methods are heartwarming · 2020-12-16T05:46:12.068Z · LW · GW

Thank you for writing this! I especially resonate with the unsullied victory part.

Comment by esrogs on What are the best precedents for industries failing to invest in valuable AI research? · 2020-12-15T15:14:58.308Z · LW · GW

> Car companies have done too little too late to switch to making EVs.

See also: The Innovator's Dilemma.

Comment by esrogs on Jeff Hawkins on neuromorphic AGI within 20 years · 2020-12-13T07:17:51.379Z · LW · GW

> Oh God, a few quick points:

I don't like this way of beginning comments. Just make your substantive criticism without the expression of exasperation.

(I'm having a little trouble putting into words exactly why this bothers me. Is it wrong to feel exasperated? No. Is it wrong to express it? No. But this particular way of expressing it feels impolite and uncharitable to me.

I think saying, "I think this is wrong, and I'm frustrated that people make this kind of mistake", or something like that, would be okay -- you're expressing your frustration, and you're owning it.

But squeezing that all into just, "Oh God" feels too flippant and dismissive. It feels like saying, "O geez, I can't believe you've written something so dumb. Let me correct you..." which is just not how I want us to talk to each other on this forum.)

Comment by esrogs on Are index funds still a good investment? · 2020-12-06T18:45:54.037Z · LW · GW

> if we don't get it until 2040, then I'd be mildly surprised if any of them will be close enough to the cutting edge to be major players

Haven't done any rigorous analysis, but it's my impression that the field of tech giants has stabilized a bit in the last couple of decades. In that, once you become a tech giant now, you don't drop out anymore. (Compare Microsoft's trajectory post-Google to IBM's post-Microsoft.)

I expect there to also be new tech giants by 2040. But I'd be surprised if all the current trillion dollar market cap companies are irrelevant by then.

That's thinking about them as tech companies, specifically. Rather than evaluating their current AI plays. But I think the fact that the tech cos have so much engineering talent means they're not likely to just totally miss the AI trend.

Comment by esrogs on Are index funds still a good investment? · 2020-12-04T22:53:02.238Z · LW · GW

> Something like half of the companies that look overvalued do not look like they'll benefit much from AI. They look more like they were chosen for safety against risks such as recessions and other near-term risks. I'm thinking of companies such as Apple, Chipotle, Home Depot, Lululemon, Mastercard, and Guidewire.

This is a good point. Perhaps it's worth shorting (or underweighting) specifically these companies.

The "next few decades" is too long a time. Try evaluating what would have happened if you'd invested in companies in 2008 that looked like they would benefit from this decade's demand for electric vehicles, robocars, or solar.

Surely Tesla would have been on your list, either then or shortly thereafter (they IPO'd in 2010). And that would have been a good bet.

(I thought about buying in at the IPO, but figured I didn't know anything the rest of the market didn't. Then when they finally came out with the Model S and it was winning "Car of the Year" awards, and the stock price barely moved, I couldn't take it anymore and bought in in Jan 2013. I'm glad I did.)

> This suggests that expecting an industrial revolution-level change after 2030 is a poor reason for choosing index funds that are loaded with high p/e stocks.

Note that industrial revolution-level change is more significant than the transition to electric cars. Maybe it's not more significant than a transition to nanomachines? Unfortunately I wasn't old enough in 1986 to be able to say what the evidence looked like then, and how that compares to the evidence for pending transformative AI today. All I can say is that it looks to me like AI is coming. And I'd be quite surprised if none of Google, OpenAI, Microsoft, Facebook, Amazon, Tesla ends up being a major player.

Maybe that means I should go long some of those stocks (besides Tesla, which I'm already quite long), and short the rest of the S&P 500.

Comment by esrogs on Are index funds still a good investment? · 2020-12-04T02:10:12.402Z · LW · GW

I like Colby's article. And in general I put significant weight on these kinds of arguments. However, I also put significant weight on the hypothesis that Industrial Revolution-level economic change and growth is coming sometime in the next few decades.

And it seems like the very companies that appear overvalued by the metrics Colby is looking at would be the most likely (existing) ones to capture a disproportionate share of the returns from AI.

If there's some way to square the two views that suggests something other than just buying the market (indexing), I'd be interested to hear it.

Comment by esrogs on Book review: WEIRDest People · 2020-12-03T05:47:29.606Z · LW · GW

Hmm, I would have assumed that gentile Christians just never followed the practice (just like they didn't keep kosher or follow other Old Testament laws), and (fairly naturally) saw it as conflicting with monogamy.

Am I mistaken -- was levirate marriage actually ever widely practiced in the Western Christian church?

Comment by esrogs on Book review: WEIRDest People · 2020-12-01T22:48:37.583Z · LW · GW

> E.g. Deuteronomy 25:5-10 clearly describes levirate marriage as an obligation. How does unplanned exploration of cultural variation get from there to declaring levirate marriage a sin, while still treating the bible as the word of God?

Why is this more of a problem for unplanned exploration than for purposeful change?

Comment by esrogs on ESRogs's Shortform · 2020-12-01T21:48:03.855Z · LW · GW

Stripe is apparently tradable on Forge. The minimum size is $1MM though. Considering trying to put a group together...

Comment by esrogs on ESRogs's Shortform · 2020-12-01T21:37:49.841Z · LW · GW

Some that are hard to get into: Stripe, OpenAI, and SpaceX.

EDIT: Also cautiously increasing my stake in Arcimoto (FUV).

Comment by esrogs on Deconstructing 321 Voting · 2020-11-29T22:51:35.512Z · LW · GW

> If the trade-off were only ever exhibited by voting methods that are worse than score voting, then it would in some sense not be a real trade-off.

Ah, gotcha. Thanks!

Comment by esrogs on How can I bet on short timelines? · 2020-11-29T14:59:15.426Z · LW · GW

What if it's a very slow burn after the point of no return? Presumably you'd still want to live your life and spend on yourself and loved ones (and even altruistically, on more short-term causes). No?

Comment by esrogs on How can I bet on short timelines? · 2020-11-29T13:50:41.299Z · LW · GW

> Money is only valuable to me prior to the point of no return

Because money is only useful to you for preventing us from reaching that point of no return?

Comment by esrogs on Deconstructing 321 Voting · 2020-11-29T04:08:20.710Z · LW · GW

> Approval and score voting are both 100% clone immune and 100% center-squeeze immune. Score voting is literally as good as you can get (at least in VSE terms!) if there's no strategic voting, and devolves into approval voting under strategic voting. So, if there is some kind of trade-off between center squeeze and clone problems, it must be in the territory of methods that are better than approval voting (even given strategic voters).

I don't understand the conclusion here. Score and approval don't exhibit the trade-off. And some other methods do. But what do you mean about it specifically being in the territory of methods that are better than approval that the trade-off exists?

Comment by esrogs on Sunzi's《Methods of War》- Introduction · 2020-11-19T07:42:17.395Z · LW · GW


> These things determine victory and defeat.

Is this translation right? Looks like the original was much longer.

Comment by esrogs on Alex Ray's Shortform · 2020-11-08T20:47:27.307Z · LW · GW

> COVID has increased many states' costs, for reasons I can go into later, so it seems reasonable to think we're much closer to a tipping point than we were last year.
>
> As much as I would like to work to make the situation better I don't know what to do. In the meantime I'm left thinking about how to "bet my beliefs" and how one could stake a position against Illinois.

Is the COVID tipping point consideration making you think that the bonds are actually even worse than the "low quality" rating suggests? (Presumably the low ratings are already baked into the bond prices.)

Comment by esrogs on Generalize Kelly to Account for # Iterations? · 2020-11-02T23:08:37.799Z · LW · GW

> Several prediction markets have recently offered a bet at around 62¢ which superforecasters assign around 85% probability. This resulted in a rare temptation for me to Kelly bet. Calculating the Kelly formula, I found that I was supposed to put 60% of my bankroll on this.

Is this assuming you take the 85% number directly as your credence?
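(If so, the quoted 60% does check out. Buying a binary contract at price c that pays $1 on a win gives net odds b = (1 − c)/c, and Kelly says to stake f* = (bp − (1 − p))/b of your bankroll:)

```python
# Kelly fraction for buying a binary prediction-market contract
# at `price` with win probability `p` (contract pays 1 on a win).

def kelly_fraction(p, price):
    b = (1 - price) / price           # net odds received on the wager
    return (b * p - (1 - p)) / b      # optimal fraction of bankroll

f = kelly_fraction(p=0.85, price=0.62)
print(f"Kelly fraction: {f:.1%}")
```

With p = 0.85 and a 62¢ price this comes out to about 60.5% of bankroll, matching the figure in the quote.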

Comment by esrogs on Biextensional Equivalence · 2020-10-28T21:50:01.490Z · LW · GW

This is helpful. Thanks!