Posts

Status as a Service (Done Quick) 2021-01-16T23:25:13.677Z
Reward large contributions? 2020-10-21T16:26:31.213Z
What do drafts look like? 2020-06-15T16:02:16.585Z
Visual Mental Imagery Training 2013-02-19T22:21:28.672Z
Taking into account another's preferences 2012-12-10T05:06:00.819Z
Cooperative Surplus Splitting 2012-09-19T23:56:52.149Z
Meetup : Board Games "Seattle" 2012-08-10T02:55:00.199Z
Meetup : Queueing and More 2012-03-07T20:44:36.693Z
Meetup : Seattle Board Games 2012-01-05T21:03:59.547Z
Meetup : Seattle: Intro to Bayes' Theorem 2011-09-24T21:15:02.606Z
Anyone else work at Microsoft? 2011-06-08T19:41:03.314Z

Comments

Comment by GuySrinivasan on D&D.Sci Pathfinder: Return of the Gray Swan Evaluation & Ruleset · 2021-09-10T03:11:39.727Z · LW · GW

Both IMO. It takes guts and grit to create and clean up something like this and you did it.

Comment by GuySrinivasan on D&D.Sci Pathfinder: Return of the Gray Swan Evaluation & Ruleset · 2021-09-09T19:59:27.513Z · LW · GW

https://docs.google.com/spreadsheets/d/1EDanxmrtgunqLq7BclqEOt79Ce7uJpmGULvZICYVdC8/edit?usp=sharing

Like this? If people viewing these comments don't have a better plan, I'll PM the link to everyone I see on past D&D.Sci posts. Feel free to edit however you'd like.

Comment by GuySrinivasan on D&D.Sci Pathfinder: Return of the Gray Swan Evaluation & Ruleset · 2021-09-09T15:50:37.968Z · LW · GW

I tried to make this scenario more involved than past D&D.Sci scenarios, and to ensure that there were multiple things you could figure out rather than just one way to succeed.  Given that this challenge ended up with noticeably fewer answers than previous ones, it looks like I went a bit too far with the complexity.  Interested in feedback on this.  What level of complexity do you want to see?  Were you planning to do this challenge until you got scared off/gave up?  What did you like and what did you dislike about this scenario?

 

I have participated in (almost?) all past D&D.Sci scenarios, but not this one. Here are the reasons:

  • complexity - it was very obvious upfront that spending "a little" time would not feel satisfying to me (whether that's true or not I don't know). In particular I guessed that while there were plenty of patterns to find, the true satisfying thing (again, for me) here would be tackling the survivorship bias, which didn't seem very tractable without lots of prior work
  • activation energy - I didn't really reach a point at any time when I actually felt motivated to go get the data into a nicely analyzable state. Contributors: 2D data, hex vs square, routes. (some things were nicely analyzable from the start; I didn't want to spend time unless I knew I could look at everything though)
  • other commitments - I said I'd write my own, so I've already got D&D.Sci [stuff] going on
  • other other commitments - happened to have a lot going on, exacerbating all of the above

My ideal complexity is probably just barely above the average past complexity. A note: I have written many puzzles for many puzzle events, and watched others write them, and an extremely common mistake is to add complexity until it feels about right to you. But since you're the one writing the puzzle, you've already internalized many of the inferences and approaches, so you misjudge how complex it actually is, and how complex it feels, to people not already involved with the puzzle. And unless you're careful about getting feedback, you might not even notice, because some people will engage, get past the activation energy hump, love it, and tell you it was great. Survivorship bias. :) These aren't puzzles, but they feel to me a lot like "data analysis puzzles".

All that said, thank you for making this! It looks like several people did engage fairly deeply, and if not for commitments I might well have been among them! I enjoyed most of your choices, after reading the solution, and those that I didn't, I think I'd root-cause to "adds more complexity". And reading yours caused me to finalize a decision I was making for my own in favor of the way that makes the data easier to "just start using".

Comment by GuySrinivasan on Training My Friend to Cook · 2021-08-29T22:07:31.184Z · LW · GW

I'm a pretty good cook who has sauteed and enjoyed asparagus. :D

Comment by GuySrinivasan on The Validity of Self-Locating Probabilities (Pt. 2) · 2021-08-25T03:28:12.284Z · LW · GW

(That also means, of course, that when I say I choose 3/4 and 1/2 and then 2/3, I am smuggling in information; implicitly assuming the reward structure for "getting the right answer". If I'd rather be right all the time if I'm the original and don't care at all if I'm the clone then I can answer "probability 1.0 that I'm the original!" and make out like a bandit. In that sense, yes, all probabilities are meaningless, not just self-locating ones, until you know what decisions are being made based on them.)

Comment by GuySrinivasan on The Validity of Self-Locating Probabilities (Pt. 2) · 2021-08-25T03:11:30.329Z · LW · GW

I think the comments on https://www.lesswrong.com/posts/YyJ8roBHhD3BhdXWn/probability-is-a-model-frequency-is-an-observation-why-both are pretty good, btw. They really showcase how all the hand-waving goes away as soon as you specify the decisions the original/clone will be making based on their degree of belief that they're the original.

Comment by GuySrinivasan on The Validity of Self-Locating Probabilities (Pt. 2) · 2021-08-25T03:03:09.445Z · LW · GW

Probability is the measure of your uncertainty.

If the procedure is to flip a coin, clone me on tails but not on heads, separate the original and the clone if needed, then let me (or us) wake up, then when I wake up I will think the probability that I am the original is 3/4 and that the probability the coin landed on heads is 1/2. If I am then informed that I am the original, I will think that the probability the coin landed on heads is 2/3.

But that can't be right. For this experiment, the Original and the Clone do not have to be woken up at the same time. The mad scientist could wake the Original first. In fact, the coin can be tossed after that. For dramatic effect, after telling you you are the original, the mad scientist can give you the coin and let you toss it. It seems absurd to say the probability for Heads is anything but 1/2. Why does the probability for Heads remain unchanged after learning you are the Original?

I don't understand the objection. Yes, I say 1/2, and yes, anything but that seems absurd. This coinflip isn't correlated with whether I was cloned; why should its probability depend on whether I am the original or the clone? In the first situation I believe the pre-experiment coinflip has 50% probability of having landed heads, then after learning some information positively correlated with the actual result being heads, I update to 67% probability of the coin having landed heads. In the dramatic situation I believe the post-experiment coinflip has 50% chance of heads and never learn any information correlated with the result. Zero contradiction here.
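
For concreteness, the 3/4, 1/2, and 2/3 above can be reproduced by enumeration. This is my own sketch, assuming the counting implicit in the comment: within a world, you are equally likely to be any awakened observer.

```python
from fractions import Fraction

# Worlds: (coin result, awakened observers). Each world has prior 1/2.
# Heads -> no cloning, only the original wakes; tails -> original + clone.
worlds = [
    ("heads", ["original"]),
    ("tails", ["original", "clone"]),
]

# Weight each (world, observer) pair by prior / (# observers in that world):
# within a world, you are equally likely to be any awakened observer.
weights = {
    (coin, obs): Fraction(1, 2) / len(observers)
    for coin, observers in worlds
    for obs in observers
}

total = sum(weights.values())
p_original = sum(w for (c, o), w in weights.items() if o == "original") / total
p_heads = sum(w for (c, o), w in weights.items() if c == "heads") / total
p_heads_given_original = weights[("heads", "original")] / (p_original * total)

print(p_original)              # 3/4
print(p_heads)                 # 1/2
print(p_heads_given_original)  # 2/3
```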

Comment by GuySrinivasan on The Validity of Self-Locating Probabilities · 2021-08-24T03:01:47.848Z · LW · GW

examples with solid numbers and bets

Well, yes, sorry for the snark, but... obviously! If you know how to make it concrete with numbers instead of wishy-washy with words, please do so!

Comment by GuySrinivasan on D&D.Sci August 2021 Evaluation and Ruleset · 2021-08-24T00:46:05.243Z · LW · GW

I have an idea and commit to implementing it in September.

Comment by GuySrinivasan on D&D.Sci August 2021 Evaluation and Ruleset · 2021-08-24T00:44:04.718Z · LW · GW

Thank you for all your efforts so far! These have been enjoyable and instructive. As expected, I enjoyed this one. :D

I was a little worried about Doom, but not enough, it seems. And I think it was entirely fair to make Doom work as it does. The thematic naming of the rest is a plenty suggestive clue.

Let me jot down what I remember learning:

  • the autocorrelation coefficient of random walks tends to hover around 0.42
  • it's hard to get large coefficients accidentally, so tossing tons of series together and asking for any pairs that see nontrivial correlation is a decent discovery method
  • once again, if fitting parameters to a sum of two functions both of which have a constant term, remove one of the terms so you don't get confused by huge opposite values :D
  • once again, pairwise scatterplots are super helpful for just eyeballing obvious patterns
  • sums of sines overfit just as well as polynomials, in their domain. I knew this only in my head, not viscerally until now

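
As a sanity check on the 0.42 observation (my own sketch, not from the scenario): sample correlations between pairs of independent random walks are routinely far from zero, which is exactly why the pairwise-correlation discovery method in the second bullet needs a high bar before a correlation counts as nontrivial.

```python
import random
import statistics

def random_walk(n, rng):
    """Cumulative sum of n i.i.d. +/-1 steps."""
    x, out = 0, []
    for _ in range(n):
        x += rng.choice((-1, 1))
        out.append(x)
    return out

def corr(xs, ys):
    """Pearson correlation of two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
samples = [
    abs(corr(random_walk(300, rng), random_walk(300, rng)))
    for _ in range(400)
]
print(f"mean |corr| between independent walks: {statistics.fmean(samples):.2f}")
```

The walks here are completely independent, yet the typical absolute correlation is large; only surprisingly high correlations are evidence of a real relationship.
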
Comment by GuySrinivasan on The Validity of Self-Locating Probabilities · 2021-08-23T20:32:34.529Z · LW · GW

(to be clear, everything I've said also flows from the principle of indifference; if you cannot tell the difference between N states of the world, then the probability 1/N describes your uncertainty about those N states)

Comment by GuySrinivasan on The Validity of Self-Locating Probabilities · 2021-08-23T20:19:46.534Z · LW · GW

Okay, let me try again, then.

I am undergoing this experiment, repeatedly. The first time I do, there will be two people, both of whom remember prepping for this experiment, both of whom may ask "what is the probability I am the Original?" afterwards, one of whom will unknowingly be the Original, one of whom will unknowingly be the Clone. Subjectively, perhaps I was the Original; in that case if I ask "what is the probability I am the Original?" ... okay I'm already stuck. What's wrong with saying "50%"? Sure, there is a fact of the matter, but I don't know what it is. In my ignorance, why should I have a problem with saying 50% I'm the original? Certainly if I bet with someone who can find out the true answer I'm going to think my expectation is 0 at 1:1 odds.

But you say that's meaningless. Fine, let's go with it. We repeat the experiment. I will focus on "I". There are 2^n people, but each of them only has the subjective first-person perspective of themselves (and is instructed to ignore the obvious fact that there are another 2^n-1 people in their situation out there, because somehow that's not "first person" info?? okay). So anyway, there's just me now, after n experiments. A thought pops up in my head: "am I the Original?" and ... well, and I immediately think there's about a 1/2^n chance I'm the Original, and there's a 50% chance I'm the first Clone plus n-1 experiments, and there's a 25% chance I'm the first Clone of the first Clone plus n-2 experiments and a 25% chance I'm the second Clone of the Original plus n-2 experiments and etc.

I have no idea what you mean by "For your experience, the relative proportion of "I am the Original" has no reason to converge to any value as the iteration increases." Of course it does. It converges to 0% as the number of experiments increases, and it equals 1/2^n at every stage. Why wouldn't it? You keep saying it doesn't but your justification is always "in first-person perspective things are different" but as far as I can see they're not different at all.

Maybe you object to me thinking there are 2^n-1 others around? I'm fine with changing the experiment to randomly kill off one of the two after each experiment so that there's always only one around. Doesn't change my first-person perspective answers in the slightest. Still a 1/2^n chance my history was [not-cloned, not-cloned, not-cloned, ...] and a 1/2^n chance my history was [cloned, not-cloned, not-cloned, ...] and a 1/2^n chance my history was [not-cloned, cloned, not-cloned, ...] and a ... etc.

Comment by GuySrinivasan on The Validity of Self-Locating Probabilities · 2021-08-23T14:23:31.545Z · LW · GW

:facepalm: I simplified too much, thank you. The second phrasing is what I meant; "it's a summary of knowledge, not a causal mechanism". The first should have illustrated what breaks when substituting a summary for the mechanism, which does require something other than just looking at the summaries with nothing else changed. :D

I guess, let me try to recreate what made me write [the above] and see if I can communicate better.

I think what's going on is that dadadarren is saying to repeat the experiment. We begin with one person, the Original. Then we split, then split each, then split each again, etc. Now we have 2^n people, with histories [Original, Original, ..., Original] and [Original, Original, ..., Clone] and etc. There will be (n choose n/2) people who have participated in the experiment n times and been the Original n/2 times; they have subjectively seen that they came out the Original 50% of the time. But there have also been other people with different subjective impressions, such as the one who was the Original every time. That one's subjective impression is "Original 100%!".

But what happens if each person tries to predict what will happen next by using their experimental results (plus maybe a 50% prior) as an expectation of what they think will happen in the next experiment? Then they'll be much more wrong, collectively, than if they stuck to what they knew about the mechanism instead of plugging in their subjective impression as the mechanism. So even the "Original n/(n+1)!" person should assign probability 50% to thinking they're the Original after the next split; their summary of past observations has no force over how the experiment works, and since they already knew everything about how the experiment works, it doesn't give them any evidence to actually update on.

Comment by GuySrinivasan on The Validity of Self-Locating Probabilities · 2021-08-23T04:56:59.466Z · LW · GW

What? Yeah, still missing something.

You know that "probability" doesn't mean "a number which, if we redid the experiment and substituted 'pick randomly according to this number' instead of the actual causal mechanism, would give the same distribution of results"? That it's a summary of knowledge, not a causal mechanism?

(I'm still trying to figure out where I think you're confused; from my perspective you keep saying "obviously X, that was never in question, but since X is fundamentally different than X, we can't just assume the same result holds". Not trying to make fun, just literally expressing my confusion about the stuff you're writing. In any case, you're definitely right about not being able to communicate what you're talking about very well ;) )

Comment by GuySrinivasan on The Validity of Self-Locating Probabilities · 2021-08-22T00:44:54.460Z · LW · GW

I am convinced that you are confused but I have no idea how to figure out exactly what you're confused about. My best guess is that you don't agree that "a quantification of your uncertainty about propositions" is a good description of probabilities. Regardless, I think that e.g. Measure's objection is better phrased than mine.

Comment by GuySrinivasan on The Validity of Self-Locating Probabilities · 2021-08-22T00:28:17.162Z · LW · GW

Justifications for 50% being the correct answer:

  • if this happened lots of times, and you answered randomly, you would be right roughly 50% of the time
  • if you tried to make a wager, a bookie would give you near-1:1 odds
  • 50% is the correct answer to all of the equivalent questions which you accept are probabilities

:shrug:

Comment by GuySrinivasan on The Validity of Self-Locating Probabilities · 2021-08-21T03:32:51.164Z · LW · GW

I don't understand this argument.

The question is asking about a particular person: "I". This reference is inherently understood from my perspective. "I" is the one most immediate to the subjective experience. It is not identified by any objective difference or underlying mechanics. "Who I am" is primitive. There is no way to formulate a probability for it being the Original or the Clone.

This paragraph. Here's where you lost me.

What if the question was "what is the probability that I am the causal descendant of the Original, that in a world without the mad scientist I would still exist?" Is that different from "what is the probability that I am the Original?" If so, what's the difference? If not, why is one meaningful and the other not?

Comment by GuySrinivasan on D&D.Sci August 2021: The Oracle and the Monk · 2021-08-20T15:44:38.962Z · LW · GW

I've got a very nice, but not 100% precise, fit to Solar with a 28-day sine plus a 9-day sine. Barring its 4 jumps of course. I don't see any particular reason to guess it will be away-from-normal on T=384, and the 28+9 predicts Solar=44 then, with a +-5 (closer to +-4 or 3).
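
Because a sine with unknown phase at a fixed period is just a linear combination of a sin and a cos term, a 28-day-plus-9-day fit like the one described reduces to ordinary linear least squares. A minimal sketch on synthetic data (the real Solar series isn't reproduced here, so the 44 +-5 figure isn't checked):

```python
import math

def fit_sines(ts, ys, periods):
    """Least-squares fit of y ~ sum over periods of (a*sin + b*cos) + const.

    With the periods fixed, unknown amplitude-and-phase per period is a
    linear model in sin/cos coefficients, solved via the normal equations.
    """
    def row(t):
        r = []
        for p in periods:
            w = 2 * math.pi * t / p
            r += [math.sin(w), math.cos(w)]
        return r + [1.0]

    A = [row(t) for t in ts]
    k = len(A[0])
    # Normal equations: (A^T A) coef = A^T y
    M = [[sum(a[r] * a[c] for a in A) for c in range(k)] for r in range(k)]
    v = [sum(a[r] * y for a, y in zip(A, ys)) for r in range(k)]
    # Gaussian elimination with partial pivoting, then back-substitution.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = M[r][col] / M[col][col]
            for c in range(col, k):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        s = v[r] - sum(M[r][c] * coef[c] for c in range(r + 1, k))
        coef[r] = s / M[r][r]
    return coef, row

# Synthetic stand-in series with known 28-day and 9-day components.
def target(t):
    return 40 + 10 * math.sin(2 * math.pi * t / 28) + 4 * math.cos(2 * math.pi * t / 9)

ts = list(range(1, 380))
coef, row = fit_sines(ts, [target(t) for t in ts], periods=(28, 9))
pred = sum(c * x for c, x in zip(coef, row(384)))
print(f"prediction for T=384: {pred:.3f} (true {target(384):.3f})")
```

On real data the same call would simply take the observed Solar values as `ys`, with the 4 jump days excluded first.
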

So we have a choice between:
- Ocean+Earth: 74-80, probably 77, guaranteed no demon
- Solar+Doom: [39,49]+[28,33] + maybe a Solar spike + maybe a Doom spike + maaaaybe a Doom dip; probably 74, tiny chance of a demon, small chance of ritual failure, small chance of big success

The conservative choice is to just go with Ocean+Earth. But there is another option, of hoping for a Solar spike. These have been increasing in magnitude and the last one provided 20-25 extra mana. If Morgan can prepare the ritual and continue to monitor flows, he could check for whether Solar is significantly above baseline (we'd give him the decision procedure) and then either go ahead with the ritual or cancel.

I think there is a low chance of a spike, without further insights. Lower than 10%. Higher than 1%, even 2%. So roughly 4+%, with a tiny helping of Doom spike; call it 5%. And even if there's a spike, maybe it's only 10-15 or 20-25 rather than something larger. Let's say a 0.75% chance of a large spike, a 1.5% chance of a 20-25 spike, and a 2.5% chance of a 10-15 spike.

My answer to Morgan is that he can have a guaranteed 74-80 mana, or a chance of 84-95, or a chance of 94-105, or a chance of (making this up) 104-130. I will ask at what chance he would prefer each of the other alternatives to the guarantee, then convert that to an elicited preference, multiply, add, compare, and give him either the Ocean+Earth combo or the Solar+Doom combo+procedure. If he "chooses" Solar+Doom and then balks at the procedure, I will relent and give him Ocean+Earth.
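
The multiply-add-compare step can be sketched with the spike probabilities guessed above; the mana midpoints are my own illustrative placeholders, not values from the data.

```python
# Hypothetical outcome table for the Solar+Doom plan. Probabilities are the
# guesses stated above; the mana midpoints are illustrative placeholders.
spike_outcomes = [
    (0.0075, 117.0),  # large spike, ~104-130
    (0.0150, 99.5),   # 20-25 spike, ~94-105
    (0.0250, 89.5),   # 10-15 spike, ~84-95
]
p_no_spike = 1 - sum(p for p, _ in spike_outcomes)
ev_solar_doom = p_no_spike * 74 + sum(p * mana for p, mana in spike_outcomes)

ev_ocean_earth = 77  # guaranteed 74-80, most likely 77

print(f"Ocean+Earth EV: {ev_ocean_earth}")
print(f"Solar+Doom  EV: {ev_solar_doom:.2f}")
```

On raw expected value the guaranteed combo wins under these numbers; the point of eliciting Morgan's preferences is that he may value the upside tail beyond its expected value.
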

Comment by GuySrinivasan on D&D.Sci August 2021: The Oracle and the Monk · 2021-08-20T06:16:14.982Z · LW · GW

So Solar starts with a 28-day cycle that looks vaguely like Ocean+Earth's shape. But it starts getting little distortions, and then some really big ones around day 120-220. Then it's fairly tame (looks like original!) except for a big spike around day 265, and a really big spike around day 367 which has just died down. Seems like the thing to do is try to model the "regular" variation, subtract it out, and analyze the remains. I will try averaging the normalish looking cycles.

Natural cycle peaks appear to be on days [20, 48, 76, 104, 132, 160, 188, 216, 244, 272, 300, 328, 356, 384]; T=384 is a natural solar peak. Normalish-between-peaks subtracted out... leaves, ah ha, a very clear 9-day cyclic thing. To investigate later, but that's why the Solar-Lunar looked 9-dayish.

Comment by GuySrinivasan on D&D.Sci August 2021: The Oracle and the Monk · 2021-08-20T05:45:02.905Z · LW · GW

Also! Solar's initial pattern, presumably before the supernova, looks similar to Ocean+Earth's, but on a different scale, 14 rather than 22 days it looks like? Or maybe 13?? And Solar-Lunar has a 9-period, but we know Lunar is Solar shifted 14, so what does that even mean.

Solar looks like the important thing to figure out, here. Our realistic options are:
- Choose Ocean and Earth. No demon, guaranteed ritual, strength 74-80, most likely 77.
- Choose Solar plus Doom. If we can confidently predict Solar as large, with Doom contributing 30ish, Solar would be 50+ (because otherwise why not stick with O+E), so Doom's black swans won't exceed Solar unless they're like really black. No demon, extremely likely ritual, strength: ?? (how big will Solar be?).

Comment by GuySrinivasan on D&D.Sci August 2021: The Oracle and the Monk · 2021-08-20T05:25:16.014Z · LW · GW

Fixed the Ocean+Earth base pattern above. Also, Earth is nearly 4 higher than Ocean on average; I suspect that Ocean+Earth follows that base pattern and then Earth gets +0-6 in some way I haven't divined. I have not found any predictors of how much of the total goes to Ocean vs Earth on any given day; I suspect there may be a cycle of some sort which is significantly faster than Morgan's measurement period, which makes the shares appear uniform random.

Correlations near 0.42 suggest Lunar and Flame may be random walks? Highly speculative.

Perhaps important: Morgan can act on data collected during the next 9 days! If the ritual setup has the potential for Evil Demonhood, and more data will determine whether it will go awry, then he can just cancel and not do the ritual. This is worth doing only if we think that there is an enormously great result available with high probability but with Evil Demonhood as a small chance, discernable later, and if the next best alternative is something like a 50/50 of fine result and ritual fizzling.

How about actual values on T=384?
Breeze, Ash, Flame, Spite, Void: smol
Lunar: 16
Solar: ?? plz find
Ocean: uniformish random but with some min/max cutoffs up to 74
Earth: uniformish random but with some min/max cutoffs up to 74, plus 0 to 6, O+E=74
Doom: almost certainly 28-33, probably 30, but with black swans

Comment by GuySrinivasan on Thinking of our epistemically troubled friend · 2021-08-17T17:30:43.093Z · LW · GW

Did you ever spell out the meta and ask them? "It seems to me like you believe false things too easily. Obviously it doesn't seem like that to you. Is there a test we could do that would convince me you're right or convince you I'm right depending on how the test turned out? Like, maybe you could pick 20 beliefs that aren't mainstream and guess how many will hold up if we investigate closely, together, and I could also guess, and if you guess something like 18 and I guess something like 4 and we investigate and jointly decide the right answer was 15 then I admit I have a problem or if we jointly decide the right answer was 5 then you admit you have a problem?"

In practice I could see this working quite well with many of my friends who ... don't believe false things too easily. And it would be a "look at you weirdly and walk away" non-starter for the folks I know who I think do believe false things too easily. So ??? but I'm still curious whether you did try direct communication about the meta rather than a grab bag of concretes.

Comment by GuySrinivasan on D&D.Sci August 2021: The Oracle and the Monk · 2021-08-17T16:25:55.187Z · LW · GW

Sweet, I didn't get much of a chance last weekend. :)

Comment by GuySrinivasan on D&D.Sci August 2021: The Oracle and the Monk · 2021-08-17T06:33:43.956Z · LW · GW

Lunar[t] = 75 - Solar[t-14]

Ash[t] = floor(Flame[t-1]/4)

Ocean+Earth equals [56,57,58,59,60,61,62,63,64,65,66,65,64,63,62,61,60,59,58,57,56,55] repeating starting with day 1, plus up to 6 (so 7 possible values total) on any given day; guessing the residuals are a predictable cycle but haven't looked yet. Edit: untrue, different base pattern. Edit2: Fixed base is [56,58,61,64,65,65,66,69,72,74,75,74,72,69,66,65,65,64,61,58,56,55]

Doom takes on one of 6 values in a predictable length-8 pattern except for 4 deviants.

Comment by GuySrinivasan on D&D.Sci August 2021: The Oracle and the Monk · 2021-08-16T15:17:51.342Z · LW · GW

Starting my thread. Haven't gotten much chance to dig in yet; hopefully will before the week is out.

Spite is 100% deterministic. Add these four components to get its value on any day. No supernova effects. Will be 0 on T=384.
spite1 = cycle([0,7,0,0,0])
spite2 = cycle([0,0,6,0,0,0,0,0,0,0,0,0,0,0])
spite3 = cycle([13,0,0,0])
spite4 = cycle([0,0,0,0,18,0,0])
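
Those four components can be summed directly. One assumption in this sketch: day t reads index t-1 of each pattern (i.e. day 1 is index 0), which is the indexing under which every component lands on 0 at T=384.

```python
# The four deterministic Spite components as repeating patterns.
# Assumption: day t reads index (t - 1) of each cycle, i.e. day 1 is index 0;
# that's the indexing under which all four components are 0 on T=384.
SPITE_CYCLES = [
    [0, 7, 0, 0, 0],
    [0, 0, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [13, 0, 0, 0],
    [0, 0, 0, 0, 18, 0, 0],
]

def spite(t):
    """Spite mana on day t: sum of the four cyclic components."""
    return sum(c[(t - 1) % len(c)] for c in SPITE_CYCLES)

print(spite(384))  # 0
```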

Solar and Lunar are highly related. Solar is always more than Lunar. Solar - Lunar has a strong 9-day cycle; subtracting out its mean leaves a fairly strong pattern of outliers where Solar is even stronger, predicting that Solar will be "even stronger" on T=384. Needs more scrutiny.

Comment by GuySrinivasan on Two AI-risk-related game design ideas · 2021-08-05T16:12:09.951Z · LW · GW

Summon Greater Player is hilarious. I love it. Bootstrap by adding AIs who cheat?

Comment by GuySrinivasan on Happy paths and the planning fallacy · 2021-07-22T20:46:49.990Z · LW · GW

Converges in the limit, we're all good here.

Comment by GuySrinivasan on Ask Not "How Are You Doing?" · 2021-07-22T00:52:09.224Z · LW · GW

I hate the greeting.

Sometimes, an honest answer feels like a social gaffe.

Depending on your situation, it's more like an honest answer is always in fact a social gaffe.

Comment by GuySrinivasan on Happy paths and the planning fallacy · 2021-07-19T22:40:43.055Z · LW · GW

In software development, I take joy from (honestly) reporting credible intervals that are far too large for anyone's comfort.

"Does it help?" you ask.

Well, joy is a good thing.

Comment by GuySrinivasan on Happy paths and the planning fallacy · 2021-07-19T00:09:39.553Z · LW · GW

In our house we heuristic over this problem by, and I quote, "accounting for the planning fallacy twice". This works very well for us in practice.

Comment by GuySrinivasan on Covid 7/15: Rates of Change · 2021-07-18T15:55:34.691Z · LW · GW

Do you have a tldr on why we might think anti-vaxxers were right for the right reasons? Seems like the default positions are "vaccines have obviously worked in the past" and "we're pretty sure they're gonna work in very similar ways today", and I haven't seen anything that changes my opinion much about either of those defaults.

Comment by GuySrinivasan on The Mountaineer's Fallacy · 2021-07-18T02:38:32.162Z · LW · GW

Mountaineer's Fallacy Fallacy - working on the real problem ineffectually when the right move is working on adjacent problems that might be pointless in order to chip away at the edges until the key components of solving the real problem are actually visible and tractable.

But yeah the OG MF is common.

Comment by GuySrinivasan on A (somewhat beta) site for embedding betting odds in your writing · 2021-07-02T04:27:32.324Z · LW · GW

I love this and intend to use it to flex at work until and unless HR tells me to stop because something something gambling something.

Comment by GuySrinivasan on A (somewhat beta) site for embedding betting odds in your writing · 2021-07-02T04:26:27.187Z · LW · GW

The signup was slightly confusing. After entering a user name and password, the bit about adding an email said "but first, log in!" and there were two buttons, a "log in" and a "sign up" button. Clicking "log in" said I couldn't because I needed to sign up. So I clicked "sign up" and then the hiccup was over.

Comment by GuySrinivasan on D&D.Sci(-Fi) June 2021 Evaluation and Ruleset · 2021-06-30T14:43:31.979Z · LW · GW

This was a fun one! Post-mortem:

Biggest miss is that I failed to guess that the distributions were identical up to a constant person/resonance multiplier.  Actually I did guess that initially, but decided it probably wasn't true, which is why it's my biggest miss. I started thinking it when Maria hit more x1s than x0s on Gamma while Janelle hit more x0s than x1s, leading me to think that there were more person-dependent factors than just one. IIRC the nail in the coffin of that theory was looking at Epsilon resonance. It was clear that Epsilon was non-random and inspecting the overlapping areas of the curve showed (e.g.)

A     | Janelle | Maria | multiplier
0.28  | 0.17    | 0.21  | 0.86
1.28  | 0.20    | 0.26  | 0.77

Later, I lowered my confidence in Will's Epsilon prediction because I knew our instruments have limited precision. I didn't connect that with "maybe 0.77 vs 0.86 isn't that far off given imprecision!".

Also I think I was modeling the precision incorrectly, probably. I took "for example, since they say Earwax has an amplitude of 3.2 kCept, you can be 100% sure the true value is between 3.15 and 3.25 kCept" to mean that every value could be plus or minus 0.05, but I think now it actually meant that values were rounded to the nearest digit shown, so a listed value of 0.28 kCept was not between 0.23 and 0.33, but rather between 0.275 and 0.285?
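
The two readings differ by a factor of ten in interval width: for a value reported to d decimal places, rounding implies a half-width of 0.5 × 10^-d, not a fixed ±0.05.

```python
def rounding_interval(reported, decimals):
    """Interval of true values that round to `reported` at `decimals` places."""
    half = 0.5 * 10 ** -decimals
    return reported - half, reported + half

lo, hi = rounding_interval(0.28, 2)
print(lo, hi)  # roughly 0.275 to 0.285, not 0.23 to 0.33
```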

Biggest hit, I think, was correctly determining Janelle's actual chances: I said 25% win, 39% double; actual was 37% win, 30% double. Method was seeing graphs that were clearly 5 linear trends by power, estimating the zero, estimating their slopes, and noticing the multiplier.

Comment by GuySrinivasan on D&D.Sci(-Fi) June 2021: The Duel with Earwax · 2021-06-24T16:23:04.816Z · LW · GW

I completely forgot that since Earwax's actions are unprecedented, we're not entirely confident of its amplitude remaining constant either! Janelle's Gamma has basically the same characteristics at various amplitudes. Will's Epsilon works significantly less often if the new amplitude becomes smaller. A bit more reason to stick with Janelle here.

I think the only big remaining two things that could convince me to switch to Will are (a) figuring out a time-based/less-significant-digits-based pattern which tells us that Janelle's Gamma will have a poor k today/at 3.2 amplitude, or (b) figuring out a simple theory giving the cubic (or whichever) for Maria and Janelle that predicts Will's Epsilon will always win, removing the model uncertainty and the precision uncertainty.

Comment by GuySrinivasan on D&D.Sci(-Fi) June 2021: The Duel with Earwax · 2021-06-24T05:56:03.807Z · LW · GW

Using Python in a Jupyter notebook. Seaborn has a fantastic little function to quickly see pairwise graphs, it's great to begin with. Here's what Maria looks like after sns.pairplot(df[df.name=='Maria N.']): https://drive.google.com/file/d/12Q_11ZTPnyak87EXO89Vbg4am3TXc4px/view?usp=sharing

Comment by GuySrinivasan on D&D.Sci(-Fi) June 2021: The Duel with Earwax · 2021-06-24T05:49:17.607Z · LW · GW

Summary: Send Janelle, using Gamma Resonance. Great chance of winning, maybe 60-70%, and half the time you win you double also. Honorable mention to Will's Epsilon Resonance, which if we had more data or a better theory we might be convinced could win 100% of the time, but we just don't have the data or theory to justify it yet.

Alpha: Maria does not show any real amplitude-dependent EFS, and no potential pilot shows 3.2+ EFS. Reject.

Beta: No amplitude-dependence. Maria and Janelle look pretty similar, but all the trainees do far less well, so this is person-dependent. As such, Janelle's chances of winning are probably best estimated using only her data? Either way, I get somewhere between a 2% and 3.5% chance of winning for Janelle and nothing for trainees. We can do better.

Gamma: Very clear dependence on amplitude A, which is good. Each person has some base value B; Maria's is 0.66, Janelle's is 0.89. The EFS generated is of the form B x (1 + k x A), for observed values of k from 0 to 4, and maybe one day we'll see higher. I don't see a way to predict k, but this is quite promising for Janelle. k=0 loses, k=1 wins, and k=2, 3, or 4 wins and doubles. That's a 64% chance of winning overall, with a 39% chance (60% of the time, given that we win) of doubling. This is going off of the observed frequency of Janelle hitting various k; the relative ratios are different enough from Maria's that it's not clear we can combine them in any nice way. They both show large k=0,1,2 and small k=3 and tiny k=4. (None of the trainees has a higher B.)
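
Enumerating that model for Janelle against Earwax's 3.2 kCept (a sketch; I'm assuming "win" means an EFS above the opponent's amplitude and "double" means exceeding twice it, which is my reading rather than a stated rule):

```python
EARWAX_AMPLITUDE = 3.2
B_JANELLE = 0.89  # Janelle's Gamma base value from the analysis above

def gamma_efs(base, k, amplitude):
    """Gamma Resonance EFS model: B * (1 + k * A)."""
    return base * (1 + k * amplitude)

for k in range(5):
    efs = gamma_efs(B_JANELLE, k, EARWAX_AMPLITUDE)
    outcome = ("win and double" if efs > 2 * EARWAX_AMPLITUDE
               else "win" if efs > EARWAX_AMPLITUDE
               else "lose")
    print(f"k={k}: EFS={efs:.2f} -> {outcome}")
```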

Delta: Maria's EFS shows a linear upward trend dependent on amplitude. Janelle's is too low to be interesting, and the trainees' are all very low and none suggest having a super-positive slope. Reject.

Epsilon: Ooookay this one's interesting. I didn't get sin to fit as well as a cubic, and I did get a cubic form that can be described with just one parameter varying per Maria vs Janelle, which I thought was likely given how Gamma worked, which means we can generate the entire cubic for Will and check how it does at A=3.2 and... it predicts 3.31 EFS. The Epsilon Resonance is entirely predictable given an amplitude. However, there are two big problems with simply sending Will to use Epsilon. First, even if our model is precisely right, the precision of our instruments is not perfect, and once we take that into account, Will's Epsilon EFS predictions vary a fair amount, leading to only about a 60% chance of winning. Second, our model is almost certainly wrong, because we haven't found a simple model. So Janelle's Gamma is better than Will's Epsilon in every way according to our current uncertainty.

Zeta: You might get 0, or your base Z, or rarely 3.5 x Z. The problem is that Janelle's 3.5 x Z only just barely beats 3.2 (and doesn't beat 3.25!), and she gets 3.5 x Z maybe like 5% of the time which is << 64%. While Corazon's 3.5 x Z would handily double, it loses otherwise. Again << 64%.
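For concreteness, the threshold Janelle's base Z has to clear follows directly from the 3.5x multiplier:

```python
# Zeta sketch: for 3.5 x Z to clear the 3.2 target, the base Z must exceed
# 3.2 / 3.5 ~= 0.914; to also clear 3.25 it would need Z > 3.25 / 3.5 ~= 0.929.
target_low, target_high, mult = 3.2, 3.25, 3.5
z_min_low = target_low / mult
z_min_high = target_high / mult
print(f"Z must exceed {z_min_low:.3f} to beat 3.2, {z_min_high:.3f} to beat 3.25")
```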

Eta: Janelle's too small here, but Flint has a 2.3, and it looks like the possible EFS's are a base E, or E x1.5, or E x1.5 x1.5, or E x1.5 x1.5 x1.125, or E x1.5 x1.5 x1.125 x1.25. If Flint's E=2.3, this might be promising. Unfortunately it seems much more likely that 2.3 is one of the multiples, and if there is an amplitude dependence, it's probably on one of the high multiples.
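The multiplier ladder can be computed explicitly. The E=2.3 below is the optimistic reading (that Flint's 2.3 is the base rather than one of the multiples), which is an assumption:

```python
from itertools import accumulate
from operator import mul

# Eta ladder sketch: observed EFS values look like a base E scaled by
# cumulative multipliers 1, 1.5, 1.5^2, 1.5^2 * 1.125, 1.5^2 * 1.125 * 1.25.
mults = [1, 1.5, 1.5, 1.125, 1.25]
ladder = list(accumulate(mults, mul))

E = 2.3  # optimistic assumption: Flint's 2.3 is the base, not a multiple
print([round(E * m, 3) for m in ladder])
```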

Comment by GuySrinivasan on D&D.Sci(-Fi) June 2021: The Duel with Earwax · 2021-06-23T16:07:45.039Z · LW · GW

I am liking this one a lot. There are enough hints, some obvious and others more subtle, that indicate many resonance strengths are simpler than they appear at first. TBD: whether I'm reading too much into the data and seeing more hints than actually exist.

Comment by GuySrinivasan on D&D.Sci(-Fi) June 2021: The Duel with Earwax · 2021-06-22T17:45:10.045Z · LW · GW

Oh! Branch-Loop Analysis is much different than I expected. Good to know.

Comment by GuySrinivasan on D&D.Sci(-Fi) June 2021: The Duel with Earwax · 2021-06-22T15:57:38.561Z · LW · GW

Did we not record which resonance each pilot actually used each time? Usually it's clear, and holistically it's even more clear, but it'd be nice to have confirmation in those cases where we have all the counterfactual EFS's, and I'm pretty sure that data should be available.

Comment by GuySrinivasan on Reply to Nate Soares on Dolphins · 2021-06-18T05:14:34.734Z · LW · GW

Feedback:

"please don't shitpost and when you engage with me please avoid all attempts at humor because these pattern-match to ways I am abused and if you do those things even if in good faith it will only hurt our communication, perhaps disastrously, never help" would, I think, cover basically everything you want to cover without also signaling that it will be extremely emotionally draining to engage with you.

OTOH if it will be extremely emotionally draining to engage with you then you have successfully signaled that.

Possibly this isn't fair but I'm pretty sure it's an accurate reading.

Comment by GuySrinivasan on Is the length of the Covid-19 incubation period likely to be affected by whether you are vaccinated? · 2021-06-17T18:45:09.419Z · LW · GW

I also live with an immunocompromised individual who cannot be successfully vaccinated. After research including reasoning very similar to yours, we concluded that if she wore a mask, we felt safe enough for her to be indoors for substantial periods of time with known-vaccinated folks not wearing masks, provided they had not had any obvious infection opportunities within the past 12 hours. This tracks almost exactly with your guess of "MIGHT be roughly 1.5 days[...] Maybe hours?"

Comment by GuySrinivasan on Which rationalists faced significant side-effects from COVID-19 vaccination? · 2021-06-14T13:45:27.328Z · LW · GW

I know about 16 vaccinated people who I expect would have noticed and told me about easily noticeable side effects lasting >3 days, had they occurred. 0/16 told me. Most mid-30s, two 60s. 40% W, 60% M. 80% basically healthy, 20% with significant health issues. Several were groggy/sore/etc for a day, three very much so. Not sure about type distribution, though predominantly Pfizer.
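As a rough sanity check on what 0/16 buys you: the rule of three gives an approximate 95% upper bound of 3/n on the true rate when zero events are observed in n trials.

```python
# Rule-of-three sketch: with 0 side-effect reports among n people, an
# approximate 95% upper bound on the true rate is 3/n.
n = 16
upper = 3 / n
print(f"~95% upper bound on side-effect rate: {upper:.1%}")
```

So 0/16 is consistent with a true rate anywhere up to roughly 19%; it mostly rules out side effects being common, not rare.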

Comment by GuySrinivasan on TEAM: a dramatically improved form of therapy · 2021-06-01T03:12:29.186Z · LW · GW

The single most important thing I got from PJ Eby was the "what's good about that?" question.

Comment by GuySrinivasan on A.D&D.Sci May 2021 Evaluation and Ruleset · 2021-05-26T15:47:06.112Z · LW · GW

The Jewel Beetle was weird. It was what, like 8% to auto-win everything by winning the Beetle? Except there was just one roll overall, so in each group of four, one person auto-wins, and then it becomes a cross-group auction where whoever got the Beetle for way less ends up winning. Seems like with very few people participating overall, going for the Beetle caps your odds of winning at 8%, which is not great. With very many people participating, like 100, going for the Beetle caps your odds of winning at the chance that there is no cohort of pure non-beetlers; otherwise, whichever of them wins the Beetle probably just wins.
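A quick sketch of why the many-player case caps the Beetle strategy so hard. The per-player beetle probability below is an assumed placeholder, not a measured value:

```python
# Sketch of the "cohort of pure non-beetlers" cap (assumed numbers): with 100
# players in cohorts of 4 and an assumed per-player chance p of going for the
# Beetle, the chance that at least one cohort contains no beetlers bounds how
# well a Beetle strategy can do.
p = 0.3                                  # assumed per-player Beetle probability
cohorts = 100 // 4
p_cohort_no_beetlers = (1 - p) ** 4      # one cohort is all non-beetlers
p_some_pure_cohort = 1 - (1 - p_cohort_no_beetlers) ** cohorts
print(f"P(at least one all-non-beetler cohort) = {p_some_pure_cohort:.3f}")
```

Under those assumptions such a cohort almost certainly exists, so the Beetle strategy's winner almost certainly faces someone who got it for way less.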

Comment by GuySrinivasan on A.D&D.Sci May 2021 Evaluation and Ruleset · 2021-05-24T23:59:48.920Z · LW · GW

What I wrote to abstractapplic:

Vague price sense > guessing how others might bid > guarding against someone aiming for significantly higher ROI than you did > exact price sense, I think?

Comment by GuySrinivasan on A.D&D.Sci May 2021 Evaluation and Ruleset · 2021-05-24T19:58:41.191Z · LW · GW

Here are the average profits and win rates if we re-ran the sim many times:

| bidder | avg_profit | prob_win | bids |
| --- | --- | --- | --- |
| A | 2 sp | 0.5% | [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] |
| B | 2 sp | 0.5% | [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] |
| C | 49 sp | 18.4% | [76, 36, 13, 26, 21, 51, 18, 13, 9, 12, 9, 18, 102, 21, 26, 31, 13, 41, 13, 36] |
| D | 43 sp | 7.0% | [50, 36, 13, 25, 19, 62, 17, 13, 1, 11, 1, 19, 85, 20, 20, 20, 13, 10, 13, 20] |
| E | 52 sp | 25.6% | [73, 34, 15, 21, 17, 57, 16, 15, 6, 11, 6, 15, 8, 17, 28, 31, 15, 42, 15, 34] |
| F | 37 sp | 18.1% | [73, 35, 14, 22, 14, 63, 16, 7, 5, 10, 2, 18, 4, 14, 29, 29, 10, 44, 14, 34] |
| G | 59 sp | 20.0% | [71, 33, 16, 24, 20, 51, 19, 16, 9, 15, 9, 18, 12, 20, 26, 31, 16, 42, 16, 33] |
| NPC | 42 sp | 9.9% | [] |

I am bidder E. Whoever bidder G is, they make more profit than I do on average, but I win 25% of the time and they only win 20% of the time.

My method was to interleave two bidding strategies, trying to either spend 300sp at a decent ROI, or spend less on an ROI high enough to beat whoever ended up spending 300sp at a not-quite-as-good-ROI-as-my-original-target.

I am favored to win all heads-up matches, but I am never favored to win a 4-way with real players. G wins all but one of those, and F gets more 2nd-place finishes in those than I do. So I'm heavily reliant on A/B/NPC being present to make the matchups look more like head-to-head than 4-way.

Comment by GuySrinivasan on Finite Factored Sets · 2021-05-23T23:37:09.565Z · LW · GW

How did you count the number of factorizations of sets of size 0-25?

Comment by GuySrinivasan on What's your probability that the concept of probability makes sense? · 2021-05-22T22:20:22.838Z · LW · GW

Which concept of probability? :D

Mine is ~1.0, just like my probability that Newton's Laws make sense.