Posts

robo's Shortform 2024-03-27T09:52:58.579Z

Comments

Comment by robo on Which Biases are most important to Overcome? · 2024-12-01T20:34:03.042Z · LW · GW

Conjunction Fallacy.  Adding detail makes ideas feel more realistic, yet strictly less likely to be true.

Virtues for communication and thought can be diametrically opposed.

Comment by robo on yams's Shortform · 2024-11-24T13:48:39.744Z · LW · GW

In a world where AI progress has wildly accelerated chip manufacture

This world?

Comment by robo on Which things were you surprised to learn are not metaphors? · 2024-11-23T00:58:44.135Z · LW · GW

What distinction are you making between "visualising" and "seeing"?

Good question!  By "seeing" I meant having qualia, an apparent subjective experience.  By "visualizing" I meant...something like using the geometric intuitions you get by looking at stuff, but perhaps in a philosophical zombie sort of way?  You could use non-visual intuitions to count the vertices on a polyhedron, like algebraic intuitions or 3D tactile intuitions (and I bet blind mathematicians do).  I'm not using those.  I'm thinking about a wireframe image, drawn flat.

I'm visualizing a rhombicosidodecahedron right now.  If I ask myself "The pentagon on the right and the one hiding from view on the left -- are they the same orientation?", I'll think "ahh, let's see...  The pentagon on the right connects through the squares to those three pentagons there, which interlock with those 2/4 pentagons there, which connect through squares to the one on the left, which, no, that left one is upside-down compared to the one on the right -- the middle interlocking pentagons rotated the left assembly 36° compared to the right".  Or ask "that square between the right pentagon and the pentagon at 10:20 above it <mental point>.  Does perspective mean the square's drawn as a diamond, a skewed rectangle, or a weird quadrilateral?" and I think "Nah, not diamond shaped -- it's a pretty rectangular trapezoid.  The base is maybe 1.8x the height?  Though I'm not too good at guessing aspect ratios?  Seems like if I rotate the trapezoid I can fit 2 into the base but go over by a bit?"

I'm putting into words a thought process which is very visual, BUT there is almost no inner cinema going along with those visualizations.  At most ghostly, wispy images, if that.  A bit like the fleeting oscillating visual feeling you get when your left and right eyes are shown different colors?

Comment by robo on Which things were you surprised to learn are not metaphors? · 2024-11-22T18:55:12.529Z · LW · GW

...I do not believe this test.  I'd be very good at counting vertices on a polyhedron through visualization and very bad at experiencing the sensation of seeing it.  I do "visualize" the polyhedra, but I don't "see" them.  (Frankly I suspect people who say they experience "seeing" images are just fooling themselves based on e.g. asking them to visualize a bicycle and having them draw it)

Comment by robo on Announcing turntrout.com, my new digital home · 2024-11-18T15:38:11.541Z · LW · GW

Thanks for crossposting!  I've highly appreciated your contributions and am glad I'll continue to be able to see them.

Comment by robo on Species as Canonical Referents of Super-Organisms · 2024-10-18T09:35:29.582Z · LW · GW

Quick summary of a reason why the constituent parts of super-organisms (the ants of ant colonies, the cells of multicellular organisms, the endosymbiotic organelles within cells[1]) are evolutionarily incentivized to work together as a unit:

Question: why do ants seem to care more about the colony than themselves?  Answer: reproduction in an ant colony is funneled through the queen.  If the worker ant wants to reproduce its genes, it can't do that by being selfish.  It has to help the queen reproduce.  Genes in ant workers have nothing to gain by making their ant more selfish and have much to gain by making their worker protect the queen.

This is similar to why cells in your pancreas cooperate with cells in your ear.  Reproduction of genes in the body is funneled through gametes.  Somatic evolution does pressure the cells in your pancreas to reproduce selfishly at the expense of cells in your ear (this is pancreatic cancer).  But that doesn't help the pancreas genes long term.  The pancreas-genes and the ear-genes are forced to cooperate with each other because they can only reproduce when bound together in a gamete.

This sort of binding together of genes, making disparate things cooperate and act like a "super-organism", is absent among members of a species.  My genes do not reproduce in concert with your genes.  If my genes figure out a way to reproduce at your expense, so much the better for them.

  1. ^

    Like mitochondria and chloroplasts, which were separate organisms but evolved to work so closely with their hosts that they are now considered part of the same organism.

Comment by robo on Species as Canonical Referents of Super-Organisms · 2024-10-18T09:10:30.256Z · LW · GW

EDIT Completely rewritten to be hopefully less condescending.

There are lessons from group selection and the extended phenotype which vaguely reduce to "beware thinking about species as organisms".  It is not clear from this essay whether you've encountered those ideas.  It would be helpful for me reading this essay to know if you have.

Comment by robo on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-16T19:47:21.644Z · LW · GW

Hijacking this thread, has anybody worked through Ape in the coat's anthropic posts and understood / gotten stuff out of them?  It's something I might want to do sometime in my copious free time but haven't worked up to it yet.

Comment by robo on Dan Braun's Shortform · 2024-10-07T05:35:35.985Z · LW · GW

Sorry, that was an off-the-cuff example I meant to help gesture towards the main idea.  I didn't mean to imply it's a working instance (it's not).  The idea I'm going for is:

  • I'm expecting future AIs to be less single LLMs (like Llama) and more loops and search and scaffolding (like o1)
  • Those AIs will be composed of individual pieces
  • Maybe we can try making the AI pieces mutually dependent in such a way that it's a pain to get the AI working at peak performance unless you include the safety pieces
Comment by robo on Dan Braun's Shortform · 2024-10-05T16:30:49.159Z · LW · GW

This might be a reason to try to design AIs to fail-safe and break without controlling units.  E.g. before fine-tuning language models to be useful, fine-tune them to not generate useful content without approval tokens generated by a supervisory model.
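
To gesture at the shape of the training data (a made-up scheme; the token name and refusal string are mine, not a real API):

```python
# Hypothetical fine-tuning pairs: helpful behavior is only trained on
# inputs that carry an approval token from a supervisory model.
APPROVAL = "<|approved|>"  # made-up token name

def make_examples(prompt: str, helpful_answer: str) -> list[dict]:
    return [
        # With the approval token: train toward the useful completion.
        {"input": APPROVAL + prompt, "target": helpful_answer},
        # Without it: train toward refusal, so the model breaks
        # (fails safe) if the supervisory model is stripped out.
        {"input": prompt, "target": "[not approved]"},
    ]

print(make_examples("How do I sort a list in Python?", "Use sorted(...)"))
```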

Comment by robo on Making Eggs Without Ovaries · 2024-09-23T00:41:01.644Z · LW · GW

I suspect experiments with almost-genetically-identical twins might advance our understanding of almost all genes except those on the sex chromosomes.

Sex chromosomes are independent coin flips with huge effect sizes.  That's amazing!  Nature provided us with experiments everywhere!  Most alleles are confounded (e.g. correlated with socioeconomic status for no causal reason) and have very small effect sizes.

Example: Imagine an allele which is common in East Asians, uncommon in Europeans, and makes people 1.1 mm taller.  Even though the allele causally makes people taller, the average height of the people with the allele (mostly East Asian) would be less than the average height of the people without the allele (mostly European).  The +1.1 mm in causal height gain would be drowned out by the ≈-50 mm from Simpson's paradox.  Your almost-twin experiment gives signal where observational regression gives error.
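
A toy simulation of that confounding (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two populations with different baseline heights and allele frequencies.
east_asian = rng.random(n) < 0.5
baseline = np.where(east_asian, 1700.0, 1750.0)          # mean heights in mm (toy)
allele = rng.random(n) < np.where(east_asian, 0.8, 0.1)  # allele frequency (toy)
height = baseline + 1.1 * allele + rng.normal(0, 60, n)  # +1.1 mm causal effect

# Naive comparison: carriers look tens of mm *shorter*, because the
# allele mostly marks membership in the shorter population.
print(height[allele].mean() - height[~allele].mean())

# Comparing within one population recovers the (noisy) +1.1 mm signal.
within = ~east_asian
print(height[allele & within].mean() - height[~allele & within].mean())
```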

That's not needed for sex differences.  Poor people tend to have poor children.  Caucasian people tend to have Caucasian children.  Male people do not tend to have male children.  It's pretty easy to extract signal about sex differences.

(far from my area of expertise)

Comment by robo on ErioirE's shortform: · 2024-09-12T08:39:50.823Z · LW · GW

The player can force a strategy where they win 2/3 of the time (guess a door and never switch).  The player never needs to accept worse.

The host can force a strategy where the player loses 1/3 of the time (never let the player switch).  The host never needs to accept worse.

Therefore, the equilibrium has 2/3 win for the player.  The player can block this number from going lower and the host can block this number from going higher.

Comment by robo on James Camacho's Shortform · 2024-08-29T15:31:10.325Z · LW · GW

I want to love this metaphor but don't get it at all.  Religious freedom isn't a narrow valley; it's an enormous Schelling hyperplane.  85% of people are religious, but no majority is Christian or Hindu or Kuvah'magh or Kraẞël or Ŧ̈ř̈ȧ̈ӎ͛ṽ̥ŧ̊ħ or Sisters of the Screaming Nightshroud of Ɀ̈ӊ͢Ṩ͎̈Ⱦ̸Ḥ̛͑.  These religions don't agree on many things, but they all pull for freedom of religion over the crazy *#%! the other religions want.

Comment by robo on robo's Shortform · 2024-08-27T09:27:45.440Z · LW · GW

Suppose there were some gears in physics we weren't smart enough to understand at all.  What would that look like to us?

It would look like phenomena that appear intrinsically random, wouldn't it?  Like imagine there were a simple rule about the spin of electrons that we just. don't. get.  Instead of noticing the simple pattern ("Electrons are up if the number of Planck timesteps since the beginning of the universe is a multiple of 3"), we'd only be able to figure out statistical rules of thumb for our measurements ("we measure electrons as up 1/3 of the time").
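
A toy version of what that would look like from the inside (my example, not a physics claim):

```python
import random

def measure_spin(planck_t: int) -> str:
    # The simple rule we hypothetically "just don't get":
    return "up" if planck_t % 3 == 0 else "down"

# We can't control timing at Planck resolution, so each measurement lands
# at an effectively random timestep -- and the deterministic rule shows up
# only as a statistical rule of thumb.
results = [measure_spin(random.randrange(10**20)) for _ in range(100_000)]
print(results.count("up") / len(results))  # ≈ 1/3
```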

My intuitions conflict here.  On the one hand, I totally expect there to be phenomena in physics we just don't get.  On the other hand, the research programs you might undertake under those conditions (collect phenomena which appear intrinsically random and search for patterns) feel like crackpottery.

Maybe I should put more weight on superdeterminism.

Comment by robo on Abhimanyu Pallavi Sudhir's Shortform · 2024-08-11T08:26:37.747Z · LW · GW

Humans are computationally bounded, Bayes is not.  In an ideal Bayesian perspective:

  • Your prior must include all possible theories a priori.  Before you opened your eyes as a baby, you put some probability on being in a universe with a Quantum Field Theory with some gauge symmetry and updated from there.
  • You update with unbounded computation.  There's no such thing as proofs, since all proofs are tautological.

Humans are computationally bounded and can't think this way.

(riffing)

"Ideas" find paradigms for modeling the universe that may be profitable to track under limited computation.  Maybe you could understand fluid behavior better if you kept track of temperature, or understand biology better if you kept track of vital force.  With a bayesian-lite perspective, they kinda give you a prior and places to look where your beliefs are "malleable".

"Proofs" (and evidence) are the justifications for answers.  With a bayesian-lite perspective, they kinda give you conditional probabilities.

"Answers" are useful because they can become precomputed, reified, cached beliefs with high credence inertia you can treat as approximately atomic.  In a tabletop physics experiment, you can ignore how your apparatus will gravitationally move the earth (and the details of the composition of the earth).  Similarly, you can ignore how the tabletop physics experiment will move your belief about the conservation of energy (and the details of why your credences about the conservation of energy are what they are).

Comment by robo on Dan Hendrycks and EA · 2024-08-03T16:46:08.887Z · LW · GW

Statements made to the media pass through an extremely lossy compression channel, then are coarse-grained, and then turned into speech acts.

That lossy channel has maybe one bit of capacity on the EA thing.  You can turn on a bit that says "your opinions about AI risk should cluster with your opinions about Effective Altruists", or not.  You don't get more nuance than that.[1]

If you have to choose between outputting the more informative speech act[2] and saying something literally true, it's more cooperative to get the output speech act correct.

(This is different from the supreme court case, where I would agree with you)

  1. ^

    I'm not sure you could make the other side of the channel say "Dan Hendrycks is EA adjacent but that's not particularly necessary for his argument" even if you spent your whole bandwidth budget trying to explain that one message.

  2. ^

Comment by robo on Dan Hendrycks and EA · 2024-08-03T14:17:10.145Z · LW · GW

If someone wants to distance themselves from a group, I don't think you should make a fuss about it.  Guilt by association is the rule in PR and that's terrible.  If someone doesn't want to be publicly coupled, don't couple them.

Comment by robo on lemonhope's Shortform · 2024-07-31T10:24:13.089Z · LW · GW

Comment by robo on Me & My Clone · 2024-07-18T22:10:15.037Z · LW · GW

I think the classic answer to the "Ozma Problem" (how to communicate to far-away aliens what earthlings mean by right and left) is the Wu experiment.  Electromagnetism and the strong nuclear force aren't handed, but the weak nuclear force is handed.  Left-handed electrons participate in weak nuclear force interactions but right-handed electrons are invisible to weak interactions[1].

(amateur, others can correct me)

  1. ^

    Like right-handed electrons, right-handed neutrinos are invisible to weak interactions.  Unlike electrons, neutrinos are also invisible to the other forces[2].  So the standard model basically predicts there should be invisible particles whizzing around everywhere that we have no way to detect or confirm exist at all.

  2. ^

    Besides gravity

Comment by robo on Me & My Clone · 2024-07-18T21:11:17.572Z · LW · GW

Can you symmetrically put the atoms into that entangled state?  You both agree on the charge of electrons (you aren't antimatter annihilating), so you can get a pair of atoms into  |↑,↑⟩, but can you get the entangled pair to point in opposite directions along the plane of the mirror?

Edit Wait, I did that wrong, didn't I?  You don't make a spin up atom by putting it next to a particle accelerator sending electrons up.  You make a spin up atom by putting it next to electrons you accelerate in circles, moving the electrons in the direction your fingers point when a (real) right thumb is pointing up.  So one of you will make a spin-up atom and the other will make a spin-down atom.

Comment by robo on lukemarks's Shortform · 2024-07-02T09:56:18.128Z · LW · GW

No, that's a very different problem.  The matrix overlords are Laplace's demon, with god-like omniscience about the present and past.  The matrix overlords know the position and momentum of every molecule in my cup of tea.  They can look up the microstate of any time in the past, for free.

The future AI is not Laplace's demon.  The AI is informationally bounded.  It knows the temperature of my tea, but not the position and momentum of every molecule.  Any uncertainties it has about the state of my tea will increase exponentially when it tries to predict into the future or retrodict into the past.  Figuring out which water molecules in my tea came from the kettle and which came from the milk is very hard, harder than figuring out which key encrypted a ciphertext.
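
A toy illustration of that exponential blow-up, with a chaotic logistic map standing in for the tea's molecular dynamics (parameters mine):

```python
# Two microstates the AI can't distinguish, differing by one part in 10^15.
x, y = 0.400000000000000, 0.400000000000001
for step in range(1, 61):
    x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)  # chaotic logistic map
    if step % 15 == 0:
        print(step, abs(x - y))
# The gap grows exponentially; after a few dozen steps the two histories
# are effectively uncorrelated, in either time direction.
```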

Comment by robo on lukemarks's Shortform · 2024-07-02T09:07:23.757Z · LW · GW

Oh, wait, is this "How does a simulation keep secrets from the (computationally bounded) matrix overlords?"

Comment by robo on lukemarks's Shortform · 2024-07-02T08:37:07.254Z · LW · GW

I don't think I understand your hypothetical.  Is your hypothetical about a future AI which has:

  • Very accurate measurements of the state of the universe in the future
  • A large amount of compute, but not exponentially large
  • Very good algorithms for retrodicting* the past

I think it's exponentially hard to retrodict the past.  It's hard in a similar way as encryption is hard.  If an AI isn't powerful enough to break encryption, it also isn't powerful enough to retrodict the past accurately enough to break secrets.

If you really want to keep something secret from a future AI, I'd look at ways of ensuring the information needed to theoretically reconstruct your secret is carried away from the earth at the speed of light in infrared radiation.  Write the secret in a sealed room, atomize the room to plasma, then cool the plasma by exposing it to the night sky.

*Predicting is using your knowledge of the present to estimate the state of the future.  Retrodicting is using your knowledge of the present to estimate the state of the past.

Comment by robo on davekasten's Shortform · 2024-07-01T08:00:57.673Z · LW · GW

I think a lot of people losing their jobs would probably do the trick, politics-wise.  For most people the crux is "will AIs be more capable than humans", not "might AIs more capable than humans be dangerous".

Comment by robo on Sev, Sevteen, Sevty, Sevth · 2024-06-08T14:22:22.732Z · LW · GW

I do this (with "sev") when counting to myself.  Nice to see that other people chose the same Schelling point!

Comment by robo on Benaya Koren's Shortform · 2024-06-06T06:40:13.793Z · LW · GW

Yearly rent on the house is greater than yearly taxes on the house, right?  As you give the government shares of your house, shares handed over for taxes become shares you owe rent on, and you have to pay the government more and more.  A death spiral ensues and you lose the house.
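
A toy run of that spiral (rates invented: rent 5% of house value per year, tax 2%):

```python
rent_rate, tax_rate = 0.05, 0.02
gov_share, year = 0.0, 0
while gov_share < 1.0:
    year += 1
    # Tax is owed on your remaining shares; rent on the government's shares.
    owed = tax_rate * (1 - gov_share) + rent_rate * gov_share
    gov_share += owed  # paid by handing over yet more shares
print(year)  # ~31 years until the government owns the whole house
```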

"What if the government doesn't charge rent on its shares?"  Then everyone lets the government own 99% of their house to avoid the taxes.

A lot of the value of Georgism is incentivizing people who don't value a property to move out so people who do value the property can move in.

(off-the-cuff opinion)

Comment by robo on jacquesthibs's Shortform · 2024-05-22T15:25:49.664Z · LW · GW

I'm very not sure how to do this, but are there ways to collect some counteracting or unbiased samples about Sam Altman?  Or to do another one-sided vetting of other CEOs to see what the base rate of being able to dig up questionable things is?  Collecting evidence that points in only one direction just sets off huge warning lights 🚨🚨🚨🚨 I can't quiet.

Comment by robo on robo's Shortform · 2024-05-19T06:05:19.350Z · LW · GW

I think this is the sort of conversation we should be having!  [Side note: I think restricting compute is more effective than restricting research because you don't need 100% buy-in:

  1. It's easier to prevent people from manufacturing semiconductors than to keep people from learning ideas that fit on a napkin.
  2. It's easier to prevent scientists in Eaccistan from having GPUs than to prevent scientists in Eaccistan from thinking.

The analogy to nuclear weapons is, I think, a good one.  The science behind nuclear weapons is well known -- what keeps them from being built is lack of access to nuclear materials.

(Restricting compute also seriously restricts research.  Research speed on neural nets is in large part bounded by how many experiments you run rather than ideas you have.)]

Comment by robo on To Limit Impact, Limit KL-Divergence · 2024-05-18T22:10:51.478Z · LW · GW

I think the weakness with KL divergence is that the potentially harmful model can cheaply do things the safe model would be exponentially unlikely to do.  Even if the safe model has a 1 in 1 trillion chance of stabbing me in the face, the KL penalty for stabbing me in the face is only log(1 trillion) (and logs make even huge numbers small).
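
Concretely (my arithmetic): if the safe model assigns the face-stabbing action probability 10^-12 and the constrained model takes it deterministically, the KL penalty is only log(10^12) = 12 × log(10) ≈ 27.6 nats.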

What about limiting the unknown model to choose one of the cumulative 98% most likely actions for the safe model to take?  If the safe model never has more than a 1% chance of taking an action that will kill you, then the unknown model won't be able to take an action that kills you.  This isn't terribly different from the top-p (nucleus) sampling many language models use in practice.
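
A minimal sketch of that cutoff (toy numbers; this is essentially top-p masking):

```python
import numpy as np

def allowed_actions(safe_probs: np.ndarray, coverage: float = 0.98) -> np.ndarray:
    """Indices of the safe model's most likely actions covering `coverage` mass."""
    order = np.argsort(safe_probs)[::-1]      # most likely first
    cutoff = np.searchsorted(np.cumsum(safe_probs[order]), coverage) + 1
    return order[:cutoff]

safe = np.array([0.50, 0.30, 0.18, 0.015, 0.005])   # action 4: "stab", at 0.5%
unknown = np.array([0.01, 0.01, 0.01, 0.01, 0.96])  # unknown model loves action 4

mask = np.isin(np.arange(len(safe)), allowed_actions(safe))
constrained = np.where(mask, unknown, 0.0)
constrained /= constrained.sum()   # renormalize over the allowed actions
print(constrained)                 # probability of action 4 is now exactly 0
```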

Comment by robo on robo's Shortform · 2024-05-18T21:11:47.295Z · LW · GW

Our current big stupid: not preparing for 40% agreement

Epistemic status: lukewarm take from the gut (not brain) that feels rightish

The "Big Stupid" of the AI doomers 2013-2023 was that AI nerds' solution to the problem "How do we stop people from building dangerous AIs?" was "research how to build AIs".  Methods normal people would consider to stop people from building dangerous AIs, like asking governments to make it illegal to build dangerous AIs, were considered gauche.  When the public turned out to be somewhat receptive to the idea of regulating AIs, doomers were unprepared.

Take: The "Big Stupid" of right now is still the same thing.  (We've not corrected enough).  Between now and transformative AGI we are likely to encounter a moment where 40% of people realize AIs really could take over (say if every month another 1% of the population loses their job).  If 40% of the world were as scared of AI loss-of-control as you, what could the world do? I think a lot!  Do we have a plan for then?

Almost every LessWrong post on AIs is about analyzing AIs.  Almost none are about how, given widespread public support, people/governments could stop bad AIs from being built.

[Example: if 40% of people were as worried about AI as I was, the US would treat GPU manufacture like uranium enrichment.  And fortunately GPU manufacture is hundreds of times harder than uranium enrichment!  We should be nerding out researching integrated circuit supply chains, choke points, foundry logistics in jurisdictions the US can't unilaterally sanction, that sort of thing.]

TLDR, stopping deadly AIs from being built needs less research on AIs and more research on how to stop AIs from being built.

*My research included 😬

Comment by robo on Why you should learn a musical instrument · 2024-05-16T14:43:46.672Z · LW · GW

How did training aural imagination go for you, 15 years later?

Comment by robo on Ilya Sutskever and Jan Leike resign from OpenAI [updated] · 2024-05-15T07:28:48.942Z · LW · GW

My comment here is not cosmically important and I may delete it if it derails the conversation.

There are times when I would really want a friend to tap me on the shoulder and say "hey, from the outside the way you talk about <X> seems way worse than normal.  Are you hungry/tired/too emotionally close?".  They may be wrong, but often they're right.
If you (general reader you) would deeply want someone to tap you on the shoulder, read on; otherwise this comment isn't for you.

If you burn at NYT/Cade Metz as intolerable hostile garbage, and have not taken into account how defensive tribal instincts can cloud judgements, then, um, <tap tap>?

Comment by robo on simeon_c's Shortform · 2024-05-10T22:57:34.779Z · LW · GW

I appreciate that you are not speaking loudly if you don't yet have anything loud to say.

Comment by robo on simeon_c's Shortform · 2024-05-10T18:36:50.026Z · LW · GW

Is it that your family's net worth is $100 and you gave up $85?  Or that your family's net worth is $15 and you gave up $85?

Either way, hats off!

Comment by robo on shortplav · 2024-04-21T07:26:45.496Z · LW · GW

How close would this rank a program p with a universal Turing machine simulating p?  My sense is not very close because the "same" computation steps on each program don't align.

My "most naïve formula for logical correlation" would be something like: put a probability distribution on binary string inputs x, treat the two programs' outputs p(x) and q(x) as random variables, and compute their mutual information.
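
Spelled out (my notation), with X drawn from that input distribution:

$$I(p(X);q(X)) = \sum_{a,b} \Pr[p(X)=a,\, q(X)=b]\, \log \frac{\Pr[p(X)=a,\, q(X)=b]}{\Pr[p(X)=a]\,\Pr[q(X)=b]}$$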

Comment by robo on Tamsin Leake's Shortform · 2024-04-14T17:02:45.610Z · LW · GW

Interesting idea.
I don't think using a classical Turing machine in this way would be the right prior for the multiverse.  Classical Turing machines are a way for ape brains to think about computation using the circuitry we have available ("imagine other apes following these social conventions about marking long tapes of paper").  They aren't the cosmically simplest form of computation.  For example, the (microscopic, non-coarse-grained) laws of physics are deeply time reversible, whereas Turing machines are not.
I suspect this computation speed prior would lead to Boltzmann-brain problems.  Your brain at this moment might be computed at high fidelity, but everything else in the universe would be approximated for the computational speed-up.

Comment by robo on A Dozen Ways to Get More Dakka · 2024-04-08T13:03:08.732Z · LW · GW

Counterpoint worth considering:

It's hard to get enough of something that almost works.

(Vincent Felitti, as quoted from In the Realm of Hungry Ghosts: Close Encounters with Addiction)

Comment by robo on robo's Shortform · 2024-03-27T09:52:58.688Z · LW · GW

"When you talk about the New York Times, rational thought does not appear to be holding the mic"

--Me, mentally, to many people in the rationalist/tech sphere since Scott took SlateStarCodex down.

Comment by robo on The Solution to Sleeping Beauty · 2024-03-05T13:51:19.456Z · LW · GW

Right, I read all that.  I still don't understand what it means to append two things to the list.

Here's how I understand modelLewis, modelElga, etc.

"This model represent the world as a probability distribution.  To get a more concrete sense of the world model, here's a function which generates a sample from that probability distribution"

Here's how I understand your model.

"This model represents the world as a ????, which is like a probability distribution but different.  To get a concrete sense of the world model, here's a function which generates a sample from that probability distribution JUST KIDDING here's TWO samples".

Why can you generate two samples at once?  What does that even mean??  The world model isn't quite just a stationary probability distribution, fine, what is it then?  Your model isn't structured like other models, fine, but how is it structured?  I'm drowning in type errors.

EDIT and I'm suggesting you be really concrete, if you can, if that will help.  Like come up with some concrete situation where Beauty makes a bet, or says a thing, ("Beauty woke up on Monday and said 'I think there's a 50% chance the coin came up on heads, and I refuse to say there's a state of affairs about what day it presently is'") and explain what in her model made her make that bet or say that thing.  Or maybe draw a picture of what her brain looks like under that circumstance compared to other circumstances.

Comment by robo on The Solution to Sleeping Beauty · 2024-03-05T11:40:21.981Z · LW · GW

You don't have to reply, but FYI I don't understand what ListC represents (a total ordering of events defined by a logical clock?  A logical clock ordering Beauty's thoughts, or a logical clock ordering what causally can affect what, or logically affect what allowing for Newcomb-like situations?  Why is there a clock at all?), how ListC is used, what concatenating multiple entries to ListC means in terms of beliefs, etc.  If it's important for readers to understand this you might have to step us through (or point us to an earlier article where you stepped us through).

Comment by robo on The Solution to Sleeping Beauty · 2024-03-04T15:21:45.454Z · LW · GW

I don't understand what return ['Tails&Monday','Tails&Tuesday'] and ListC += outcome mean.  Can you explain it more?  Perhaps operationalize it into some specific way Sleeping Beauty should act in some situation?

For example, if Sleeping Beauty is allowed to make bets every time she is woken up, I claim she should bet as though she believes the coin came up with probability 1/3 for Heads and 2/3 for Tails (≈because over many iterations she'll find herself betting in situations where Tails is true twice as often as in situations where Heads is true).
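
That frequency claim in a few lines (my operationalization of "bet every time she wakes"):

```python
import random

heads_awakenings, tails_awakenings = 0, 0
for _ in range(100_000):
    if random.random() < 0.5:
        heads_awakenings += 1   # Heads: woken only on Monday
    else:
        tails_awakenings += 2   # Tails: woken on Monday and Tuesday
total = heads_awakenings + tails_awakenings
print(heads_awakenings / total)  # ≈ 1/3 of awakenings are Heads-awakenings
```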

I don't understand what your solution means for Sleeping Beauty.  The best operationalization I can think of for "Ω={Heads&Monday, Tails&Monday&Tuesday}" is something like:

"Thanks for participating in the experiment Ms. Beauty, and just FYI it's Tuesday"

"𝕬𝖓𝖉 𝕸𝖔𝖓𝖉𝖆𝖞"

"No, it's just Tuesday"

"𝕬𝖓𝖉 𝕸𝖔𝖓𝖉𝖆𝖞 𝖆𝖘 𝖜𝖊𝖑𝖑.  𝕿𝖍𝖊𝖗𝖊 𝖈𝖆𝖓 𝖇𝖊 𝖓𝖔 𝕿𝖚𝖊𝖘𝖉𝖆𝖞 𝖜𝖎𝖙𝖍𝖔𝖚𝖙 𝖆 𝕸𝖔𝖓𝖉𝖆𝖞"

"Yep, it was Monday yesterday.  Now it's Tuesday"

"𝕭𝖚𝖙 𝖎𝖙 𝖎𝖘 𝖆𝖑𝖘𝖔 𝕸𝖔𝖓𝖉𝖆𝖞, 𝖋𝖔𝖗 𝕴 𝖊𝖝𝖎𝖘𝖙 𝖆𝖘 𝖔𝖓𝖊 𝖎𝖓 𝖆𝖑𝖑 𝖕𝖔𝖎𝖓𝖙𝖘 𝖔𝖋 𝖙𝖎𝖒𝖊 𝖆𝖓𝖉 𝖘𝖕𝖆𝖈𝖊"

"Wow!  That explains the unearthly blue glow!"

"𝕴𝖓𝖉𝖊𝖊𝖉"

Comment by robo on Will quantum randomness affect the 2028 election? · 2024-01-25T12:07:51.845Z · LW · GW

Daniel Kahneman notes that if sperm are randomized, the chance of Hitler, Stalin, and Mao all being born boys is 1/8.  Re-run the 20th century with any of them being female and you get vastly different results.  That thought experiment makes me suspect our intuitions for the inevitability of history are faulty.