Posts

Organ donation versus cryonics 2011-03-13T16:08:52.588Z
Blues, Greens and abortion 2011-03-05T19:15:05.257Z
Frugality and working from finite data 2010-09-03T09:37:01.610Z

Comments

Comment by Snowyowl on AI Box Log · 2015-06-03T02:32:03.174Z · LW · GW

Three years late, but: there doesn't even have to be an error. The Gatekeeper still loses for letting out a Friendly AI, even if it actually is Friendly.

Comment by Snowyowl on TV's "Elementary" Tackles Friendly AI and X-Risk - "Bella" (Possible Spoilers) · 2014-11-24T22:09:09.509Z · LW · GW

There have been other sci-fi writers talking about AI and the singularity. Charles Stross, Greg Egan, arguably Cory Doctorow... I haven't seen the episode in question, so I can't say who I think they took the biggest inspiration from.

Comment by Snowyowl on Initiation Ceremony · 2013-07-17T11:57:39.945Z · LW · GW

9/16ths of the people present are female Virtuists, and 2/16ths are male Virtuists. If you correctly calculate that 2/(9+2) of Virtuists are male, but mistakenly add 9 and 2 to get 12, you'd get one-sixth as your final answer. There might be other equivalent mistakes, but that seems the most likely to lead to the answer given.

Of course, it's irrelevant what the actual mistake was since the idea was to see if you'll let your biases sway you from the correct answer.
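For concreteness, a quick check of that arithmetic (a minimal Python sketch, using only the fractions given above):

```python
from fractions import Fraction

female_virtuists = Fraction(9, 16)   # fraction of those present
male_virtuists = Fraction(2, 16)

# Correct answer: P(male | Virtuist)
correct = male_virtuists / (female_virtuists + male_virtuists)
print(correct)        # 2/11

# The hypothesised slip: the same ratio, but with 9 + 2 mis-added to 12
mistaken = Fraction(2, 12)
print(mistaken)       # 1/6
```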

Comment by Snowyowl on Causal Universes · 2012-11-30T13:44:49.308Z · LW · GW

The later Ed Stories were better.

In the first scenario the answer could depend on your chance of randomly failing to resend the CD, due to tripping and breaking your leg or something. In the second scenario there doesn't seem to be enough information to pin down a unique answer, so it could depend on many small factors, like your chance of randomly deciding to send a CD even if you didn't receive anything.

Good point, but not actually answering the question. I guess what I'm asking is: given a single use of the time machine (Primer-style, you turn it on and receive an object, then later turn it off and send an object), make a list of all the objects you can receive and what each of them can lead to in the next iteration of the loop. This structure is called a Markov chain. Given the entire structure of the chain, can you deduce what probability you have of experiencing each possibility?

Taking your original example, there are only 2 states the timeline can be in:

  • A: Nothing arrives from the future. You toss a coin to decide whether to go back in time. Next state: A (50% chance) or B (50% chance).

  • B: A murderous future self arrives from the future. You and he get into a fight, and don't send anything back. Next state: A (100% chance).

Is there a way to calculate from this what the probability of actually getting a murderous future self is when you turn on the time machine?

I'm inclined to assume it would be a stationary distribution of the chain, if one exists. That is to say, one where the probability distribution of the "next" timeline is the same as the probability distribution of the "current" timeline. In this case, that would be (A: 2/3, B: 1/3). (Your result of (A: 4/5, B: 1/5) seems strange to me: half of the timelines in A produce a killer, and each killer has exactly one victim timeline in B, so B should carry half the weight of A.)
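Here's a minimal sketch of that check, assuming only the two-state chain described above (NumPy and power iteration are my choices, nothing from the original discussion):

```python
import numpy as np

# State 0 = A (nothing arrives), state 1 = B (murderous future self arrives).
P = np.array([
    [0.5, 0.5],   # from A: coin toss, go back (-> B) or don't (-> A)
    [1.0, 0.0],   # from B: the fight means nothing is sent back (-> A)
])

# Power iteration: start from any distribution and apply the chain repeatedly.
pi = np.array([1.0, 0.0])
for _ in range(1000):
    pi = pi @ P

print(pi)  # ~[0.667, 0.333], i.e. the stationary distribution (A: 2/3, B: 1/3)
```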

There are certain conditions a Markov chain needs to satisfy for a unique stationary distribution to exist. I looked them up. A chain with a finite number of states (so no infinitely dense CDs for me :( ) fits the bill as long as every state eventually leads to every other, possibly indirectly (i.e. it's irreducible). So in the first scenario, I'll receive a CD carrying a number somewhere between 0 and the largest value the CD can hold, distributed uniformly. The second scenario isn't irreducible (if the "first" timeline has a CD with value X, it's impossible to ever get a CD with value Y in any subsequent timeline), so I guess there needs to be a chance of the CD becoming corrupted to a different value, or of the time machine exploding before I can send the CD back, or something like that.
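And a sketch of the first CD scenario in the same style. The wrap-around-at-the-top rule and the small failure probability eps are assumptions added here to make the chain irreducible and aperiodic, in the spirit of the "time machine exploding" escape hatch above:

```python
import numpy as np

N = 8         # number of distinct values the CD can hold (stand-in for 2**bits)
eps = 1e-3    # assumed small chance that nothing makes it back to the past

# States 0..N-1: "a CD arrives carrying that value"; state N: "nothing arrives".
# Rule: receive nothing -> send 0; receive X -> send X+1, wrapping to 0 at the
# top (the overflow behaviour is my assumption, not specified in the scenario).
# With probability eps the send fails and the next timeline receives nothing.
P = np.zeros((N + 1, N + 1))
P[N, 0] = 1 - eps
P[N, N] = eps
for x in range(N):
    P[x, (x + 1) % N] = 1 - eps
    P[x, N] = eps

pi = np.ones(N + 1) / (N + 1)   # any starting guess works
for _ in range(100_000):
    pi = pi @ P

print(pi[:N])  # very nearly uniform: about 1/N for each value
print(pi[N])   # the chance of receiving nothing is about eps
```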

Teal deer: This model works, but the probability of experiencing each outcome can easily depend on the tiny chance of an unexpected one. I like it a lot: it's more intuitive than NSCP, and its structure makes more sense than a branching multiverse. I may have to steal it if I ever write a time-travel story.

Comment by Snowyowl on Causal Universes · 2012-11-30T13:18:39.983Z · LW · GW

I wasn't reasoning under NSCP, just trying to pick holes in cousin_it's model.

Though I'm interested in knowing why you think that one outcome is "more likely" than any other. What determines that?

Comment by Snowyowl on Causal Universes · 2012-11-29T08:14:45.658Z · LW · GW

You make a surprisingly convincing argument for people not being real.

Comment by Snowyowl on Causal Universes · 2012-11-29T08:11:10.178Z · LW · GW

Last time I tried reasoning about this one, I came up against an annoying divide-by-infinity problem.

Suppose you have a CD with infinite storage space - if this is not possible in your universe, use a normal CD with N bits of storage; it just makes the maths more complicated. Do the following:

  • If nothing arrives in your timeline from the future, write a 0 on the CD and send it back in time.

  • If a CD arrives from the future, read the number on it. Call this number X. Write X+1 on your own CD and send it back in time.

What is the probability distribution of the number on your CD? What is the probability that you didn't receive a CD from the future?

Once you've worked that one out, consider this similar algorithm:

  • If nothing arrives in your timeline from the future, write a 0 on the CD and send it back in time.

  • If a CD arrives from the future, read the number on it. Call this number X. Write X on your own CD and send it back in time.

What is the probability distribution of the number on your CD? What is the probability that you didn't receive a CD from the future?
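(If it helps, here is a minimal transcription of the two rules; the finite-N framing, where the CD carries an integer or nothing, is an assumption of the sketch.)

```python
def send_back_v1(received):
    """First algorithm: write 0 if nothing arrived, otherwise X + 1."""
    return 0 if received is None else received + 1

def send_back_v2(received):
    """Second algorithm: write 0 if nothing arrived, otherwise X unchanged."""
    return 0 if received is None else received
```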

Comment by Snowyowl on Epilogue: Atonement (8/8) · 2012-11-27T16:32:49.723Z · LW · GW

The flaw I see is: why could the Super Happies not make separate decisions for humanity and the Babyeaters?

I don't follow. They waged a genocidal war against the Babyeaters and signed an alliance with humanity. Those look like separate decisions to me.

And why meld the cultures? Humans didn't seem to care about the existence of shockingly ugly super happies.

For one, because they're symmetrists. They asked something of humanity, so it was only fair that they should give something of equal value in return. (They're annoyingly ethical in that regard.) And I do mean equal value - humans became partly superhappy, and superhappies became partly human. For two, because shared culture and psychology makes it possible to have meaningful dialogue between species: even with the Cultural Translator, everyone got headaches after five minutes. Remember that to the superhappies, meaningful communication is literally as good as sex.

Comment by Snowyowl on Causal Reference · 2012-11-05T10:39:46.760Z · LW · GW

I'd say it would make a better creepypasta than an SCP. Still, if you're fixed on the SCP genre, I'd try inverting it.

Say the Foundation discovers an SCP which appears to have mind-reading abilities. Nothing too outlandish so far; they deal with this sort of thing all the time. The only slightly odd part is that it's not totally accurate. Sometimes the thoughts it reads seem to come from an alternate universe, or perhaps the subject's deep subconscious. It's only after a considerable amount of testing that they determine the process by which the divergence is caused - and it's something almost totally innocuous, like going to sleep at an altitude of more than 40,000 feet.

Comment by Snowyowl on Rationality Quotes November 2012 · 2012-11-02T13:57:23.188Z · LW · GW

They came impressively close considering they didn't have any giant shoulders to stand on.

Comment by Snowyowl on Why Are Individual IQ Differences OK? · 2012-08-17T12:44:13.717Z · LW · GW

I think it's more the point that some of us have more dislikable alleles than others.

Comment by Snowyowl on Rationality Quotes June 2012 · 2012-06-26T19:32:50.498Z · LW · GW

Yeah, that should work.

Comment by Snowyowl on Rationality Quotes June 2012 · 2012-06-24T20:04:43.871Z · LW · GW

The latter one doesn't work at all, since it sounds rather like you're ignoring the very advice you're trying to give.

Comment by Snowyowl on Rationality Quotes June 2012 · 2012-06-24T19:59:48.043Z · LW · GW

I agree with Wilson's conclusions, though the quote is too short to tell if I reached this conclusion in the same way as he did.

Using several maps at once teaches you that your map can be wrong, and how to compare maps and find the best one. The more you use a map, the more you become attached to it, and the less inclined you are to experiment with other maps, or even to question whether your map is correct. This is all fine if your map is perfectly accurate, but in our flawed reality there is no such thing. And while there are no maps which state "This map is incorrect in all circumstances", there are many which state "This map is correct in all circumstances"; you risk the Happy Death Spiral if you use one of the latter. (I should hope most of your maps state "This map is probably correct in these specific areas, and it may make predictions in other areas but those are less likely to be correct".) Having several contradictory maps can be useful; it teaches you that no map is perfect.

Comment by Snowyowl on Rationality Quotes June 2012 · 2012-06-24T19:47:24.853Z · LW · GW

Or accept that each map is relevant to a different area, and don't try to apply a map to a part of the territory that it wasn't designed for.

And if you frequently need to use areas of the territory which are covered by no maps or where several maps give contradictory results, get better maps.

Comment by Snowyowl on Glenn Beck discusses the Singularity, cites SI researchers · 2012-06-15T09:21:16.345Z · LW · GW

Does it matter? People read Glenn Beck's books; this both raises awareness about the Singularity and makes it a more "mainstream" and popular thing to talk about.

Comment by Snowyowl on Rationality Quotes May 2012 · 2012-05-08T17:52:35.767Z · LW · GW

I think this conversation just jumped one of the sharks that swim in the waters around the island of knowledge.

Comment by Snowyowl on Making Reasoning Obviously Locally Correct · 2011-03-13T15:59:43.872Z · LW · GW

Actually, x=y=0 still catches the same flaw; it just catches another one at the same time.

Comment by Snowyowl on Rationality Quotes: March 2011 · 2011-03-13T11:13:42.552Z · LW · GW

My personal philosophy in a nutshell.

Comment by Snowyowl on Rationality Quotes: March 2011 · 2011-03-13T11:12:51.543Z · LW · GW

Not all of them. Which applies to Old Testament gods too, I guess: the Bible is pretty consistent with that "no killing" thing.

Comment by Snowyowl on Rationality Quotes: March 2011 · 2011-03-13T11:10:17.237Z · LW · GW

Possible corollary: I can change my reality system by moving to another planet.

Comment by Snowyowl on Lifeism, Anti-Deathism, and Some Other Terminal-Values Rambling · 2011-03-08T12:18:08.792Z · LW · GW

How about: Given the chance, would you rather die a natural death, or relive all your life experiences first?

Comment by Snowyowl on Lifeism, Anti-Deathism, and Some Other Terminal-Values Rambling · 2011-03-08T12:12:54.446Z · LW · GW

(1) I'm not hurting other people, only myself

But after the fork, your copy will quickly become another person, won't he? After all, he's being tortured and you're not, and he is probably very angry at you for making this decision. So I guess the question is: If I donate $1 to charity for every hour you get waterboarded, and make provisions to balance out the contributions you would have made as a free person, would you do it?

Comment by Snowyowl on Blues, Greens and abortion · 2011-03-06T15:29:18.514Z · LW · GW

Well, I was at the time I wrote the comment. I wrote it specifically to get LW's opinions on the matter. I am now pro-choice.

Comment by Snowyowl on Blues, Greens and abortion · 2011-03-06T15:25:59.393Z · LW · GW

And doesn't our society consider that children can't make legally binding statements until they're 16 or 18?

Comment by Snowyowl on Blues, Greens and abortion · 2011-03-06T01:51:39.705Z · LW · GW

Oh for crying out loud. Please tell me it's fixed now.

Comment by Snowyowl on Blues, Greens and abortion · 2011-03-05T21:13:46.283Z · LW · GW

I think it's been blown rather out of proportion by political forces, so what you're describing seems very likely.

Comment by Snowyowl on Blues, Greens and abortion · 2011-03-05T21:11:48.083Z · LW · GW

Agreed.

Comment by Snowyowl on Blues, Greens and abortion · 2011-03-05T20:46:00.787Z · LW · GW

I reject treating human life, or preservation of the human life, as a "terminal goal" that outweighs the "intermediate goal" of human freedom.

Hmm... not a viewpoint that I share, but one that I empathise with easily. I approve of freedom because it allows people to make the choices that make them happy, and because choice itself makes them happy. So freedom is valuable to me because it leads to happiness.

I can see where you're coming from though. I suppose we can just accept that our utility functions are different but not contradictory, and move on.

Comment by Snowyowl on Blues, Greens and abortion · 2011-03-05T20:41:51.247Z · LW · GW

And a fetus lacks the sentience which makes humans so important, so killing it, while still undesirable, is less so than the loss of freedom which is the alternative. Thanks! I'm convinced again.

Comment by Snowyowl on Blues, Greens and abortion · 2011-03-05T20:40:04.712Z · LW · GW

I don't think you meant to write "against", I think you probably meant "for" or "in favor of".

Typo, thanks for spotting it.

Also, I'm not entirely sure that Less Wrong wants to be used as a forum for politics.

I posted this on LessWrong instead of anywhere else because you can be trusted to remain unbiased to the best of your ability. I had completely forgotten that part of the wiki though; it's been a while since I actively posted on LW. Thanks for the reminder.

Comment by Snowyowl on Blues, Greens and abortion · 2011-03-05T20:35:17.450Z · LW · GW

I naturally take a stance against abortion. It's easy to see why: a woman's freedom is much more important than another human's right to live

Fixed, thanks.

Comment by Snowyowl on Using the Karma system to call for a show of hands - profitable? · 2011-03-05T18:47:30.053Z · LW · GW

Good point. Since karma is gained by making constructive and insightful posts, any "exploit" that lets one generate a lot of karma in a short time would either be quickly reversed or result in the "karma hoarder" becoming a very helpful member of the community. I think this post is more a warning that you may lose karma from making such polls, though since it's possible to gain or lose hundreds of points by making a post to the main page, this seems irrelevant.

Comment by Snowyowl on Subjective Relativity, Time Dilation and Divergence · 2011-02-11T22:14:06.557Z · LW · GW

Are you suggesting that AIs would get bored of exploring physical space, and just spend their time thinking to themselves? Or is your point that a hyper-accelerated civilisation would be more prone to fragmentation, making different thought patterns likely to emerge, maybe resulting in a war of some sort?

If I got bored of watching a bullet fly across the room, I'd probably just go to sleep for a few milliseconds. No need to waste processor cycles on consciousness when there are NP-complete problems that need solving.

Comment by Snowyowl on How to make your intuitions cost-sensitive · 2011-02-10T16:18:09.478Z · LW · GW

and I think other mathematicians I've met are generally bad with numbers

Let me add another data point to your analysis: I'm a mathematician, and a visual thinker. I'm not particularly "good with numbers", in the sense that if someone says "1000 km" I have to translate that to "the size of France" before I can continue the conversation. Similarly with other units. So I think this technique might work well for me.

I do know my times tables though.

Comment by Snowyowl on Rationality Quotes: February 2011 · 2011-02-05T01:50:00.443Z · LW · GW

Weiner has a blog? My life is even more complete.

Comment by Snowyowl on Rationality Quotes: February 2011 · 2011-02-05T01:48:39.842Z · LW · GW

No, then too.

Comment by Snowyowl on Rationality Quotes: February 2011 · 2011-02-05T01:43:10.561Z · LW · GW

IIRC, he uses this joke several times.

Comment by Snowyowl on Why people reject science · 2011-02-04T22:30:14.052Z · LW · GW

And if you reject science, you conclude that scientists are out to get you. The boot fits; upvoted.

Comment by Snowyowl on On Charities and Linear Utility · 2011-02-04T22:11:49.900Z · LW · GW

Point taken.

Comment by Snowyowl on On Charities and Linear Utility · 2011-02-04T19:26:08.809Z · LW · GW

Yes, but that only poses a problem if a large number of agents make large contributions at the same time. If they make individually large contributions at different times or if they spread their contributions out over a period of time, they will see the utility per dollar change and be able to adjust accordingly. Presumably some sort of equilibrium will eventually emerge.

Anyway, this is probably pretty irrelevant to the real world, though I agree that the math is interesting.

Comment by Snowyowl on Rationality Quotes: February 2011 · 2011-02-02T20:36:52.516Z · LW · GW

In Dirk Gently's universe, a number of everyday events involve hypnotism, time travel, aliens, or some combination thereof. Dirk gets to the right answer by considering those possibilities, but we probably won't.

Comment by Snowyowl on A sealed prediction · 2011-01-28T17:00:09.960Z · LW · GW

I made a prediction with sha1sum 0000000000000000000000000000000000000000. It's the prediction that sha1sum will be broken. I'll only reveal the exact formulation once I know whether it was true or false.

Comment by Snowyowl on Hindsight Devalues Science · 2011-01-27T17:22:50.064Z · LW · GW

Out of curiosity, which time was Yudkowsky actually telling the truth? When he said those five assertions were lies, or when he said the previous sentence was a lie? I don't want to make any guesses yet. This post broke my model; I need to get a new one before I come back.

Comment by Snowyowl on Omega can be replaced by amnesia · 2011-01-26T13:25:25.335Z · LW · GW

Sorry, my mistake. I misread the OP.

Comment by Snowyowl on Omega can be replaced by amnesia · 2011-01-26T13:24:26.216Z · LW · GW

I don't think it's quite the same. The underlying mathematics are the same, but this version side-steps the philosophical and game-theoretical issues with the other (namely, acausal behaviour).

Incidentally: if you take both boxes with probability p each time you enter the room, then your expected gain is p*1000 + (1-p)*1000000. For maximum gain, take p=0; i.e. always take only box B.

EDIT: Assuming money is proportional to utility.

Comment by Snowyowl on Omega can be replaced by amnesia · 2011-01-26T13:15:13.404Z · LW · GW

The first time you enter the room, the boxes are both empty, so you can't ever get more than $1,000,000. But you're otherwise correct.

Comment by Snowyowl on Don't plan for the future · 2011-01-25T08:32:33.852Z · LW · GW

Er... yes. But I don't think it undermines my point that we are unlikely to be assimilated by aliens in the near future.

Comment by Snowyowl on Intrapersonal negotiation · 2011-01-24T14:21:07.413Z · LW · GW

This is a very interesting read. I have, on occasion, been similarly aware of my own subsystems. I didn't like it much; there was a strong impulse to reassert a single "self", and I wouldn't be able to function normally in that state. Moreover, some parts of my psyche belonged to several subsystems at once, which made it apparently impossible to avoid bias (at least for the side that wanted to avoid bias).

In case you're interested, the split took the form of a debate between my atheist leanings, my Christian upbringing, and my rationalist "judge". In decreasing order of how much they were controlled by emotion.

Comment by Snowyowl on Don't plan for the future · 2011-01-24T13:25:16.620Z · LW · GW

we should assume there are already a large number of unfriendly AIs in the universe, and probably in our galaxy; and that they will assimilate us within a few million years.

Let's be Bayesian about this.

Observation: Earth has not been assimilated by UFAIs at any point in the last billion years or so. Otherwise life on Earth would be detectably different.

It is unlikely that there are no (or few) UFAIs in our galaxy or universe; but if they do exist, it is unlikely that they would not already have assimilated us.

I don't have enough information to give exact probabilities, but it's a lot more likely than you seem to think that we will survive the next billion years without assimilation from an alien UFAI.

Personally, I think the most likely scenario is either that Earth is somehow special and intelligent life is rarer than we give it credit for; or that alien UFAIs are generally not interested in interstellar/intergalactic travel.

EDIT: More rigorously, let Uf be the event "Alien UFAIs are a threat to us", and Ap be the event "We exist today" (anthropic principle). The prior probability P(Uf) is large, by your arguments, but P(Ap given Uf) is much smaller than P(Ap given not-Uf). Since we observe Ap to be true, the posterior probability P(Uf given Ap) is fairly small.
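A toy version of that update, with purely illustrative numbers (the specific probabilities below are made up for the sketch, not taken from anywhere):

```python
# Illustrative numbers only; none of these probabilities come from the post.
p_uf = 0.9                 # prior P(Uf): alien UFAIs are a threat to us
p_ap_given_uf = 0.01       # P(Ap | Uf): we'd be unlikely to still be here
p_ap_given_not_uf = 0.99   # P(Ap | not-Uf)

# Bayes' rule: P(Uf | Ap) = P(Ap | Uf) * P(Uf) / P(Ap)
p_ap = p_ap_given_uf * p_uf + p_ap_given_not_uf * (1 - p_uf)
p_uf_given_ap = p_ap_given_uf * p_uf / p_ap

print(p_uf_given_ap)  # ~0.08: a large prior pulled well down by the observation
```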