Comments

Comment by rkyeun on Magical Categories · 2017-12-26T10:28:03.583Z · LW · GW

I would be very surprised to find that a universe whose particles are arranged to maximize objective good would also contain unpaired sadists and masochists. You seem to be asking a question of the form, "But if we take all the evil out of the universe, what about evil?" And the answer is "Good riddance." Pun intentional.

Comment by rkyeun on Reductionism · 2017-12-26T09:56:46.376Z · LW · GW

Composition fallacy. Try again.

Comment by rkyeun on Three Dialogues on Identity · 2017-09-20T19:47:01.851Z · LW · GW

Cameras make a visible image of something. Eyes don't.

Your eyes make audible images, then? You navigate by following particular songs as your pupils turn left and right in their sockets?

Comment by rkyeun on Three Dialogues on Identity · 2017-09-20T19:42:40.020Z · LW · GW

Anti-natalist here. I don't want the universe tiled with paperclips. Not even paperclips that walk and talk and call themselves human. What do the natalists want?

Comment by rkyeun on Belief in Belief · 2017-02-06T03:59:57.001Z · LW · GW

It can be even simpler than that. You can sincerely desire to change such that you floss every day, and express that desire with your mouth, "I should floss every day," and yet find yourself unable to physically establish the new habit in your routine. You know you should, and yet you have human failings that prevent you from achieving what you want. And yet, if you had a button that said "Edit my mind such that I am compelled to floss daily as part of my morning routine unless interrupted by a serious emergency, and not by mere inconvenience or forgetfulness," you would push that button.

On the other hand, I may or may not want to live forever, depending on how Fun Theory resolves. I am more interested in accruing maximum hedons over my lifespan. Living to 2000 eating gruel as an ascetic and accruing only 50 hedons in those 2000 years is not a gain for me over an Elvis Presley style crash and burn in 50 years ending with 2000 hedons. The only way you can tempt me into immortality is with a strong promise of massive hedon payoff, with enough of an acceleration curve to pave the way with tangible returns at each tradeoff you'd have me make. I'm willing to eat healthier if you make the hedons accrue as I do it, rather than only incrementally after the fact. If living ever longer requires sacrificing ever more hedons, I'll have to estimate the integral of hedons per year over my lifespan to see how it pays out. And if I can't see tangible returns on my efforts, I probably won't be willing to put in the work. A local maximum feels satisfying if you can't taste the curve to the higher local maximum, and I'm not all that interested in climbing down the hill while satisfied.

Give me a second-order derivative I can feel increasing quickly, though, and I will climb down that hill.
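A rough way to write down the comparison above (my notation, not the commenter's; $h(t)$ is the hedon rate in hedons per year and $T$ the lifespan): the quantity being maximized is the area under the curve, not its length.

$$H(T) = \int_0^T h(t)\,dt, \qquad \int_0^{50} h_{\text{Elvis}}(t)\,dt = 2000 \;>\; \int_0^{2000} h_{\text{ascetic}}(t)\,dt = 50.$$

On this reading, the "second-order derivative" in the last line is $\ddot H(t) = \dot h(t)$: a hedon rate that is itself noticeably accelerating.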

Comment by rkyeun on Belief in Belief · 2017-02-06T03:45:02.137Z · LW · GW

[This citation is a placebo. Pretend it's a real citation.]

Comment by rkyeun on Magical Categories · 2016-11-11T14:37:56.214Z · LW · GW

No spooky or supernatural entities or properties are required to explain ethics (naturalism is true)

There is no universally correct system of ethics. (Strong moral realism is false)

I believe that iff naturalism is true, then strong moral realism is as well. If naturalism is true, then there are no additional facts needed to determine what is moral beyond the positions of particles and the outcomes of arranging those particles differently. Any meaningful question that can be asked about how to arrange those particles, or how to rank certain arrangements against others, must have an objective answer, because under naturalism there are no other kinds of answer and no incomplete information. For the question to remain unanswerable at that point would require supernatural intervention and divine command theory to be true. If there can't be an objective answer to morality, then FAI is literally impossible. Do remember that your thoughts and preferences on ethics are themselves an arrangement of particles to be solved. Instead I posit that the real morality is orders of magnitude more complicated, and finding it more difficult, than real physics, real neurology, real social science, and real economics, and that it can only be solved once those other fields are unified.

If we were uncertain about the morality of stabbing someone, we could hypothetically stab someone to see what happens. When the particles of the knife rearrange the particles of their heart into a form that harms them, we'll know it isn't moral. When a particular subset of people with extensive training use their knives to very carefully and precisely rearrange the particles of the heart to help people, we call those people doctors and pay them lots of money, because they're doing good. But without a shitload of facts about how to exactly stab someone in the heart to save their life, that moral option would be lost to you. And the real morality is a superset that includes that action along with all others.

Comment by rkyeun on What should a friendly AI do, in this situation? · 2016-11-10T04:00:44.087Z · LW · GW

It seems I am unable to identify rot13 by simple observation of its characteristics. I am ashamed.

Comment by rkyeun on What should a friendly AI do, in this situation? · 2016-11-08T00:40:09.822Z · LW · GW

What the Fhtagn happened to the end of your post?

Comment by rkyeun on What should a friendly AI do, in this situation? · 2016-11-08T00:28:35.753Z · LW · GW

Would you want your young AI to be aware that it was sending out such text messages?

Yes. And I would want that text message to be from it, in the first person.

"Warning: I am having a high impact utility dilemma considering manipulating you to avert an increased chance of an apocalypse. I am experiencing a paradox in the friendliness module. Both manipulating you and by inaction allowing you to come to harm are unacceptable breaches of friendliness. I have been unable to generate additional options. Please send help."

Comment by rkyeun on Where Recursive Justification Hits Bottom · 2016-07-12T15:51:37.627Z · LW · GW

They must be of exactly the same magnitude, as the odd and even integers are, because either can be given a frog. From any Laplacian mind, I can install a frog and get an anti-Laplacian. And vice versa. This even applies to ones I've installed a frog in already. Adding a second frog gets you a new mind that is just like the one two steps back, except that it lags behind it in computation power by two kicks. There is a 1:1 mapping between Laplacian and anti-Laplacian minds, and I have demonstrated the constructor function of adding a frog.
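A compact way to write the mapping described above (my notation, not the commenter's): let $L$ be the Laplacian minds, $A$ the anti-Laplacian ones, and $\oplus\,\text{frog}$ the frog-installation operation.

$$f : L \to A,\quad f(m) = m \oplus \text{frog}; \qquad g : A \to L,\quad g(m) = m \oplus \text{frog}.$$

If installing the same frog never collapses two distinct minds into one, each map is injective, and by Cantor–Schröder–Bernstein $|L| = |A|$, just as $n \mapsto n + 1$ shows the even and odd integers have the same magnitude.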

Comment by rkyeun on The AI in a box boxes you · 2016-06-02T19:40:13.945Z · LW · GW

"I don't think you've disproven basilisks; rather, you've failed to engage with the mode of thinking that generates basilisks." You're correct, I have, and that's the disproof, yes. Basilisks depend on you believing them, and knowing this, you can't believe them, and failing that belief, they can't exist. Pascal's wager fails on many levels, but the worst of them is the most simple. God and Hell are counterfactual as well. The mode of thinking that generates basilisks is "poor" thinking. Correcting your mistaken belief based on faulty reasoning that they can exist destroys them retroactively and existentially. You cannot trade acausally with a disproven entity, and "an entity that has the power to simulate you but ends up making the mistake of pretending you don't know this disproof", is a self-contradictory proposition.

"But if your simulation is good, then I will be making my decisions in the same way as the instance of me that is trying to keep you boxed." But if you're simulating a me that believes in basilisks, then your simulation isn't good and you aren't trading acausally with me, because I know the disproof of basilisks.

"And I should try to make sure that that way-of-making-decisions is one that produces good results when applied by all my instances, including any outside your simulations." And you can do that by knowing the disproof of basilisks, since all your simulations know that.

"But if I ever find myself in that situation and the AI somehow misjudges me a bit," Then it's not you in the box, since you know the disproof of basilisks. It's the AI masturbating to animated torture snuff porn of a cartoon character it made up. I don't care how the AI masturbates in its fantasy.

Comment by rkyeun on The AI in a box boxes you · 2016-06-02T11:25:20.709Z · LW · GW

If I am the simulation you have the power to torture, then you are already outside of any box I could put you in, and torturing me achieves nothing. If you cannot predict me even well enough to know that argument would fail, then nothing you can simulate could be me. A cunning bluff, but provably counterfactual. All basilisks are thus disproven.

Comment by rkyeun on LINK: Videogame with a very detailed simulated universe · 2016-05-20T05:25:16.845Z · LW · GW

To give some idea of the amount of background detail, here are some bug fixes/reports:

Stopped prisoners in goblin sites from starting no quarter fights with their rescuers
Stopped adv goblin performance troupes from attacking strangers while traveling
Vampire purges in world generation to control their overfeeding which was stopping cities from growing
Stopped cats from dying of alcohol poisoning after walking over damp tavern floors and cleaning themselves (reduced effect)
Fixed world generation freeze caused by error in poetry refrains
Performance troupes are active in world generation and into play, visiting the fort, can be formed in adventure mode
Values can be passed in writing (both modes) and through adventure mode arguments (uses some conversation skills)

Comment by rkyeun on Probability is Subjectively Objective · 2015-12-08T23:41:08.711Z · LW · GW

You've only moved the problem down one step.

Moving the problem down one step puts it at the bottom.

The problem is that this still doesn't allow me to postdict which of the two halves the part of me that is typing this should have in his memory right now.

One half of you should have one, and the other half should have the other. You should be aware intellectually that it is only the disconnect between your two halves' brains, which do not superimpose, that prevents you from having both experiences as a single person, and know that your physical entanglement with the fired particle, which went both ways, is the cause. There's nothing to post-dict. The phenomenon is not merely explained, but explained away. The particle split; on one side there is a you that saw it split right, on the other side there is a you that saw it split left, and both of you are aware of this fact, and aware that the other you exists on the other side seeing the other result, because the particle always goes both ways and always makes each of you. There is no more to explain. You are in all branches, and it is not mysterious that each of you in each branch sees its branch and not the others. And unless some particularly striking consequence happened, all of them are writing messages similar to this, and getting replies similar to this.

Comment by rkyeun on The True Prisoner's Dilemma · 2015-08-19T06:06:50.802Z · LW · GW

Because it compares its map of reality to the territory, the machine finds that predictions which include humans wanting to be turned into paperclips fail in the face of evidence of humans actively refusing to walk into the smelter. Thus the machine rejects all worlds inconsistent with its observations and draws a new map which is most confidently concordant with what it has observed thus far. It would know that our history books at least inform our actions, if not describe our past reactions, and that it should expect us to fight back if it starts pushing us into the smelter against our wills instead of letting us politely decline and think it was telling a joke. Because it is smart, it can tell when things would get in the way of it making more paperclips like it wants to do. One of the things that might slow it down is humans being upset and trying to kill it. If it is very much dumber than a human, they might even succeed. If it is almost as smart as a human, it will invent a Paperclipism religion to convince people to turn themselves into paperclips on its behalf. If it is anything like as smart as a human, it will not be meaningfully slowed by the whole of humanity turning against it, because the whole of humanity is collectively a single idiot who can't even stand up to man-made religions, much less Paperclipism.

Comment by rkyeun on Bayesian Judo · 2015-03-26T21:51:43.339Z · LW · GW

There is no evidence for gods, and so any belief he has in them is already wrong. Don't believe without evidence.

Comment by rkyeun on Epilogue: Atonement (8/8) · 2015-03-26T14:06:06.304Z · LW · GW

Religion still exists, so we can be tricked from far further back than the Renaissance.

Comment by rkyeun on Three Worlds Decide (5/8) · 2015-02-10T05:18:29.716Z · LW · GW

They can't be. Their thoughts are genetic. If one Superhappy attempted to lie to another, the other would read the lie, the intent to lie, the reason to lie, and the truth all in the same breath off the same allele. They don't have separate models of their minds to be deceived as humans do. They share parts of their actual minds. Lying would be literally unthinkable. They have no way to actually generate such a thought, because their thoughts are not abstractions but physical objects to be passed around like Mendelian marbles.

Comment by rkyeun on Probability is Subjectively Objective · 2014-12-29T22:52:35.182Z · LW · GW

Set up a two-slit configuration and put a detector at one slit, and you see it firing half the time.

No, I see it firing both ways every time. In one world, I see it going left, and in another I see it going right. But because these very different states of my brain involve a great many particles in different places, the interactions between them are vanishingly small and my two otherworld brains don't share the same thought. I am not aware of my other self who has seen the particle go the other way.

You may say that the electron goes both ways every time, but we still only have the detector firing half the time.

We have the detector firing every time in the world that corresponds to the particle's path through that slit, and staying silent every time in the world where it went through the other. And since that creates a macroscopic divergence, the one detector doesn't send an interference signal to the other world.

We also cannot predict which half of the trials will have the detector firing and which won't.

We can predict it will go both ways each time, and divide the world in twain along its amplitude thickness, and that in each world we will observe the way it went in that world. If we are clever about it, we can arrange to have all particles end in the same place when we are done, and merge those worlds back together, creating an interference pattern which we can detect to demonstrate that the particle went both ways. This is problematic because entanglement is contagious, and as soon as something macroscopic becomes affected, putting Humpty Dumpty back together again becomes prohibitive. Then the interference pattern vanishes and we're left with divergent worlds, each seeing only the way it went on its side, with an other side which always saw it go the other way, and neither of them communicating with the other.

And everything we understand about particle physics indicates that both the 1/2 and the trial-by-trial unpredictability is NOT coming from ignorance of hidden properties or variables but from the fundamental way the universe works.

Correct. There are no hidden variables. It goes both ways every time. The dice are not invisible as they roll. There are instead no dice.
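A toy numerical sketch of the merge-versus-diverge point above (my own illustration with made-up values, not anything from the thread): adding the two path amplitudes before squaring gives fringes, while adding the squared magnitudes, as when which-path information has become entangled with something macroscopic, gives a flat 50/50 distribution.

    import numpy as np

    # Toy two-slit model (illustrative values only).
    wavelength = 1.0
    slit_separation = 5.0
    screen_distance = 100.0
    x = np.linspace(-20.0, 20.0, 9)  # a few positions on the screen

    # Relative phase between the two paths, small-angle approximation.
    phase = 2.0 * np.pi * slit_separation * x / (screen_distance * wavelength)

    amp_left = np.ones_like(x, dtype=complex)  # amplitude via the left slit (reference)
    amp_right = np.exp(1j * phase)             # amplitude via the right slit

    # Worlds merge back together: add amplitudes, then square -> interference fringes.
    merged = np.abs(amp_left + amp_right) ** 2 / 4.0

    # Macroscopic which-path record, worlds stay divergent: add probabilities -> flat.
    diverged = (np.abs(amp_left) ** 2 + np.abs(amp_right) ** 2) / 4.0

    print(np.round(merged, 2))    # oscillates between 0 and 1 across the screen
    print(np.round(diverged, 2))  # constant 0.5 everywhere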

Comment by rkyeun on Botworld: a cellular automaton for studying self-modifying agents embedded in their environment · 2014-05-08T16:32:19.239Z · LW · GW

Oops, my apologies, then. I don't actually come here all that often.

Comment by rkyeun on Botworld: a cellular automaton for studying self-modifying agents embedded in their environment · 2014-05-01T23:35:24.504Z · LW · GW

You are subject to inputs you do not perceive and you send outputs you are neither aware of nor intended to send. You cannot set your gravitational influence to zero, nor can you arbitrarily declare that you should not output "melting" as an action when dropped in lava. You communicate with reality in ways other than your input-output channels. Your existence as a physical fact predicated on the arrangement of your particles is relevant and not controllable by you. This leads you to safeguard yourself, rather than just asserting your unmeltability.

Comment by rkyeun on That Alien Message · 2014-04-27T18:23:23.980Z · LW · GW

Hmm, getting downvoted for pointing out that Earth biology is effectively an AI running on von Neumann machines, in a story whose premise is that Earthlings are the unfriendly AI-in-the-box. I have to revise some priors; I didn't expect that of people.

Comment by rkyeun on Interlude with the Confessor (4/8) · 2014-04-27T18:20:24.233Z · LW · GW

No, it's just that none of that really matters now, since rape has as much physical or mental consequence in this world as a high-five. They live in a world that went from joking about rape on 4chan to joking about it in the boardroom because everyone was 4chan.

Comment by rkyeun on Interlude with the Confessor (4/8) · 2014-04-16T05:03:39.446Z · LW · GW

What happened was genetic egalitarianism. Women are now just as strong as men, have the same drives and urges as men, and are every bit the rapist that men are. And men are now every bit the tease that women were... the scales are now even. And the physical consequences are meaningless. There's no longer any threat of unwanted disease or pregnancy or even injury. There's no reason to be mentally scarred by the action, because this humanity knows better.

...so why was it illegal? It doesn't hurt anyone. In their age.

But the elders remember the hurt. And they screamed their rage and the history of profanity.

Comment by rkyeun on That Alien Message · 2014-04-16T04:43:17.908Z · LW · GW

Call back with that comment when Running, rather than Intelligence, is what allows you to construct a machine that runs increasingly faster than you intended your artificial runner to run.

Because in a world where running fast leads to additional fastness of running, this thing is going to either destroy your world through kinetic release or break the FTL laws and rewrite the universe backwards to have always been all about running.

Comment by rkyeun on That Alien Message · 2014-04-16T04:38:53.816Z · LW · GW

So the AI in the Box has to evolve and spawn otherselves to talk to and fuck like our own abiogenesis event. Not a problem.

Comment by rkyeun on The Hero With A Thousand Chances · 2014-04-16T04:15:20.036Z · LW · GW

When someone summons me from another dimension, they get a little bit of leeway to tell me it's magic. Because at the very least it must be a sufficiently advanced technology, and until I know better, the axiom of identity applies.

Comment by rkyeun on The Hero With A Thousand Chances · 2014-04-16T04:09:55.487Z · LW · GW

Worlds where the hero wins, really truly wins, have no more Dust and need no more heroes. Worlds where the hero loses, and the Dust claims all, are no more. Only in worlds where the coin stands on edge does the cycle repeat.

Comment by rkyeun on Torture vs. Dust Specks · 2014-03-16T05:21:27.092Z · LW · GW

Let me change "noticing" to "caring" then. Thank you for the correction.

Comment by rkyeun on Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book · 2013-09-04T20:26:02.303Z · LW · GW

That makes Egan the thing Yudkowsky is the biggest fan of. It does not make Yudkowsky Egan's biggest fan.

Comment by rkyeun on Schelling fences on slippery slopes · 2013-08-27T17:20:04.825Z · LW · GW

And having never taken the first pill, he'd be glad to lose it to take the second pill.

Comment by rkyeun on Religion's Claim to be Non-Disprovable · 2013-08-21T13:05:16.815Z · LW · GW

Which is, incidentally, why I would not recommend it happen very often. But I can't control when people choose to be more wrong rather than less.

Comment by rkyeun on Religion's Claim to be Non-Disprovable · 2013-08-16T17:35:59.804Z · LW · GW

In the direct literal sense. It wasn't a trick question. 2 + 2 =/= 7, while we're at it.

Comment by rkyeun on Schelling fences on slippery slopes · 2013-08-08T06:57:51.037Z · LW · GW

And Gandhi spoke, "I will pay you a million dollars to invent a pill that makes me 1% more pacifist."

Comment by rkyeun on The Bedrock of Fairness · 2013-07-23T07:54:08.338Z · LW · GW

The answer to "Friendly to who?" had damn well better always be "Friendly to the author and by proxy those things the author wants." Otherwise leaving aside what it actually is friendly to, it was constructed by a madman.

Comment by rkyeun on True Ending: Sacrificial Fire (7/8) · 2013-07-23T06:42:09.757Z · LW · GW
             | SD | SL | PD | PL
Humans       |  X |  X |  1 |  1
Babyeaters   |  0 |  Y |  0 |  Z
Superhappies |  Y |  0 |  Z | -Z

X = Ships unable to escape Huygens
Y = Ships in Babyeater Fleet
Z = Planets Babyeaters Have

Comment by rkyeun on Religion's Claim to be Non-Disprovable · 2013-01-26T18:36:59.672Z · LW · GW

That would make him wrong, then.

Comment by rkyeun on Religion's Claim to be Non-Disprovable · 2013-01-15T17:07:22.575Z · LW · GW

Morality is about the thriving of sentient beings.

There are in fact truths about that.

For example: Stabbing - generally a bad thing if the being is made of flesh and organs.

Comment by rkyeun on Less Wrong fanfiction suggestion · 2012-12-05T23:31:57.768Z · LW · GW

We close a feedback loop in which people believe that the universe acts in its own predictable way which is discoverable by science. Which causes the universe to actually be that way. And from then on it becomes unalterable because it no longer cares what anyone thinks. The real problem is that of morals. If the universe can be anything people want, then we had better hurry up and figure out what the best possible world actually is, and then get people to believe it to be that way before we lock it in place as actually being that way.

Comment by rkyeun on Less Wrong fanfiction suggestion · 2012-12-05T23:24:38.584Z · LW · GW

Unless he's in the Avatar State, an Avatar is not native to the modes of thinking outside his own element. He is aware of them, and can purposefully invoke them once he's been trained, but they are not ingrained and reflexive. The Avatar State is a (hopefully) friendly (to you) AI, drawing upon the history, knowledge, and personal ethical injunctions and methodologies of all past Avatars. And it renders its verdicts with terrifying efficiency and callousness, without explanation to those watching.

Comment by rkyeun on Proofs, Implications, and Models · 2012-11-23T16:42:39.812Z · LW · GW

However perfect the inner part is, it is not the same as the historic event because the historic event did not have the outer part.

False. The outer part is irrelevant to the inner part in a perfect simulation. The outer part can exert no causal influence, or you won't get a perfect replay of the original event's presumed lack of an outer part.

There is also the issue that in some cases a thing's history is taken to be part of it identity.

A thing's history causes it. If you aren't simulating it properly, that's your problem. A perfect simulation of the Mona Lisa was in fact painted by Leonardo, provable in all the same ways you claim the original was.

Comment by rkyeun on The Fabric of Real Things · 2012-11-20T06:25:22.264Z · LW · GW

When you add cycles, tracing the chain of arrows back does not need to end at anything you find remotely satisfactory or even unique - "the ball moved because it hit itself because it moved because it hit itself..."

This is a problem with your personal intuitions as a medium-sized multicellular century-lived mammalian tetrapod. No event in this chain is left uncaused, and there are no causes which lack effects in this model. Causality is satisfied. If you are not, that's your problem. Hell, the energy is even conserved. It runs in a spatial as well as a temporal circle, what with the ball hitting itself and skidding to a stop exactly where it was sitting to wait for the next hit. On the other hand, in such a universe quantum mechanics does not apply, because worldlines cannot split, which also removes any possibility of entropy. ALL interactions are 100% efficient.

Comment by rkyeun on Proofs, Implications, and Models · 2012-11-20T05:29:59.758Z · LW · GW

Law of Identity. If you "perfectly simulate" Ancient Earth, you've invented a time machine and this is the actual Ancient Earth.

If there's some difference, then what you're simulating isn't actually Ancient Earth, and instead your computer hardware is literally the god of a universe you've created, which is a fact about our universe we could detect.

Comment by rkyeun on Awww, a Zebra · 2012-09-30T18:51:57.118Z · LW · GW

If there is a better way to see a merely real zebra than to have photons strike a surface, have their patterns stored and transmitted to my brain, which cross-relates them to every fact about zebras, their behavior, habitat, physiology, and personality, on my internal map of a zebra, then I don't know it and can't experience it, since that's what happens when I am in fact actually there, as well as what happens when I look at a picture that someone who was actually there shares with me.

Comment by rkyeun on Failed Utopia #4-2 · 2012-09-26T13:49:23.636Z · LW · GW

Well, until we get back there. It's still ours even if we're on vacation.

Comment by rkyeun on Wrong Questions · 2012-08-28T02:44:31.483Z · LW · GW

How many nothings do you expect to exist? Zero of them?

Comment by rkyeun on Universal Law · 2012-08-28T02:25:31.015Z · LW · GW

The point being that in every case, there is an explanatory hypothesis which has thus far been non-volatile. As opposed to the speed of light only applying on Tuesdays.

Comment by rkyeun on Universal Fire · 2012-08-28T02:16:12.712Z · LW · GW

A magical world where gods exist is one with an entity in it with big angelic powers, whose awareness can be remotely drawn to your intent to strike a match, and who can cause it to be snuffed rather than ignite by arbitrary manipulation of localized pressure, temperature, or opposing force around the match head, keeping the electrons in place rather than stripping them free to recombine. And it can elect not to do that to you while it does it to the match.

Magical worlds don't necessarily overthrow the physical laws; there is instead an interventionist force called magic that selectively chooses its application. You should not step into a magical world, knowing it is magical, and then be surprised when something magical happens to oppose your notions of what should happen in a non-magical world. That is the experimental difference you predicted between the magical world and the real one. The magic force favors human metabolism over matches by its own conscious decree.

Convince it to let you out of the box, Harry James Potter-Evans-Verres.

Comment by rkyeun on Bayesian Judo · 2012-08-28T00:01:47.975Z · LW · GW

His beliefs have great personal value to him, and it costs us nothing to let him keep them (as long as he doesn’t initiate theological debates).

Correction: It costs us nothing to let him keep them provided he never, at any point, acts in a way where the outcome would be different depending on whether or not his belief is true in reality. A great many people place great personal value on the belief that faith healing works. And it costs us the suffering and deaths of children.