Comments

Comment by aleksiL on Open Thread, May 25 - May 31, 2015 · 2015-06-01T14:17:04.879Z · LW · GW

You worry about that all-important status when you fear losing it.

Want to win? Then focus on winning, not on not-losing. You need to if you want to be seen as high-status, anyway. Fear of loss is low-status, so is worrying about what others think.

Navigate the minefield, sure. But do it from a position of strength, not of weakness.

Comment by aleksiL on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T19:44:22.882Z · LW · GW

Harry pulled the trigger. Bang or click?

What happens if you AK someone keyed to the horcrux 2.0 network?

Prediction: If Hermione is AK'd, her soul will be shunted to the network. There will be no death burst and Voldemort's horcruxing attempt will fail. Then things get interesting.

Comment by aleksiL on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-20T18:58:11.519Z · LW · GW

Correct me if I'm wrong, but there seem to be two separate challenges on the Potions room parchment: a simple one consistent with canon and the skills and abilities of the target audience, and a complex one requiring an hour or so of careful and precise work. Looks like Harry and Quirrelmort focus exclusively on the long formula, ignoring the puzzle.

On rereading the relevant part of Ch. 107, it appears that Harry has an idea he doesn't want to share shortly after the broomstick conversation. On a close reading, he manages to avoid the topic: first he evades a request to answer a question in Parseltongue by talking about Snape, then veers further off topic with Dementors.

So did Harry manage to pull a fast one? Are the Effulgence instructions forged? If so, by whom? Is the duration of one hour significant for time-turning? What did I miss?

Comment by aleksiL on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-07-04T12:39:54.254Z · LW · GW

Hmm. How about having someone else die in Hermione's place?

I don't recall offhand if the death burst was recognizable as Hermione, but otherwise it seems doable. Dumbledore said he felt a student die and only realized it was Hermione once he saw her.

You'd need Polyjuice for the visual appearance, and either Hermione's presence or a fake Patronus for past-Harry to follow. Hermione is unlikely to go along with the plan willingly, so she'd need to be tricked or incapacitated. Hard to tell which would be easier.

Given the last words, Hermione's doppelganger might need to be complicit in the plan. Easy to accomplish if it were Harry, but I think he's too utilitarian for that. He'd need someone loyal but expendable. Lesath would seem to fit the bill, but I wonder if he'd agree to literally die on Harry's command.

Comment by aleksiL on Open Thread, June 16-30, 2013 · 2013-06-27T11:35:11.407Z · LW · GW

Lesswrongers are surprised by this? It appears figuring out metabolism and nutrition is harder than I thought.

I believe that obesity is a problem of metabolic regulation, not overeating, and this result seems to support my belief. Restricting calories to regulate your weight is akin to opening the fridge door to regulate its temperature. It might work for a while, but in the long run you'll end up breaking both your fridge and your budget. Far better to figure out how to adjust the thermostat.

Some of the things that upregulate your fat set point are a history of starvation (that's why calorie restriction is bad in the long run), toxins in your food, sugars (especially fructose - that stuff is toxic) and grains. Wheat is particularly bad - it can seriously screw with your gut and is addictive to boot.

Comment by aleksiL on Open Thread, June 16-30, 2013 · 2013-06-27T10:54:53.865Z · LW · GW

I'm pretty sure "trying to eat less" is exactly the wrong thing to do. Calorie restriction just triggers the starvation response, which makes things worse in the long run.

Change what you eat, not how much.

Comment by aleksiL on Open Thread, April 15-30, 2013 · 2013-04-20T16:51:54.687Z · LW · GW

You have it backwards. The bet you need to look at is the risk you're insuring against, not the insurance transaction.

Every day you're betting that your house won't burn down today. You're very likely to win, but you're not making much of a profit when you do. What fraction of your bankroll is your house worth, how likely is it to survive the day, and how much will you make when it does? That's what you need to apply the Kelly criterion to.
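For concreteness, here's a minimal Python sketch of that framing; every number in it is a hypothetical placeholder, not actuarial data:

```python
def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Kelly criterion f* = p - q/b: the optimal fraction of your bankroll
    to stake on a bet won with probability p_win, paying net_odds per unit
    staked."""
    return p_win - (1.0 - p_win) / net_odds

# Hypothetical numbers, purely for illustration.
house_value = 300_000   # the stake: what you lose if the house burns
daily_premium = 5.0     # the "win": the premium saved by going uninsured
p_survive = 1 - 1e-5    # assumed daily probability the house survives

b = daily_premium / house_value       # net odds on the uninsured bet
f_star = kelly_fraction(p_survive, b)
print(f"Max bankroll fraction to leave uninsured: {f_star:.0%}")  # ~40%
```

If the house is a bigger share of your net worth than f*, the daily uninsured bet is oversized and buying the insurance wins. Note that with an actuarially fair premium f* drops to roughly zero; the insurer's loading is what creates any room to self-insure at all.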

Comment by aleksiL on Personal Evidence - Superstitions as Rational Beliefs · 2013-03-23T13:04:07.933Z · LW · GW

Have you checked the house for mold? The night terrors seem pretty well explained by mycotoxins, and the odds of the other weirdness also go up if something is screwing with your biochemistry.

Comment by aleksiL on Arguments against the Orthogonality Thesis · 2013-03-10T08:03:45.931Z · LW · GW

Imagine that two exact replicas of a person exist in different locations, exactly the same except for an antagonism in one of their values. Both could not be correct at the same time about that value.

The two can't be perfectly identical if they disagree. You have to additionally assume that the discrepancy is in the parts that reason about their values instead of the values themselves for the conclusion to hold.

Comment by aleksiL on Rationality Quotes February 2013 · 2013-02-04T10:51:44.780Z · LW · GW

Want to be like or appear to be like? I'm not convinced people can be relied on to make the distinction, much less choose the "correct" one.

Comment by aleksiL on Rationality Quotes February 2013 · 2013-02-03T14:23:57.256Z · LW · GW

Spoilers matter less than you think.

Comment by aleksiL on Rationality Quotes February 2013 · 2013-02-03T14:16:21.742Z · LW · GW

How would this encourage them to actually value logic and evidence instead of just appearing to do so?

Comment by aleksiL on Open Thread, January 16-31, 2013 · 2013-01-19T16:37:59.732Z · LW · GW

Do you think continuous spatial + temporal dimensions have problems continuous spatial dimensions lack? If so, what and why?

Comment by aleksiL on Morality is Awesome · 2013-01-06T09:09:18.250Z · LW · GW

Wouldn't the failure to acknowledge all the excitement nuclear war would cause be an example of the horns effect?

I immediately answered no and rated everyone who said yes as completely undateable

I can understand answering no for emotional or political reasons, but rating the epistemically correct answer as undateable? That's... a good reason for me to answer such questions honestly, actually.

Comment by aleksiL on Morality is Awesome · 2013-01-05T14:53:23.476Z · LW · GW

Given you have enemies you hate deeply enough? Yes.

Having such enemies in the first place? Definitely not.

Comment by aleksiL on Intelligence explosion in organizations, or why I'm not worried about the singularity · 2012-12-27T20:54:53.593Z · LW · GW

I think that you are underestimating the efficiency of intersystem communication in a world where a lot of organizational communication is handled through information technology.

Speech and reading seem to carry at most 60 bits per second. A single neuron is faster than that.

Compare that to the human brain: the optic nerve transmits 10 million bits per second, and I'd expect interconnections between brain areas to generally fall within a few orders of magnitude of that.

I'd call five orders of magnitude a serious bottleneck and don't really see how it could be significantly improved without cutting humans out of the loop. That's what your data mining example does, but it's only as good as the algorithms behind it. And when those approach human level we get AI.
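The back-of-the-envelope arithmetic behind "five orders of magnitude", using the figures above:

```python
import math

speech_bps = 60               # upper-bound estimate for speech/reading
optic_nerve_bps = 10_000_000  # optic nerve estimate from above

ratio = optic_nerve_bps / speech_bps
print(f"ratio ~{ratio:,.0f}, about {math.log10(ratio):.1f} orders of magnitude")
# ratio ~166,667, about 5.2 orders of magnitude
```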

I don't understand your point about specialization. Can you elaborate?

Individual humans have ridiculous amounts of overlap in skills and abilities. Basic levels of housekeeping, social skills etc. are pretty much assumed. A lot of that is necessary given our social instincts and organizational structures: a savant may outperform anyone in a specific field, but good luck integrating them into an organization.

I'm not sure how much specialization can be improved with baseline humans, but relaxing the constraint that everyone should be able to function independently in the wider society might help. Also, focused training from a young age could be useful in creating genius-level specialists, but that takes time.

Also, I don't understand what the difference between a 'superintelligence' and a 'sped-up human' would be that would be pertinent to the argument.

Given a large enough speedup and indefinite lifespan, pretty much none. The analogy may have been poorly chosen.

Comment by aleksiL on Intelligence explosion in organizations, or why I'm not worried about the singularity · 2012-12-27T09:48:30.546Z · LW · GW

Do humans have goals in this sense? Our subsystems seem to conflict often enough.

Comment by aleksiL on Intelligence explosion in organizations, or why I'm not worried about the singularity · 2012-12-27T09:40:46.066Z · LW · GW

An organization could be viewed as a type of mind with an extremely redundant modular structure. Human minds contain a large number of interconnected specialized subsystems; in an organization, humans would be the subsystems. Comparing the two seems illuminating.

Individual subsystems of organizations are much more powerful and independent, making them very effective at scaling and multitasking. This is of limited value, though: it mostly just means organizations can complete parallelizable tasks faster.

Intersystem communication is horrendously inefficient in organizations: bandwidth is limited to speech/typing and latency can be hours. There are tradeoffs here: military and emergency response organizations cut the latency down to seconds, but that limits the types of tasks the subsystems can effectively perform. Humans suck at multitasking and handling interruptions. Communication patterns and quality are more malleable, though. Organizations like Apple and Google have had some success in creating environments that leverage human social tendencies to improve on-task communication.

Specialization seems like a big one. Most humans are to some degree interchangeable: what one can do, most others can do less effectively, or at least learn given time. There are ways to improve individual specialization, but barring radical cultural or technological change, we're pretty much stuck on that front.

Mostly organizations seem limited by the competence of their individual members. They do more, not better. Specialization and communication seem to be the limiting factors and I'm not sure if they can make enough of a difference even in theory to qualify as a superintelligence, except in the sense a sped-up human would.

Thoughts?

Comment by aleksiL on Open Thread, December 1-15, 2012 · 2012-12-01T17:41:52.474Z · LW · GW

I haven't seen one example of a precise definition of what constitutes an "observation" that's supposed to collapse the wavefunction in the Copenhagen interpretation. Decoherence, OTOH, seems to perfectly describe the observed effects, including the consistency of macro-scale history.

This in my opinion proves that memory sticks with the branch my consciousness is in.

Actually it just proves that memory sticks with the branch it's consistent with. For all we know, our consciousnesses are flitting from branch to branch all the time and we just don't remember because the memories stay put.

We may say we want to predict "what will happen," but I believe by this we mean "what I will see happen."

Yeah, settling these kinds of questions would be much easier if we weren't limited to the data that manages to reach our senses.

In MWI the definition of "I" is not quite straightforward: the constant branching of the wavefunction creates multiple versions of everyone inside, creating indexical uncertainty which we experience as randomness.

Comment by aleksiL on Causal Universes · 2012-11-28T11:20:04.885Z · LW · GW

Of course, this has its own moral dilemmas as well - such as the fact that you're as good as dead for your loved ones in the timeline that you just left - but generally smaller than erasing a universe entirely.

You could get around this by forking the time traveler with the universe: in the source universe it would simply appear that the attempted time travel didn't work.

That would create a new problem, though: you'd never see anyone leave a timeline, but every attempt would result in the creation of a new one with a copy of the traveler added at the destination time. A persistent traveler could generate any number of timelines differing only by the number of failed time travel attempts made before the successful one.

Comment by aleksiL on Checklist of Rationality Habits · 2012-11-18T12:04:21.303Z · LW · GW

Interesting, I've occasionally experimented with something similar but never thought of contacting Autopilot this way. Yeah, that's what I'll call him.

I get the feeling that this might be useful in breaking out of some of my procrastination patterns: just call Autopilot and tell him which routine to start. Not tested yet, as then I'd forget about writing this reply.

Comment by aleksiL on Problematic Problems for TDT · 2012-05-23T08:11:35.921Z · LW · GW

Agree. You use process X to determine the setup and agents instantiating X are going to be constrained. Any decision theory would be at a disadvantage when singled out like this.

Comment by aleksiL on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-05-14T23:36:52.146Z · LW · GW

I get the feeling that if Harry learns the Killing Curse he'll manage to tweak it somehow, on the order of Patronus 2.0 or partial Transfiguration.

I arrived at this idea by intuition - it seems to fit, but I don't think there's much explicit support. AFAICT I'm mostly pattern-matching on story logic, AK's plot significance and symmetry with Patronus, and Harry's talent for breaking things by thinking at them.

I think my probability estimate for this (given that Harry learns AK in the first place) is around 30%, but I suspect I'm poorly calibrated.

Comment by aleksiL on Vipassana Meditation Open Thread · 2012-05-08T10:52:09.716Z · LW · GW

I've been meditating for about two weeks now, and been progressing surprisingly quickly. Concentration came easy, and I started having interesting experiences pretty much straight away. I'd like to share my latest sitting and hopefully get some input.

I sat cross-legged on my couch and started concentrating on my breath as usual. Soon there was a discontinuity: my concentration lapsed, my attention felt fully absorbed in a nonsensical, dreamlike thought for just a second, and suddenly I was in a clearer, lighter, easier state. It's happened similarly several times before, but I've never been this fully aware of it.

I continued meditating normally for a while; the clear state tends to be pretty stable once I enter it. Then I tried to let go of my awareness/concentration (never tried that before) and things got interesting. I was still aware, possibly still centered on my breath, but everything started to change. It felt as if my body was shifting, twisting, turning. I knew I hadn't moved but my body position seemed contorted, even impossible, as if different body parts were turning different ways, almost as if I was swirling. There was a sense of facing in two directions, alternating rapidly, maybe 30 degrees apart.

The rest is harder to describe. I was aware, but there was a sense that the awareness would've been unusually hard to locate. I'm not sure if I tried, though. After a while it seemed attenuated, somehow. There was a sense of not wanting to let go, not sure of what exactly. Possibly myself, or awareness of what was happening. Overall there was a sense of... something. If "something" is even a right word for it. Over or in-between everything else that was going on.

After a while things started settling down a bit and I felt tired, so I ended the session. It had lasted about 20 minutes.

Thoughts?

Comment by aleksiL on A Novice Buddhist's Humble Experiences · 2012-05-05T20:39:04.216Z · LW · GW

My biggest problem when meditating is that when I focus on my breath, I switch to breathing consciously[...]

I've started to suspect that this difficulty is actually a feature. Observing without interfering seems like an important skill to learn if the goal is to be more aware of your thoughts and actions in general.

Imagine, say, being consciously aware of every detail of your leg movements while walking; it becomes a lot more difficult if you don't know how to stay out of your own way.

Comment by aleksiL on Be Happier · 2012-05-03T15:51:42.997Z · LW · GW

It seems to me as if you view terminal goals as universal, not mind-specific. Is this correct, or have I misunderstood?

The point, as I understand it, is that some humans seem to have happiness as a terminal goal. If you truly do not share this goal, then there is nothing left to explain. Value is in the mind, not inherent in the object it is evaluating. If one person values a thing for its own sake but another does not, this is a fact about their minds, not a disagreement about the properties of the thing.

Was this helpful?

Comment by aleksiL on [deleted post] 2012-04-29T07:58:21.266Z

Another data point: I wasn't too bothered with the general sales-pitchiness of the first two posts, possibly because I've occasionally gained useful knowledge by reading actual sales pitches from the self-help crowd.

That said, you had me hooked by the third paragraph of Part I and I've been going "get to the POINT already" since then. I do see some value in personal testimony, but it should be far more condensed.

Comment by aleksiL on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-18T16:34:27.129Z · LW · GW

You seem to currently have exactly one downvoted comment outside the HPMOR discussion and that at only -1. What makes you think the effects you see aren't simply a result of people actively participating in these threads noticing and responding to comments they deem poorly supported? No following around required.

As for the downvotes, I suspect an overwhelming majority of them result from your adversarial reactions to criticism, not the HPMOR content. How many downvotes had this received before you added this edit?

What the hell with the random neg reps, seriously. This site actually has worse and stronger and more irrational groupthinking than other sites I visit. This is bizarre and unhealthy, I think I might not comment on here anymore, although I'm not really sure yet because the quality of the actual posts is much better although the comments are worse.

Edited per thomblake's suggestion.

Comment by aleksiL on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-18T07:58:57.014Z · LW · GW

Here's a vote for not-mind-reading. This seems deliberately written to suggest Quirrell's reacting to body language, not thought:

Without any conscious decision, she shifted her weight to the other foot, her body moving away from the Defense Professor -

"So you think I am the one responsible?" said Professor Quirrell.

Comment by aleksiL on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-23T08:04:12.888Z · LW · GW

What's the in-story justification for the dementor's presence anyway? I thought it seemed awfully convenient in case Harry decided to demonstrate his Patronus 2.0 but I couldn't figure out how it'd help enough.

I'd forgotten about the potential for ruining others' patronuses, though. That makes a lot more sense, especially considering he'd just reached into his dark side - possibly deeper than he'd ever willingly done before.

My guess: it wouldn't be enough at this point to just demonstrate a superior patronus or tell people about the possibility of ruining it for others. He tells the secret to EVERYONE present, leaving them at his mercy for protection. That gives him plenty of bargaining power and is dramatically Dark to boot. The political implications would be rather interesting, whether the Patroni could be returned by Obliviation or not.

Comment by aleksiL on Free Will as Unsolvability by Rivals · 2011-03-30T13:16:10.323Z · LW · GW

Both of those seem to fit the pattern perfectly when you consider evolution as an actor.

Maybe we should be discussing optimization power instead of intelligence; evolution seems a pretty decent manipulator considering how stupid it is.

Comment by aleksiL on How I Lost 100 Pounds Using TDT · 2011-03-14T17:55:48.578Z · LW · GW

An interesting post. I immediately thought of asking "What habits would I adopt if the long-term effects were in full force immediately?"

I think I have some thinking to do.

Edit: typo.

Comment by aleksiL on Harry Potter and the Methods of Rationality discussion thread, part 5 · 2010-11-03T10:11:29.556Z · LW · GW

(ch56)

Has the nature of Harry's mysterious dark side been established yet? If not, the latest chapter gives a strong hint toward it being a shard of Voldemort.

In chapter 56, Harry discovers that his vulnerability to Dementors is due to his dark side's fear of death. And, back in chapter 39, in the discussion between Harry and Dumbledore it was suggested that Voldemort was motivated by fear of death. Not quite proof, but interesting nonetheless.

Comment by aleksiL on Rationality quotes: September 2010 · 2010-09-17T07:45:57.152Z · LW · GW

That was beautiful. And funny. I don't think I've ever laughed and cried simultaneously before. Not at the same thing anyway.

Just... wow.

Comment by aleksiL on This is your brain on ambiguity · 2010-06-02T09:46:19.947Z · LW · GW

Where do people get this "no depth cues" claim? The way her lifted leg moves suggests clockwise motion so obviously that even Alicorn's link can't make my brain see the counterclockwise motion for longer than a round or two at most.

I mean, the only way the perspective makes any sense is if the lifted leg is furthest away when it's the highest up in the 2d image. Yes, the shadow/reflection is all wrong but for some reason my brain just refuses to give that priority.

What cues do others use? I'd love to see variations of this image with different cues present/absent/reversed.

ETA: I'm no longer sure that the reflection is wrong. But something about the image is off.

Edit: typos

Comment by aleksiL on On Enjoying Disagreeable Company · 2010-05-28T06:21:40.672Z · LW · GW

Sounds like your definition of "well-socialized" is closer to "well-adjusted" than RobinZ's.

As I understand them, skill in navigating social situations, epistemic rationality and psychological well-being are all separate features. They do seem to correlate, but the causal influences are not obvious.

ETA: Depends a lot on the standard you use, too. RobinZ is probably correct if you look at the upper quartile but less so for the 99th percentile.

Comment by aleksiL on Aspergers Poll Results: LW is nerdier than the Math Olympiad? · 2010-05-17T04:19:19.182Z · LW · GW

In short, I used to believe that social skills are a talent you're born with, not a skill to be developed. Luckily just being around people and paying attention improved my eye for social cues enough that I eventually noticed.

This relates to Carol S. Dweck's book Mindset, which I've mentioned before. I'm thinking of writing more about it sometime soon.

Comment by aleksiL on Aspergers Poll Results: LW is nerdier than the Math Olympiad? · 2010-05-16T17:53:59.290Z · LW · GW

It becomes a bit less surprising when you consider that I attribute my low score mostly to relatively recent changes in my social skills and preferences, and the criteria I checked were the ones about all-absorbing narrow interests and imposition of routines and interests. As a matter of fact, the changes I mention came about as a result of months of near-obsessive study and accidental practice. (I did not grasp the importance of practice at the time but that's a subject for another comment.)

Maybe I wasn't that far toward the autism end of the spectrum to begin with but it does make me wonder just how much others could improve their social skills given the right circumstances.

Comment by aleksiL on Aspergers Poll Results: LW is nerdier than the Math Olympiad? · 2010-05-16T16:26:55.702Z · LW · GW

I originally looked at the poll but didn't answer until now.

I fit two of the Gillberg diagnostic criteria and scored 19 on the Wired test. I would've definitely scored much higher a few years back, when I suspected I might have Asperger's. My social skills have developed a lot since then and I'm now more inclined to attribute my social deficiencies to lack of practice than anything else.

For what it's worth, I do seem to follow a pattern of intense pursuit of relatively few interests that change over time.

Comment by aleksiL on Do you have High-Functioning Asperger's Syndrome? · 2010-05-13T17:39:55.712Z · LW · GW

(Note: This post is speculation based on memory and introspection and possibly completely mistaken. Any help in clarifying my thinking and gathering evidence on this would be greatly appreciated.)

I suspect that I'm also affected by this and just haven't consciously noticed. It feels like I'm a lot more comfortable with analytical modes than with more intuitive/social ones, and I'm probably spending more time inducing them than I should.

I'd like to be more aware of my mental modes and find more effective ways of influencing them. Any suggestions?

ETA: Now that I think about it I get a weird feeling. Certain types of concentration seem to act a lot like emotions. The duration seems right, there seems to be a certain mutual exclusivity: strong emotions make it harder to concentrate and intense concentration makes it harder to feel those emotions. Are mental modes emotions?

Comment by aleksiL on Open Thread: May 2010 · 2010-05-08T07:10:27.473Z · LW · GW

Blow up the paradox-causing FTL? Sounds like that could be weaponized.

I was about to go into detail about the implications of FTL and relativity but realized that my understanding is way too vague for that. Instead, I googled up a "Relativity and FTL travel" FAQ.

I love the idea of strategically manipulating the FTL simultaneity landscape for offensive/defensive purposes. How are you planning to decide what breaks and how severely if a paradox is detected?

Comment by aleksiL on Open Thread: April 2010 · 2010-04-11T06:34:38.900Z · LW · GW

Do you mean these meta-analyses?

Comment by aleksiL on Lights, Camera, Action! · 2010-03-23T10:12:47.788Z · LW · GW

Interesting. I thought that my thinking would be mostly words, like inner monologue or talking to myself. Now that I pay attention it is more like images, emotions, concepts constantly flashing through my head, most gone before I even notice them.

Introspectively it seems that my thinking has changed and I just haven't noticed until now. Or that my conscious mind has finally learned to shut up and pay attention.

Comment by aleksiL on The fallacy of work-life compartmentalization · 2010-03-07T08:43:54.648Z · LW · GW

aversion to discomfort

This made me think of what pjeby calls the pain brain. In short, our actions can be motivated by either getting closer to what we want (pull) or away from what we try to avoid (push). Generally, push overrides pull, so you may not even notice what you want if you're too busy avoiding what you don't.

It may be useful to explore your goals and motivations with relaxed mental inquiry and critically examine any fears or worries that may come up.

Comment by aleksiL on Open Thread: March 2010 · 2010-03-02T11:53:03.207Z · LW · GW

I recently finished the book Mindset by Carol S. Dweck. I'm currently rather wary of my own feelings about the book; I feel like a man with a hammer in a happy death spiral. I'd like to hear others' reactions.

The book seems to explain a lot about people's attitudes and reactions to certain situations, with what seems like unusually strong experimental support to boot. I recommend it to anyone (and I mean anyone - I've actually ordered extra copies for friends and family) but teachers, parents and people with interest in self-improvement will likely benefit the most.

Also, I'd appreciate pointers on how to find out if the book is being translated to Finnish.

Edit: Fixed markdown and grammar.

Comment by aleksiL on The AI in a box boxes you · 2010-02-02T13:35:33.603Z · LW · GW

How do I know I'm not simulated by the AI to determine my reactions to different escape attempts? How much computing power does it have? Do I have access to its internals?

The situation seems somewhat underspecified to give a definite answer, but given the stakes I'd err on the side of terminating the AI with extreme prejudice. Bonus points if I can figure out a safe way to retain information on its goals so I can make sure the future contains as little utility for it as feasible.

The utility-minimizing part may be an overreaction but it does give me an idea: Maybe we should also cooperate with an unfriendly AI to such an extent that it's better for it to negotiate instead of escaping and taking over the universe.

Comment by aleksiL on Strong moral realism, meta-ethics and pseudo-questions. · 2010-02-01T16:43:14.380Z · LW · GW

As I understand Eliezer's position, when babyeater-humans say "right", they actually mean babyeating. They'd need a word like "babysaving" to refer to what's right.

Morality is what we call the output of a particular algorithm instantiated in human brains. If we instantiated a different algorithm, we'd have a word for its output instead.

I think Eliezer sees translating babyeater word for babyeating as "right" as an error similar to translating their word for babyeaters as "human".