Comment by Sebastian_Hagen2 on True Ending: Sacrificial Fire (7/8) · 2009-02-06T13:34:00.000Z · LW · GW

It's interesting to note that those oh-so-advanced humans prefer saving children over saving adults, even though there don't seem to be any limits to natural lifespan anymore.
At our current tech-level this kind of thing can make sense because adults have less lifespan left; but without limits on natural lifespan (or neural degradation because of advanced age) older humans have, on average, had more resources invested into their development - and as such should on average be more knowledgeable, more productive and more interesting people.
It appears to me that the decision to save human children in favor of adults is a result of executing obsolete adaptations as opposed to shutting up and multiplying. I'm surprised nobody seems to have mentioned this yet - am I missing something obvious?

Comment by Sebastian_Hagen2 on Three Worlds Collide (0/8) · 2009-01-30T15:15:52.000Z · LW · GW

List of allusions I managed to catch (part 1):
Alderson starlines - Alderson Drive
Giant Science Vessel - GSV - General Systems Vehicle
Lord Programmer - allusion to the archeologist programmers in Vernor Vinge's A Fire Upon the Deep?
Greater Archive - allusion to Orion's Arm's Greater Archives?

Comment by Sebastian_Hagen2 on BHTV: Yudkowsky / Wilkinson · 2009-01-26T14:23:07.000Z · LW · GW

Will Wilkinson said at 50:48:

People will shout at you in Germany if you jaywalk, I'm told.
I can't say for sure this doesn't happen anywhere in Germany, but it's definitely not a universal in German society. Where I live, jaywalking is pretty common and nobody shouts at people for doing it unless they force a driver to brake or swerve by doing so.

Comment by Sebastian_Hagen2 on Investing for the Long Slump · 2009-01-25T12:30:57.000Z · LW · GW

I'd be relieved if the reason were that you ascribed probability significantly greater than 1% to a Long Slump, but I suspect it's because you worry humanity will run out of time in many of the other scenarios before FAI work is finished - reducing you to looking at the Black Swan possibilities within which the world might just be saved.
If this is indeed the reason for Eliezer considering this specific outcome, that would suggest that deliberately depressing the economy is a valid Existential Risk-prevention tactic.

Comment by Sebastian_Hagen2 on Failed Utopia #4-2 · 2009-01-21T22:27:48.000Z · LW · GW

This use of the word 'wants' struck me as a distinction Eliezer would make, rather than this character.
Similarly, it's notable that the AI seems to use exactly the same interpretation of the word lie as Eliezer Yudkowsky: that's why it doesn't self-describe as an "Artificial Intelligence" until the verthandi uses the phrase.

Also, at the risk of being redundant: Great story.

Comment by Sebastian_Hagen2 on The First World Takeover · 2008-11-19T17:04:19.000Z · LW · GW

To add to Abigail's point: Is there significant evidence that the critically low term in the Drake Equation isn't f_i (i.e. P(intelligence|life))? If natural selection on earth hadn't happened to produce an intelligent species, I would assign a rather low probability of any locally evolved life surviving the local sun going nova. I don't see any reasonable way of even assigning a lower bound to f_i.
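To make the point concrete, here's a minimal sketch of the Drake Equation; every numerical value below is a hypothetical placeholder, chosen only to show how an unconstrained f_i swamps the rest of the estimate:

```python
# All plug-in values are hypothetical placeholders, not measurements.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

base = dict(R_star=10, f_p=0.5, n_e=2, f_l=0.5, f_c=0.5, L=1000)

# With no defensible lower bound on f_i, the estimate swings across
# arbitrarily many orders of magnitude:
for f_i in (1e-1, 1e-6, 1e-12):
    print(f_i, drake(f_i=f_i, **base))
```

Every other term at least has some observational handle; f_i alone can drag the product anywhere.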

Comment by Sebastian_Hagen2 on Which Parts Are "Me"? · 2008-10-22T20:21:46.000Z · LW · GW

The of helping someone, ...
Missing word?

Comment by Sebastian_Hagen2 on Shut up and do the impossible! · 2008-10-09T19:54:00.000Z · LW · GW

Okay, so no one gets their driver's license until they've built their own Friendly AI, without help or instruction manuals. Seems to me like a reasonable test of adolescence.
Does this assume that they would be protected from any consequences of messing the Friendliness up and building a UFAI by accident? I don't see a good solution to this. If people are protected from being eaten by their creations, they can slog through the problem using a trial-and-error approach through however many iterations it takes. If they aren't, this is going to be one deadly test.

Comment by Sebastian_Hagen2 on The Level Above Mine · 2008-09-26T13:14:54.000Z · LW · GW

Up to now there never seemed to be a reason to say this, but now that there is:

Eliezer Yudkowsky, afaict you're the most intelligent person I know. I don't know John Conway.

Comment by Sebastian_Hagen2 on A Prodigy of Refutation · 2008-09-18T12:36:24.000Z · LW · GW

It's easier to say where someone else's argument is wrong, then to get the fact of the matter right;
Did you mean s/then/than/?

You posted your raw email address needlessly. Yum.
Posting it here didn't really change anything.

How can you tell if someone is an idiot not worth refuting, or if they're a genius who's so far ahead of you to sound crazy to you? Could we think an AI had gone mad, and reboot it, when it is really genius.
You can tell by the effect they have on their environment. If it's stupid, but it works, it's not stupid. This can be hard to do precisely if you don't know the entity's precise goals, but in general if they manage to do interesting things you couldn't (e.g. making large amounts of money, writing highly useful software, obtaining a cult of followers or converting planets into computronium), they're probably doing something right.

In the case of you considering taking action against the entity (as in your example of deleting the AI), this is partly self-regulating: A sufficiently intelligent entity should see such an attack coming and have effective countermeasures in place (for instance, by communicating better to you so you don't conclude it has gone mad). If you attack it and succeed, that by itself places limits on how intelligent the target really was. Note that this part doesn't work if both sides are unmodified humans, because the relative differences in intelligence aren't large enough.

Comment by Sebastian_Hagen2 on The Truly Iterated Prisoner's Dilemma · 2008-09-04T19:35:14.000Z · LW · GW

Do you really truly think that the rational thing for both parties to do, is steadily defect against each other for the next 100 rounds?
No. That seems obviously wrong, even if I can't figure out where the error lies.
We only get a reversion to the (D,D) case if we know with a high degree of confidence that the other party doesn't use naive Tit for Tat, and they know that we don't. That seems like an iffy assumption to me. If we knew the exact algorithm the other side uses, it would be trivial to find a winning strategy; so how do we know it isn't naive Tit for Tat? If there's a sufficiently high chance the other side is using naive Tit for Tat, it might well be optimal to repeat their choices until the second-to-last round.
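The arithmetic is easy to check in a quick simulation; the payoff matrix below is the standard textbook one (mutual cooperation 3, mutual defection 1, lone defector 5, sucker 0), which is an assumption of this sketch rather than anything specified in the post:

```python
# Standard (assumed) payoff matrix: keys are (my move, their move).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    """Run an iterated PD; each strategy sees the round index and the
    opponent's move history, and returns 'C' or 'D'."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for r in range(rounds):
        a, b = strat_a(r, hist_b), strat_b(r, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda r, opp: 'C' if not opp else opp[-1]
always_defect = lambda r, opp: 'D'
late_defect = lambda r, opp: 'D' if r == 99 else 'C'  # defect only on the last of 100 rounds

print(play(always_defect, tit_for_tat))  # (104, 99): mutual defection from round 2 on
print(play(late_defect, tit_for_tat))    # (302, 297): sustained cooperation scores far higher
```

Against a naive Tit for Tat opponent, constant defection throws away nearly two hundred points relative to cooperating until the end.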

Comment by Sebastian_Hagen2 on The True Prisoner's Dilemma · 2008-09-04T00:34:52.000Z · LW · GW

Definitely defect. Cooperation only makes sense in the iterated version of the PD. This isn't the iterated case, and there's no prior communication, hence no chance to negotiate for mutual cooperation (though even if there was, meaningful negotiation may well be impossible depending on specific details of the situation). Superrationality be damned, humanity's choice doesn't have any causal influence on the paperclip maximizer's choice. Defection is the right move.

Comment by Sebastian_Hagen2 on Unnatural Categories · 2008-08-24T11:35:28.000Z · LW · GW

Nitpicking your poison category:

What is a poison? ... Carrots, water, and oxygen are "not poison". ... (... You're really asking about fatality from metabolic disruption, after administering doses small enough to avoid mechanical damage and blockage, at room temperature, at low velocity.)
If I understand that last definition correctly, it should classify water as a poison.

Comment by Sebastian_Hagen2 on The Cartoon Guide to Löb's Theorem · 2008-08-18T15:40:40.000Z · LW · GW

Doug S.:

What character is ◻?

Eliezer Yudkowsky:

Larry, interpret the smiley face as saying:

PA + (◻C -> C) |-

I'm still struggling to completely understand this. Are you also changing the meaning of ◻ from 'derivable from PA' to 'derivable from PA + (◻C -> C)'? If so, are you additionally changing L to use provability in PA + (◻C -> C) instead of provability in PA?

Comment by Sebastian_Hagen2 on Moral Error and Moral Disagreement · 2008-08-11T15:06:36.000Z · LW · GW

Quick correction: s/abstract rational reasoning/abstract moral reasoning/

Comment by Sebastian_Hagen2 on Moral Error and Moral Disagreement · 2008-08-11T15:04:03.000Z · LW · GW


But my moral code does include such statements as "you have no fundamental obligation to help other people." I help people because I like to.
While I consider myself an altruist in principle (I have serious akrasia problems in practice), I do agree with this statement. Altruists don't have any obligation to help people, it just often makes sense for them to do so; sometimes it doesn't, and then the proper thing for them is not to do it.


In the modern world, people have to make moral choices using their general intelligence, because there aren't enough "yuck" and "yum" factors around to give guidance on every question. As such, we shouldn't expect much more moral agreement from humans than from rational (or approximately rational) AIs.
There might not be enough "yuck" and "yum" factors around to offer direct guidance on every question, but they're still the basis for abstract rational reasoning. Do you think "paperclip optimizer"-type AIs are impossible? If so, why? There's nothing incoherent about a "maximize the number of paperclips over time" optimization criterion; if anything, it's a lot simpler than those in use by humans.

Eliezer Yudkowsky:

If I have a value judgment that would not be interpersonally compelling to a supermajority of humankind even if they were fully informed, then it is proper for me to personally fight for and advocate that value judgment, but not proper for me to preemptively build an AI that enforces that value judgment upon the rest of humanity.
I don't understand this at all. How is building a superintelligent AI not just a (highly effective, if you do it right) special method of personally fighting for your value judgement? Are you saying it's ok to fight for it, as long as you don't do it too effectively?

Comment by Sebastian_Hagen2 on Moral Error and Moral Disagreement · 2008-08-11T02:24:59.000Z · LW · GW

I think my highest goal in life is to make myself happy. Because I'm not a sociopath making myself happy tends to involve having friends and making them happy. But the ultimate goal is me.
If you had a chance to take a pill which would cause you to stop caring about your friends by permanently maxing out that part of your happiness function regardless of whether you had any friends, would you take it?
Do non-psychopaths that, given the chance, would self-modify into psychopaths fall into the same moral reference frame as stable psychopaths?

Comment by Sebastian_Hagen2 on The Meaning of Right · 2008-07-30T20:17:00.000Z · LW · GW

After all, if the humans have something worth treating as spoils, then the humans are productive and so might be even more useful alive.
Humans depend on matter to survive, and increase entropy by doing so. Matter can be used for storage and computronium, negentropy for fueling computation. Both are limited and valuable (assuming physics doesn't allow for infinite-resource cheats) resources.

I read stuff like this and immediately my mind thinks, "comparative advantage." The point is that it can be (and probably is) worthwhile for Bob and Bill to trade with each other even if Bob is better at absolutely everything than Bill.
Comparative advantage doesn't matter for powerful AIs at massively different power levels. It exists between some groups of humans because humans don't differ in intelligence all that much when you consider all of mind design space, and because humans don't have the means to easily build subservient-to-them minds which are equal in power to them.
What about a situation where Bob can defeat Bill very quickly, take all its resources, and use them to implement a totally-subservient-to-Bob mind which is by itself better at everything Bob cares about than Bill was? Resolving the conflict takes some resources, but leaving Bill to use them a) inefficiently and b) for not-exactly-Bob's goals might waste (from Bob's perspective) even more of them in the long run. Also, eliminating Bill means Bob has to worry about one less potential threat that it would otherwise need to keep in check indefinitely.

The FAI may be an unsolvable problem, if by FAI we mean an AI into which certain limits are baked.
You don't want to build an AI with certain goals and then add on hard-coded rules that prevent it from fulfilling those goals with maximum efficiency. If you put your own mind against that of the AI, a sufficiently powerful AI will always win that contest. The basic idea behind FAI is to build an AI that genuinely wants good things to happen; you can't control it after it takes off, so you put in your conception of "good" (or an algorithm to compute it) into the original design, and define the AI's terminal values based on that. Doing this right is an extremely tough technical problem, but why do you believe it may be impossible?

Comment by Sebastian_Hagen2 on The Meaning of Right · 2008-07-30T17:05:00.000Z · LW · GW

Constant [sorry for getting the attribution wrong in my previous reply] wrote:

We do not know very well how the human mind does anything at all. But that the the human mind comes to have preferences that it did not have initially, cannot be doubted.
I do not know whether those changes in opinion indicate changes in terminal values, but it doesn't really matter for the purposes of this discussion, since humans aren't (capital-F) Friendly. You definitely don't want an FAI to unpredictably change its terminal values. Figuring out how to reliably prevent this kind of thing from happening, even in a strongly self-modifying mind (which humans aren't), is one of the sub-problems of the FAI problem.
To create a society of AIs, hoping they'll prevent each other from doing too much damage, isn't a viable solution to the FAI problem, even in the rudimentary "doesn't kill all humans" sense. There's various problems with the idea, among them:

  1. Any two AIs are likely to have a much vaster difference in effective intelligence than you could ever find between two humans (for one thing, their hardware might be much more different than any two working human brains). This likelihood increases further if (at least) some subset of them is capable of strong self-improvement. With enough difference in power, cooperation becomes a losing strategy for the more powerful party.
  2. The AIs might agree that they'd all be better off if they took the matter currently in use by humans for themselves, dividing the spoils among each other.
Comment by Sebastian_Hagen2 on The Meaning of Right · 2008-07-30T12:32:00.000Z · LW · GW

TGGP wrote:

We've been told that a General AI will have power beyond any despot known to history.
Unknown replied:
If that will be then we are doomed. Power corrupts. In theory an AI, not being human, might resist the corruption, but I wouldn't bet on that. I do not think it is a mere peculiarity of humanity that we are vulnerable to corruption.
A tendency to become corrupt when placed into positions of power is a feature of some minds. Evolutionary psychology explains nicely why humans have evolved this tendency. It also allows you to predict that other intelligent organisms, evolved in a sufficiently similar way, would be likely to have a similar feature.
Humans having this kind of tendency is a predictable result of what their design was optimized to do, and as such them having it doesn't imply much for minds from a completely different part of mind design space.
What makes you think a human-designed AI would be vulnerable to this kind of corruption?

Comment by Sebastian_Hagen2 on The Meaning of Right · 2008-07-29T15:20:20.000Z · LW · GW

Thank you for this post. "should" being a label for results of the human planning algorithm in backward-chaining mode the same way that "could" is a label for results of the forward-chaining mode explains a lot. It's obvious in retrospect (and unfortunately, only in retrospect) to me that the human brain would do both kinds of search in parallel; in big search spaces, the computational advantages are too big not to do it.

I found two minor syntax errors in the post: "Could make sense to ..." - did you mean "Could it make sense to ..."? "(something that has a charge of should-ness" - that parenthesis is never closed.

Unknown wrote:

As I've stated before, we are all morally obliged to prevent Eliezer from programming an AI.
Speak for yourself. I don't think EliezerYudkowsky::Right is quite the same function as SebastianHagen::Right, but I don't see a real chance of getting an AI that optimizes only for SebastianHagen::Right accepted as sysop. I'd rather settle for an acceptable compromise in what values our successor-civilization will be built on than see our civilization being stomped into dust by an entirely alien RPOP, or destroyed by another kind of existential catastrophe.

Comment by Sebastian_Hagen2 on Whither Moral Progress? · 2008-07-16T11:55:10.000Z · LW · GW

It's harder to answer Subhan's challenge - to show directionality, rather than a random walk, on the meta-level.
Even if one is ignorant of what humans mean when they talk about morality, or of what aspects of the environment influence it, it should be possible to determine empirically whether the development of morality over time follows a random walk: a random walk would, on average, cause more repeated reversals of a given value judgement than a directional process.
For performing this test, one would take a number of moral judgements that have changed in the past, and compare their development from a particular point in human history (the earlier, the better; unreversed recent changes may have been a result of the random walk only becoming sufficiently extreme in the recent past) to now, counting how often those judgements flipped during historical development. I'm not quite sure about the conditional probabilities, but a true random walk should result in more such flips than a directional (even a noisy directional) process.
Does anyone have suggestions for moral values that changed early in human development?

Comment by Sebastian_Hagen2 on Is Morality Preference? · 2008-07-05T13:14:40.000Z · LW · GW

Regarding the first question,

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"?
I think the meaning of "it is (morally) right" may be easiest to explain through game theory. Humans in the EEA had plenty of chances for positive-sum interactions, but consistently helping other people runs the risk of being exploited by defection-prone agents. Accordingly, humans may have evolved a set of adaptions to exploit non-zero sumness between cooperating agents, but also avoid cooperating with defectors. Treating "X is (morally) right" as a warning of the form "If you don't do X, I will classify that as defection" explains a lot. Assume a person A has just (honestly) warned a person B that "X is the right thing to do":

  1. If B continues not to do X, A will likely be indignant; indignation means A will be less likely to help B in the future (which makes sense according to game theory), and might also recommend the same to other members of the tribe.
  2. B might accept the claim about rightness; this will make it more likely for him to do the "right" thing. Since, in the EEA, being ostracized by the tribe would result in a significant hit to fitness, it's likely for there to be an adaptation predisposing people to evaluate claims about rightness in this manner.
  3. B's short-term desires might override his sense of "moral rightness", leading to him doing the (in his own conception) "wrong" thing. While B can choose to do the wrong thing, he cannot change which action is right by a simple individual decision, since the whole point of evaluating rightness at all is to evaluate it the same way as other people you interact with.

According to this view, moral duties function as rules which help members of a society to identify defectors (by defectors violating them).

Comment by Sebastian_Hagen2 on The Bedrock of Fairness · 2008-07-03T15:08:04.000Z · LW · GW

This post reminds me a lot of DialogueOnFriendliness.

There's at least one more trivial mistake in this post:

Is their nothing more to the universe than their conflict?

Constant wrote:

Arguably the difficulty the three have in coming to a conclusion is related to the fact that none of the three has anything close to a legitimate claim on the pie.
If you modify the scenario by postulating that the pie is accompanied by a note reading "I hereby leave this pie as a gift to whomever finds it. Enjoy. -- Flying Pie-Baking Monster", how does that make the problem any easier?

Comment by Sebastian_Hagen2 on The Moral Void · 2008-07-01T18:45:00.000Z · LW · GW

Hal Finney:
Why doesn't the AI do it verself? Even if it's boxed (and why would it be, if I'm convinced it's an FAI?), at the intelligence it'd need to make the stated prediction with any degree of confidence, I'd expect it to be able to take over my mind quickly. If what it claims is correct, it shouldn't have any qualms about doing that (taking over one human's body for a few minutes is a small price to pay for the utility involved).
If this happened in practice I'd be confused as heck, and the alleged FAI being honest about its intentions would be pretty far down on my list of hypotheses about what's going on. I'd likely stare into space dumbfounded until I found some halfway-likely explanation, or the AI decided to take over my mind after all.

Comment by Sebastian_Hagen2 on What Would You Do Without Morality? · 2008-06-29T21:25:00.000Z · LW · GW

Are there no vegetarians on OvBias?
I'm a vegetarian, though not because I particularly care about the suffering of meat animals.

Sebastian Hagen, people change. Of course you may refuse to accept it, but the current you will be dead in a second, and a different you born.
Of course people change; that's why I talked about "future selves" - the interesting aspect isn't that they exist in the future, it's that they're not exactly the same person as I am now. However, there's still a lot of similarity between my present self and my one-second-in-the-future self, and they have effectively the same optimization target. Moreover, these changes are largely non-random and non-degenerative: a lot of them are a part of my mind improving its model of the universe and getting more effective at interacting with it.
I don't think it is appropriate to term such small changes "death". If an anvil drops on my head, crushing my brain to goo, I immediately lose more optimization power than I do in a decade of living without fatal accidents. The naive view of personal identity isn't completely accurate, but the reason that it works pretty well in practice is that (in our current society) humans don't change particularly quickly, except for when they suffer heavy injuries.

The anvil-dropped-on-head-scenario is what I envisioned in my last post: something annihilating or massively corrupting my mind, destroying the part that's responsible for evaluating the desirability of hypothetical states of the universe.

Comment by Sebastian_Hagen2 on What Would You Do Without Morality? · 2008-06-29T17:32:03.000Z · LW · GW

Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.
I'm a physical system optimizing my environment in certain ways. I prefer some hypothetical futures to others; that's a result of my physical structure. I don't really know the algorithm I use for assigning utility, but that's because my design is pretty messed up. Nevertheless, there is an algorithm, and it's what I talk about when I use the words "right" and "wrong".
Moral rightness is fundamentally a two-place function: it takes both an optimization process and a hypothetical future as arguments. In practice, people frequently use the curried form, with themselves as the implied first argument.
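A minimal Python sketch of that curried two-place view; the preference weights and feature names are entirely hypothetical:

```python
from functools import partial

def rightness(optimizer_preferences, future):
    """Two-place form: score a hypothetical future against a given
    optimization process's preferences (toy additive model)."""
    return sum(weight for feature, weight in optimizer_preferences.items()
               if feature in future)

my_preferences = {'friends_flourish': 2.0, 'knowledge_grows': 1.0}

# The curried one-place form speakers actually use, with themselves
# fixed as the implicit first argument:
my_right = partial(rightness, my_preferences)

print(my_right({'friends_flourish'}))                     # 2.0
print(my_right({'friends_flourish', 'knowledge_grows'}))  # 3.0
```

Different speakers bind different first arguments, which is why their one-place "right" functions can disagree without either being confused about the two-place function.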

Suppose I proved that all utilities equaled zero.
That result is obviously false for my present self. If the proof pertains to that entity, it's either incorrect or the formal system it is phrased in is inappropriate for modeling this aspect of reality.
It's also false for all of my possible future selves. I refuse to recognize something which doesn't have preferences over hypothetical futures as a future-self of me; whatever it is, it's lost too many important functions for that.

Comment by Sebastian_Hagen2 on Ghosts in the Machine · 2008-06-18T15:59:23.000Z · LW · GW

Here's my vision of this, as a short scene from a movie. Off my blog: The Future of AI
To me, the most obvious reading of that conversation is that a significant part of what the AI says is a deliberate lie, and Anna is about to be dumped into a fun-and-educational adventure game at the end. Did you intend that interpretation?

Comment by Sebastian_Hagen2 on Timeless Control · 2008-06-08T10:52:55.000Z · LW · GW


If you think as though the whole goal is to save on computing power, and that the brain is actually fairly good at this (it has to be), then you won't go far astray.
Ah, thanks! I hadn't considered why you would think about isolated subsystems in practice; knowing about the motivation helps a lot in filling in the implementation details.

Comment by Sebastian_Hagen2 on Timeless Control · 2008-06-07T10:32:05.000Z · LW · GW

I'm trying to see exactly where your assertion that humans actually have choice comes in.
"choice" is a useful high-level abstraction of certain phenomena. It's a lossy abstraction, and if you had infinite amounts of memory and computing power, you would have no need for it, at least when reasoning about other entities. It exists, in exactly the same way in which books (the concept of a book is also a high-level abstraction) exist.
If that sounded wrong or like nonsense to you, please taboo "choice" and explain what exactly your question is.

I also have a question of my own, regarding the rock-hill-system:

If you isolate a subsystem of reality, like a rock rolling down hill, then you can mathematically define the future-in-isolation of that subsystem; you can take the subsystem in isolation, and compute what would happen to it if you did not act on it. In this case, what would happen is that the rock would reach the bottom of the hill.
How does this isolation work? Do you assume that the forces acting on the system from outside stay constant (in some undefined fashion), without explicitly modeling the outside? If I assume no further interactions with the outside, I don't expect to see the rock rolling down the hill, since there's no planet below to gravitationally attract it. Or was the planet supposed to be part of this system?

Comment by Sebastian_Hagen2 on Timeless Identity · 2008-06-04T07:28:00.000Z · LW · GW

What if cryonics were phrased as the ability to create an identical twin from your brain at some point in the future, rather than 'you' waking up. If all versions of people are the same, this distinction should be immaterial. But do you think it would have the same appeal to people?
I don't know, and unless you're trying to market it, I don't think it matters. People make silly judgements on many subjects, blindly copying the majority in this society isn't particularly good advice.

Each twin might feel strong regard for the other, but there's no way they would actually be completely indifferent between pain for themselves and pain for their twin.
Any reaction of this kind is either irrational, based on divergence which has already taken place, or based on value systems very different from my own. In real life, you'd probably get a mix of the first two, and possibly also the last, from most people.

If another 'me' were created on mars and then got a bullet in the head, this would be sad, but no more so than any other death. It wouldn't feel like a life-extending boon when he was created, nor a horrible blow to my immortality when he was destroyed.
For me, this would be a quantitative judgement: it depends on how much both instances have changed since the split. If the time lived before the split is significantly longer than that after, I would consider the other instance a near-backup, and judge the relevance of its destruction accordingly. Aside from the aspect of valuing the other person as a human like any other that also happens to share most of your values, it's effectively like losing the only (and somewhat out-of-date) backup of a very important file: No terrible loss if you can keep the original intact until you can make a new backup, but an increased danger in the meantime.

If you truly believe that 'the same atoms means its 'you' in every sense', suppose I'm going to scan you and create an identical copy of you on mars. Would you immediately transfer half your life savings to a bank account only accessible from mars? What if I did this a hundred times?
Maybe, maybe not, depends on the exact strategy I'd mapped out beforehand for what each of the copies will do after the split. If I didn't have enough foresight to do that beforehand, all of my instances would have to agree on the strategy (including allocation of initial resources) over IRC or wiki or something, which could get messy with a hundred of them - so please, if you ever do this, give me a week of advance warning. Splitting it up evenly might be ok in the case of two copies (assuming they both have comparable expected financial load and income in the near term), but would fail horribly for a hundred; there just wouldn't be enough money left for any of them to matter at all (I'm a poor university student, currently; I don't really have "life savings" in transferrable format).

Comment by Sebastian_Hagen2 on Timeless Identity · 2008-06-03T17:57:40.000Z · LW · GW

Is the 'you' on mars the same as 'you' on Earth?
There's one of you on earth, and one on mars. They start out (by assumption) the same, but will presumably increasingly diverge due to different input from the environment. What else is there to know? What does the word 'same' mean for you?

And what exactly does that mean if the 'you' on earth doesn't get to experience the other one's sensations first hand? Why should I care what happens to him/me?
That's between your world model and your values. If this happened to me, I'd care because the other instance of myself happens to have similar values to the instance making the judgement, and will therefore try to steer the future into states which we will both prefer.

Comment by Sebastian_Hagen2 on My Childhood Role Model · 2008-05-23T11:20:10.000Z · LW · GW

But I don't buy the idea of intelligence as a scalar value.
Do you have a better suggestion for specifying how effective a system is at manipulating its environment into specific future states? Unintelligent systems may work much better in specific environments than others, but any really intelligent system should be able to adapt to a wide range of environments. Which important aspect of intelligence do you think can't be expressed in a scalar rating?

Comment by Sebastian_Hagen2 on The Dilemma: Science or Bayes? · 2008-05-13T20:05:00.000Z · LW · GW

They only depend to within a constant factor. That's not the problem; the REAL problem is that K-complexity is uncomputable, meaning that you cannot in any way prove that the program you're proposing is, or is NOT, the shortest possible program to express the law.
I disagree; I think the underspecification is a more serious issue than the uncomputability. There are constant factors that outweigh, by a massive margin, all evidence ever collected by our species. Unless there's a way for us to get our hands on an infinite amount of CPU time, there are constant factors that outweigh, by a massive margin, all evidence we will ever have a chance to collect. For any two strings, you can assign a lower complexity to either one by choosing the description language appropriately. Some way to make a good enough (not necessarily optimal) judgement on the language to use is needed for the complexity metric to make any sense.

The uncomputability is unfortunate, but hardly fatal. You can just spend some finite effort trying to find the shortest program that produces each string, using the best heuristics available for this job, and use that as an approximation and upper bound. If you wanted to turn this into a social process, you could reward people for discovering shorter programs than the shortest-currently-known for existing theories (proving that they were simpler than known up to that point), as well as for collecting new evidence to discriminate between them.
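The "computable upper bound" idea can be sketched concretely. The following is only an illustration, not anything proposed in the thread: an off-the-shelf compressor gives a crude, always-computable upper bound on a string's Kolmogorov complexity (up to an additive constant for the decompressor), and finding a shorter encoding tightens the bound.

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Length of a zlib-compressed encoding of `data`: a crude but
    computable upper bound (up to an additive constant for the
    decompressor) on its Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

regular = b"ab" * 1000          # highly patterned: a short program suffices
random_ish = os.urandom(2000)   # incompressible with overwhelming probability

# The bound reflects the structure: the patterned string compresses far better.
print(complexity_upper_bound(regular), complexity_upper_bound(random_ish))
```

Anyone who later finds a shorter encoding simply improves the bound, which is exactly the social reward scheme described above.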

Comment by Sebastian_Hagen2 on The Dilemma: Science or Bayes? · 2008-05-13T11:41:39.000Z · LW · GW

But when I say "macroscopic decoherence is simpler than collapse" it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code.
Computer programs in which language? The Kolmogorov complexity of a given string depends on the choice of description language (or programming language, or UTM) used. I'm not familiar with MML, but considering that it's apparently strongly related to Kolmogorov complexity, I'd expect its simplicity ratings to be similarly dependent on parameters for which there is no obvious optimal choice.

If one uses these metrics to judge the simplicity of hypotheses, any probability judgements based on them will ultimately depend strongly on this parameter choice. Given that, what's the best way to choose these parameters? The only two obvious ways I see are to either 1) make an intuitive judgement, which means the resulting complexity ratings might not turn out any more reliable than if you intuitively judged the simplicity of each individual hypothesis, or 2) figure out which of the candidate choices can be implemented more cheaply in this universe; i.e. try to build the smallest/least-energy-using computer for each reasonable-seeming language, and see which one turns out cheapest. Since resource use at runtime doesn't matter for Kolmogorov complexity, it would probably be appropriate to consider how well the designs would work if scaled up to include immense amounts of working memory, even if they're never actually built at that scale.

Neither of those is particularly elegant. I think 2) might work out, but unfortunately is quite sensitive to parameter choice, itself.

Comment by Sebastian_Hagen2 on The Failures of Eld Science · 2008-05-12T11:51:06.000Z · LW · GW

"A short time?" Jeffreyssai said incredulously. "How many minutes in thirty days? Hiriwa?"

"28800, sensei," she answered. "If you assume sixteen-hour waking periods and daily sleep, then 19200 minutes."

I would have expected the answers to be 43200 (30 d × 24 h/d × 60 min/h) and 28800 (30 d × 16 h/d × 60 min/h), respectively. Do these people use another system for specifying time? It works out correctly if their hours have 40 minutes each.
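The arithmetic behind the objection, including the 40-minute-hour conjecture (which is the comment's own guess, not anything stated in the story), checks out directly:

```python
# Expected values under ordinary 60-minute hours:
assert 30 * 24 * 60 == 43200   # total minutes in 30 days
assert 30 * 16 * 60 == 28800   # waking minutes at 16 h/day

# Jeffreyssai's and Hiriwa's figures match if an "hour" has only 40 minutes:
assert 30 * 24 * 40 == 28800
assert 30 * 16 * 40 == 19200
print("all figures consistent with 40-minute hours")
```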

Aside from that, this is an extremely insightful and quote-worthy post. I have^W^W My idiotic past-selves had a bad tendency to cognitively slow down in the absence of interesting and time-critical problems to solve. Accordingly, I find the hints about how to debug those tendencies very interesting. I find it rather quaint that those people still spend a significant part of their time sleeping, however.

Comment by Sebastian_Hagen2 on If Many-Worlds Had Come First · 2008-05-10T14:40:34.000Z · LW · GW

I hope the following isn't completely off-topic:

... if I'd been born into that time, instead of this one...
What exactly does a hypothetical scenario where "person X was born Y years earlier" even look like? I could see a somewhat plausible interpretation of that description in periods of extremely slow scientific and technological progress, but the twentieth century doesn't qualify. In the 1920s: 1) The concept of a turing machine hadn't been formulated yet. 2) There were no electronic computers. 3) ARPANET wasn't even an idea yet, and wouldn't be for decades. 4) Television was a novelty, years away from being used by a significant number of people. 5) WW1 was recent history.

Two persons with the same DNA and, except for results of global changes, very similar local environments during their childhood, would most likely turn into completely different adult humans if one of them was born in the 1920s and the other at some point in the last 30 years (roughly chosen to guarantee exposure to the idea of the internet as a teenager), and they both grew up in industrialized countries. The scientific and technological level one is born into is critical for mind development. What does it mean to consider a hypothetical world where a specific person was born into an environment very different in those respects? Why is this worth thinking about?

Comment by Sebastian_Hagen2 on On Being Decoherent · 2008-04-27T10:26:45.000Z · LW · GW

Maybe later I'll do a post about why you shouldn't panic about the Big World. You shouldn't be drawing many epistemic implications from it, let alone moral implications. As Greg Egan put it, "It all adds up to normality." Indeed, I sometimes think of this as Egan's Law.
While I'm not currently panicking about it, I'd be very interested in reading that explanation. It currently seems to me that there should be certain implications, e.g. in Quantum suicide experiments. If mangled worlds says that the entity performing such an experiment should not expect to survive many iterations, that doesn't solve the space-like version of the issue: Some of the person's alternate-selves on far away alternate-earths would be prevented from carrying out their plan by weird stuff (TM) coming in from space at just the right time.

Hopefully Anonymous asked:

10^(10^29) (is this different than 10^30?)
It's different by a factor of roughly 10^(10^29). Strictly speaking the factor is 10^(10^29-30), but making that distinction isn't much more meaningful than distinguishing between metres and lightyears at those distances.
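Since 10^(10^29) is far too large to represent directly, the comparison has to be done on the exponents; this is purely an illustration of the arithmetic in the comment:

```python
# Compare in log10 space: 10^(10^29) / 10^30 = 10^(10^29 - 30).
big_exponent = 10**29      # exponent of 10^(10^29)
small_exponent = 30        # exponent of 10^30
ratio_exponent = big_exponent - small_exponent

# Subtracting 30 from 10^29 is utterly negligible, which is why
# "roughly 10^(10^29)" is the honest way to state the factor.
print(f"ratio is 10^{ratio_exponent}")
```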

Comment by Sebastian_Hagen2 on Where Physics Meets Experience · 2008-04-25T13:13:01.000Z · LW · GW

Good writing, indeed! I also love what you've done with the Eborrian anzrf (spoiler rot13-encoded for the benefit of other readers since it hasn't been mentioned in the previous comments).
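For readers unfamiliar with the rot13 convention used for spoilers here, Python's standard library handles it directly. This is a generic illustration with a made-up string; it deliberately does not decode the spoiler above:

```python
import codecs

# rot13 shifts each letter 13 places; applying it twice is the identity,
# which is what makes it convenient for reversible spoiler-hiding.
spoiler = codecs.encode("a sample spoiler", "rot13")
print(spoiler)   # "n fnzcyr fcbvyre"
assert codecs.decode(spoiler, "rot13") == "a sample spoiler"
```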

The split/remerge attack on entities that base their anticipations of future input directly on how many of their future selves they expect to get specific input is extremely interesting to me. I originally thought that this should be a fairly straightforward problem to solve, but it has turned out a lot harder (or my understanding a lot more lacking) than I expected. I think the problem might be in the group of 500,003 brains double-counting anticipated input after the merge. They don't stay exactly the same through the merge phase; in fact, for each of the 500,000 brains in green rooms, the re-integrated previously-in-green-rooms brain only depends to a very small part on them individually. In this particular case, the re-integrated brain will still be very similar to each of the pre-integration brains; but that is just a result of the pre-integration brains all being very similar to each other. Treating the re-integrated brain as a regular future-self for the purposes of anticipating future experience under these conditions seems highly iffy to me.

Comment by Sebastian_Hagen2 on Three Dialogues on Identity · 2008-04-21T17:53:57.000Z · LW · GW

Similarly to "Zombies: The Movie", this was very entertaining, but I don't think I've learned anything new from it.

Z. M. Davis wrote:

Also, even if there are no moral facts, don't you think the fact that no existing person would prefer a universe filled with paperclips ...
Have you performed a comprehensive survey to establish this? Asserting "No existing person" in a civilization of 6.5e9 people amounts to assigning a probability of less than 1.54e-10 that a randomly chosen person would prefer a universe filled with paperclips. This is an extremely strong claim to make!
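The implied probability is just the reciprocal of the population figure; checking the arithmetic in the paragraph above:

```python
population = 6.5e9

# "No existing person" read as a claim about a randomly chosen person:
p_randomly_chosen = 1 / population
print(f"{p_randomly_chosen:.3e}")   # about 1.538e-10
```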

For example, note that the set of people alive includes a significant number of people who are certifiably insane, and in all probability others who, while reasonably sane, have gotten very fed up with various forms of torture inflicted on them over the last few days and might be willing to neglect collateral damage if they could make it stop.

If such a survey were performed, and the results were actually what you claim, I would assign a higher probability to the possibility of a nefarious anti-paperclip conspiracy having infiltrated the survey effort than to the possibility of the results being correct.

Unanimous agreement of our entire species is also a much stronger claim than you need to make for your argument.

Comment by Sebastian_Hagen2 on Configurations and Amplitude · 2008-04-10T11:05:19.000Z · LW · GW

For a rather silly reason, I wrote something about:

... explaining the lowest known layer of physics ...
Please ignore the "lowest known layer" part. I accidentally committed a mind projection fallacy while writing that comment.

Comment by Sebastian_Hagen2 on Configurations and Amplitude · 2008-04-10T09:44:56.000Z · LW · GW

A configuration can store a single complex value - "complex" as in the complex numbers (a + bi).
Any complex number? I.e. you're invoking an uncountable infinity for explaining the lowest known layer of physics? How does that fit in with being an infinite-set atheist - assuming you still hold that position?
I'm speaking as a nonphysicist reader, so I may well be missing something awfully obvious here. Any clarification would be appreciated.

Comment by Sebastian_Hagen2 on Belief in the Implied Invisible · 2008-04-08T13:43:49.000Z · LW · GW

To make it clear why you would sometimes want to think about implied invisibles, suppose you're going to launch a spaceship, at nearly the speed of light, toward a faraway supercluster. By the time the spaceship gets there and sets up a colony, the universe's expansion will have accelerated too much for them to ever send a message back.
Ah! Now I see that my earlier claim about sane utility functions not valuing things that couldn't be measured even in principle was obviously bogus. Some commenters poked holes in the idea before, but a number of issues complicating the p-zombie case prevented me from seeing how big those were. This example made it really clear to me. Thank you!

Comment by Sebastian_Hagen2 on GAZP vs. GLUT · 2008-04-07T05:34:29.000Z · LW · GW

And while a Turing machine has a state register, this can be simulated by just using N lookup tables instead of one lookup table. It seems like we have to believe that 1), the mathematical structure of a UTM relative to a giant lookup table, which is very minimal indeed, is the key element required for consciousness, ...
TMs also have the notable ability to not halt for some inputs. And if you wanted to precompute those results, writing NULL values into your GLUT, I'd really like to know where the heck you got your Halting Oracle from. The mathematical structures are very different. For a UTM, the problem of whether it will halt for an arbitrary input is undecidable; in a GLUT with NULL values, you can just look up the input string and are done.
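The decidability difference is easy to make concrete. In this hypothetical sketch (the table contents are invented for illustration), a GLUT with NULL entries answers the halting question by a single lookup, which is exactly what no algorithm can do for a general Turing machine; constructing such a table in the first place is where the halting oracle would be needed:

```python
from typing import Dict, Optional

# A hypothetical GLUT over a tiny input domain. None marks inputs on which
# the tabulated machine would never halt; filling in those entries is the
# step that requires a halting oracle.
GLUT: Dict[str, Optional[str]] = {
    "ping": "pong",
    "loop": None,     # the tabulated machine diverges on this input
}

def halts(inp: str) -> bool:
    """For the GLUT, 'does it halt?' is a trivial, always-terminating lookup."""
    return GLUT[inp] is not None

print(halts("ping"), halts("loop"))
```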

Comment by Sebastian_Hagen2 on Zombie Responses · 2008-04-05T09:56:14.000Z · LW · GW

Posting here since the other post is now at exactly 50 replies. Re michael vassar:

Sane utility functions pay attention to base rates, not just evidence, so even if it's impossible to measure a difference in principle one can still act according to a probability distribution over differences.
You're right, in principle. But how would you estimate a base rate in the absence of all empirical data? By simply using your priors? I pretty much completely agree with the rest of your paragraph.

Re Nick Tarleton:

(1) an entity without E can have identical outward behavior to an entity with E (but possibly different physical structure); and (2) you assign intrinsic value to at least some entities with E, but none without it? If so, do you have property E?
As phrased, this is too vague to answer; for one thing, "identical outward behaviour" under what circumstances? Presumably not all conceivable ones ("What if you take it apart atom by atom using MNT?"), otherwise it couldn't have a different physical structure. If you rephrased it to be precise, I strongly suspect that I would genuinely not know the answer without a lot of further research; in fact, without that research, I couldn't even be sure that there is any E for which both of your premises hold. I'm a human, and I don't really know how my value system works in edge cases. Estimating the intrinsic value of general information-processing devices with a given behaviour is pretty far removed from the cases it was originally optimized to judge.

Comment by Sebastian_Hagen2 on Zombies! Zombies? · 2008-04-04T18:35:56.000Z · LW · GW

Things that cannot be measured can still be very important, especially in regard to ethics. One may claim for example that it is ok to torture philosophical zombies, since after all they aren't "really" experiencing any pain. If it could be shown that I'm the only conscious person in this world and everybody else are p-zombies, then I could morally kill and torture people for my own pleasure.
For there to be a possibility that this "could be shown", even in principle, there would have to be some kind of measurable difference between a p-zombie and a "conscious" entity. In worlds where it is impossible to measure a difference in principle, it shouldn't have any impact on what's the correct action to take, for any sane utility function. My ethics are by necessity limited to valuing things whose presence/absence I have some way to measure, at least in principle. If they weren't, I'd have to worry about epiphenomenal pink unicorns all the time.

Does your brain assume/think it creates sensory experiences (or what people often call consciousness)?
It thinks that it receives data from its environment and processes it, maintaining a (somewhat crude) model of that environment, to create output that manipulates the environment in a predictable manner. It doesn't think that there are any non-measurable consequences of that data processing (once again: that'd be dead code in the model). If that doesn't answer your query, please state it more clearly; specifically rationalist-taboo the word "experience".

Comment by Sebastian_Hagen2 on Zombies! Zombies? · 2008-04-04T14:44:57.000Z · LW · GW

Your brain assumes that you have qualia
Actually, currently my brain isn't particularly interested in the concepts some people call "qualia"; it certainly doesn't assume it has them. If you got the idea that it did because of discussions it participated in in the past, please update your cache: This doesn't hold for my present-brain.

If qualia-concepts are shown in some point in the future to be useful in understanding the real world, i.e. specify a compact border around a high-density region of thingspace, my brain will likely become interested in them when that happens. However, this will necessarily mean that they're shown to refer to things that are actually measurable. Possibly clusters of atoms, but many kinds of exotic physical entites postulated by substance dualists would also work.

As Eliezer Yudkowsky mentioned, epiphenomenalism includes parts in a prediction program which are known to be dead code. That dead code won't ever interest my brain, except possibly to figure out where exactly the design fault in the human brain which causes some people to become epiphenomenalists is.

Comment by Sebastian_Hagen2 on Zombies! Zombies? · 2008-04-04T12:04:51.000Z · LW · GW

Consciousness might be one of those things that will never be solved (yes, I know that a statement like this is dangerous, but this time there are real reasons to believe this).
What real reasons? I don't see any. I don't consider "because it seems really mysterious" a real reason; most of the things that seemed really mysterious to some people at some point in history have turned out to be quite solvable.

Comment by Sebastian_Hagen2 on Angry Atoms · 2008-03-31T11:00:46.000Z · LW · GW

I believe there's a theorem which states that the problem of producing a Turing machine which will give output Y for input X is uncomputable in the general case.
What? That's trivial to do; a very simple general method would be to use a lookup table. Maybe you meant the inverse problem?
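The lookup-table construction being invoked here is tiny; in this sketch the (input, output) pairs are hypothetical placeholders:

```python
# Given any finite set of (input, output) pairs, a lookup table trivially
# yields a machine that produces output Y for input X -- no uncomputability
# anywhere in sight.
table = {"X": "Y", "foo": "bar"}   # hypothetical pairs

def machine(inp: str) -> str:
    return table[inp]

print(machine("X"))
```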

WHY is a human being conscious?
I don't understand this question. Please rephrase while rationalist-tabooing the word 'conscious'.

Comment by Sebastian_Hagen2 on Joy in Discovery · 2008-03-21T11:04:26.000Z · LW · GW

I wonder how this relates to tracking down hard-to-find bugs in computer programs.

And that the tremendous high comes from having hit the problem from every angle you can manage, and having bounced; and then having analyzed the problem again, using every idea you can think of, and all the data you can get your hands on - making progress a little at a time - so that when, finally, you crack through the problem, all the dangling pieces and unresolved questions fall into place at once, like solving a dozen locked-room murder mysteries with a single clue.

This sounds very similar to trying to track down a tricky bug to me. I was going to say that bug-hunting is also almost always original discovery, but the Everett-branch/Tegmark duplicate argument demolishes that idea. One important difference between bug-hunting and scientific discovery is probably the expected effort; even well-hidden bugs usually don't take months to track down if the programmer focuses on the task.