Comments

Comment by DefectiveAlgorithm on Zombies Redacted · 2016-07-03T21:51:20.190Z · LW · GW

I didn't say I knew which parts of the brain would differ, but to conclude that therefore none would is to confuse the map with the territory.

Comment by DefectiveAlgorithm on Zombies Redacted · 2016-07-03T18:31:56.560Z · LW · GW

"I would like to suggest zombies of a second kind: a person with an inverted spectrum. It could even be my copy, who speaks all the same philosophical nonsense as I do, but any time I see green, he sees red yet names it green. Is he possible?"

Such an entity is possible, but would not be an atom-exact copy of you.

Comment by DefectiveAlgorithm on Avoiding strawmen · 2016-06-25T10:12:59.972Z · LW · GW

...Has someone been mass downvoting you?

Comment by DefectiveAlgorithm on How To Win The AI Box Experiment (Sometimes) · 2015-09-17T04:00:58.565Z · LW · GW

What if, like me, you consider it extremely implausible that even a strong superintelligence would be sentient unless it were explicitly programmed to be (or at least deliberately created with a very human-like cognitive architecture), and you also consider any sentient AI vastly more likely than a non-sentient one to be unfriendly?

Comment by DefectiveAlgorithm on The Consequences of Dust Theory. · 2015-07-09T22:07:52.596Z · LW · GW

I've never heard of 'Dust Theory' before, but I should think it follows trivially from most large multiverse theories, does it not?

Comment by DefectiveAlgorithm on Crazy Ideas Thread · 2015-07-09T21:56:31.287Z · LW · GW

Trigger warning: memetic hazard.

Abj guvax nobhg jung guvf zrnaf sbe nalbar jub unf rire qvrq (be rire jvyy).

I'm not too concerned, but primarily because I still have a lot of uncertainty as to how to approach that sort of question. My mind still spits out some rather nasty answers.

EDIT: I just realized that you were probably intentionally implying exactly what I just said, which makes this comment rather redundant.

Comment by DefectiveAlgorithm on Beware the Nihilistic Failure Mode · 2015-07-09T20:52:01.578Z · LW · GW

What bullet is that? I implicitly agreed that murder is wrong (as per the way I use the word 'wrong') when I said that your statement wasn't a misinterpretation. It's just that, as I mentioned before, I don't care a whole lot about the thing I call 'morality'.

Comment by DefectiveAlgorithm on Beware the Nihilistic Failure Mode · 2015-07-09T20:33:22.750Z · LW · GW

What I meant when I called myself a nihilist was essentially that there was no such thing as an objective, mind-independent morality. Nothing more. I would still consider myself a nihilist in that sense (and I expect most on this site would), but I don't call myself that because it could cause confusion.

"Can you explain how the statement 'A world in which everyone but me does not murder is preferable to a world in which everyone including me does not murder' is a misinterpretation of this quotation?"

It isn't, although that doesn't mean I would necessarily murder in such a world.

EDIT: Well, my nihilism was also a justification for the belief that it's silly to care about morality, and in that respect, at least, I'm no longer the kind of nihilist I was. That was just one aspect of my 'my eccentricities make me superior, everyone else's eccentricities are silly' phase, which I think I moved beyond around the time I stopped being a teenager.

Comment by DefectiveAlgorithm on Beware the Nihilistic Failure Mode · 2015-07-09T20:21:18.363Z · LW · GW

That's my point. You're saying the 'nihilists' are wrong, when you may in fact be disagreeing with a viewpoint that most nihilists don't actually hold, on account of their using the words 'nihilism' and/or 'morality' differently from you. And yes, I suppose in that sense my 'morality' does tie into my actual values, but only my values as applied to an unrealistic thought experiment. Then again, a world in which everyone but me adhered to my notions of morality (and I wasn't penalized for not doing so) would still be preferable, to me, to a world in which everyone including me did.

Comment by DefectiveAlgorithm on Beware the Nihilistic Failure Mode · 2015-07-09T19:54:12.735Z · LW · GW

I mean that what I call my 'morality' isn't intended to be a map of my utility function, imperfect or otherwise. Along the same lines, you're objecting that self-proclaimed moral nihilists have an inaccurate notion of their own utility function, when it's quite possible that they don't consider their 'moral nihilism' to be a statement about their utility function at all. I called myself a moral nihilist for quite a while without meaning anything like what you're talking about here. I knew that I had preferences, I knew (roughly) what those preferences were, I would knowingly act on those preferences, and I didn't consider my nihilism to be in conflict with that at all. I still wouldn't. As for what I do mean by morality, it's kinda hard to put into words, but if I had to try I'd probably go with something like 'the set of rules for social functioning and personal behavior that produce as desirable a world as possible the more closely the general population follows them, given that one doesn't get to choose one's position in that world'.

EDIT: But that probably still doesn't capture my true meaning, because my real motive was closer to something like 'society's full of people coming up with ideas of right and wrong the adherence to which wouldn't create societies that would actually be particularly great to live in, so, being a rather competitive person, I want to see if I can do better', nothing more.

Comment by DefectiveAlgorithm on Beware the Nihilistic Failure Mode · 2015-07-09T19:15:31.614Z · LW · GW

Personally, when I use the word 'morality' I'm not using it to mean 'what someone values'. I value my own morality very little, and developed it mostly for fun. Somewhere along the way I think I internalized it at least a little, but it still doesn't mean much to me, and seeing it violated has no perceivable impact on my emotional state. Now, this may just be unusual terminology on my part, but I've found that a lot of people, judging by what they say about 'morality', at least appear to be using the term similarly to me.

Comment by DefectiveAlgorithm on Superintelligence 19: Post-transition formation of a singleton · 2015-03-17T01:22:52.117Z · LW · GW

I think a big part of it is that I don't really care about other people except instrumentally. I care terminally about myself, but only because I experience my own thoughts and feelings first-hand. If I knew I were going to be branched, then I'd care about both copies in advance, as both would be valid continuations of my current sensory stream. However, once the branch had taken place, each copy would immediately stop caring about the other (although I expect they would still behave altruistically towards each other for decision-theoretic reasons). I suspect this has also influenced my sense of morality: I've never been attracted to total utilitarianism, as I've never been able to see why the existence of X people should be considered superior to the existence of Y < X equally satisfied people.

So yeah, that's part of it, but not all of it (if that were the extent of it, I'd be indifferent to the existence of copies, not opposed to it). The rest is hard to put into words, and I suspect that even were I to succeed in doing so I'd only have succeeded in manufacturing a verbal rationalization. Part of it is instrumental, each copy would be a potential competitor, but that's insufficient to explain my feelings on the matter. This wouldn't be applicable to, say, the Many-Worlds Interpretation of quantum mechanics, and yet I'm still bothered by that interpretation as it implies constant branching of my identity. So in the end, I think that I can't offer a verbal justification for this preference precisely because it's a terminal preference.

Comment by DefectiveAlgorithm on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 118 · 2015-03-10T04:19:56.816Z · LW · GW

To approximately the same extent that I'd consider myself to exist in the event of any other form of information-theoretic death. Like, say, getting repeatedly shot in the head with a high-powered rifle, or having my brain dissolved in acid.

Comment by DefectiveAlgorithm on Superintelligence 24: Morality models and "do what I mean" · 2015-02-24T20:54:34.583Z · LW · GW

I mean the sufficiency of the definition given. Consider a universe which absolutely, positively, was not created by any sort of 'god', the laws of physics of which happen to be wired such that torturing people lets you levitate, regardless of whether the practitioner believes he has any sort of moral justification for the act. This universe's physics are wired this way not because of some designer deity's idea of morality, but simply by chance. I do not believe that most believers in objective morality would consider torturing people to be objectively good in this universe.

Comment by DefectiveAlgorithm on Superintelligence 24: Morality models and "do what I mean" · 2015-02-24T17:19:37.838Z · LW · GW

Hm. I'll acknowledge that's consistent (though I maintain that calling that 'morality' is fairly arbitrary), but I have to question whether that's a charitable interpretation of what modern believers in objective morality actually believe.

Comment by DefectiveAlgorithm on Superintelligence 24: Morality models and "do what I mean" · 2015-02-24T17:11:31.338Z · LW · GW

Ok, I understand it in that context, as there are actual consequences. Of course, this also makes the answer trivial: it's obviously relevant, since it gives you advantages you wouldn't otherwise have. Though even in the sense you've described, I'm not sure whether the word 'morality' really seems applicable. If torturing people let us levitate, would we call that 'objective morality'?

EDIT: To be clear, my intent isn't to nitpick. I'm simply saying that patterns of behavior being encoded, detected and rewarded by the laws of physics doesn't obviously seem to equate those patterns with 'morality' in any sense of the word that I'm familiar with.

Comment by DefectiveAlgorithm on Superintelligence 24: Morality models and "do what I mean" · 2015-02-24T16:09:12.822Z · LW · GW

I have no idea what 'there is an objective morality' would mean, empirically speaking.

Comment by DefectiveAlgorithm on Can we decrease the risk of worse-than-death outcomes following brain preservation? · 2015-02-22T13:32:36.230Z · LW · GW

More concerning to me than an outright unfriendly AI is an AI whose creators attempted to make it friendly but only partially succeeded, such that our state is relevant to its utility calculations, but not necessarily in ways we'd like.

Comment by DefectiveAlgorithm on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 108 · 2015-02-21T12:32:30.853Z · LW · GW

I don't think Harry meant to imply that actually running this test would be nice, but rather that one cannot even think of running this test without first thinking of the possibility of making a horcrux for someone else (something which is more-or-less nice-ish in itself, the amorality inherent in creating a horcrux at all notwithstanding).

Comment by DefectiveAlgorithm on [LINK] Wait But Why - The AI Revolution Part 2 · 2015-02-08T03:36:33.543Z · LW · GW

A paperclip maximizer won't wirehead because it doesn't value world states in which its goals have been satisfied, it values world states that have a lot of paperclips.

In fact, taboo 'values'. A paperclip maximizer is an algorithm whose output approximates whichever output leads to world states with the greatest expected number of paperclips. This is the template for maximizer-type AGIs in general.
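
A minimal sketch of that template, purely illustrative (the action set, world model, and paperclip counter are hypothetical stand-ins): the agent scores predicted world states directly by their paperclip count, so there is no internal 'satisfaction' signal for it to wirehead on.

```python
def choose_action(actions, predict_world, count_paperclips):
    """Pick the action whose predicted resulting world contains the most paperclips.

    `actions` is an iterable of candidate actions, `predict_world` maps an action to a
    predicted world state, and `count_paperclips` scores that state. All three are
    hypothetical stand-ins for the agent's actuators, world model, and goal.
    """
    best_action, best_count = None, float("-inf")
    for action in actions:
        predicted_state = predict_world(action)
        # The score is computed on the predicted world, not on any internal reward
        # register, so tampering with the agent's own perceptions doesn't raise it.
        score = count_paperclips(predicted_state)
        if score > best_count:
            best_action, best_count = action, score
    return best_action
```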

Comment by DefectiveAlgorithm on Superintelligence 19: Post-transition formation of a singleton · 2015-01-25T21:31:42.317Z · LW · GW

Because I terminally value the uniqueness of my identity.

Comment by DefectiveAlgorithm on Superintelligence 19: Post-transition formation of a singleton · 2015-01-24T12:15:14.834Z · LW · GW

What would an AI that 'cares' in the sense you spoke of be able to do to address this problem that a non-'caring' one wouldn't?

Comment by DefectiveAlgorithm on Superintelligence 19: Post-transition formation of a singleton · 2015-01-22T23:45:45.129Z · LW · GW

Kind of. I wouldn't defect against my copy without his consent, but I would want the pool trimmed down to only a single version of myself (ideally whichever one had the highest expected future utility, all else being equal). The copy, being a copy, should want the same thing. The only time I wouldn't be opposed to the existence of multiple instances of myself would be if those instances could regularly synchronize their memories and experiences (and thus constitute more of a single distributed entity with mere synchronization delays than a set of multiple diverging entities).

Comment by DefectiveAlgorithm on Superintelligence 19: Post-transition formation of a singleton · 2015-01-22T04:52:54.243Z · LW · GW

Leaving aside other matters, what does it matter if an FAI 'cares' in the sense that humans do so long as its actions bring about high utility from a human perspective?

Comment by DefectiveAlgorithm on Learn Three Things Every Day · 2015-01-21T18:22:50.677Z · LW · GW

This post starts off on a rather spoiler-ish note.

Comment by DefectiveAlgorithm on Superintelligence 19: Post-transition formation of a singleton · 2015-01-21T17:37:57.943Z · LW · GW

My first thought (in response to the second question) is 'immediately terminate myself, leaving the copy as the only valid continuation of my identity'.

Of course, it is questionable whether I would have the willpower to go through with it. I believe that my copy's mind would constitute just as 'real' a continuation of my consciousness as would my own mind following a procedure that removed the memories of the past few days (or however long since the split) whilst leaving all else intact (which is of course just a contrived-for-the-sake-of-the-thought-experiment variety of the sort of forgetting that we undergo all the time), but I have trouble alieving it.

Comment by DefectiveAlgorithm on Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda · 2014-11-27T00:38:19.323Z · LW · GW

Even leaving aside the matter of 'permission' (which leads into awkward questions of informed consent), as well as the difficulty of defining concepts like 'people' and 'property', define 'do things to X'. Every action affects others. If you so much as speak a word, you're causing others to undergo the experience of hearing that word spoken. For an AGI, even thinking draws a minuscule amount of electricity from the power grid, which has near-negligible but quantifiable effects on the power industry, which will in turn affect humans in any number of different ways. If you take chaos theory seriously, you could take this even further. It may seem obvious to a human that there's a vast difference between innocuous actions like those in the above examples and actions that are potentially harmful, but lots of things are intuitively obvious to humans and yet turn out to be extremely difficult to quantify precisely, and this seems like just such a case.

Comment by DefectiveAlgorithm on What are your contrarian views? · 2014-11-26T11:24:11.890Z · LW · GW

I know what terminal values are, and I apologize if the intent behind my question was unclear. To clarify, my request was specifically for a definition in the context of human beings - that is, entities with cognitive architectures that have no explicitly defined utility function and that comprise multiple interacting subsystems which may value different things (i.e. emotional vs. deliberative systems). I'm well aware of the huge impact my emotional subsystem has on my decision making. However, I don't consider it 'me' - rather, I consider it an external black box which interacts very closely with that which I do identify as me (mostly my deliberative system). I can acknowledge the strong influence it has on my motivations whilst explicitly holding a desire that this not be so, a desire which would in certain contexts lead me to knowingly make decisions that would irreversibly sacrifice a significant portion of my expected future pleasure.

To follow up on my initial question, it had been intended to lay the groundwork for this followup: What empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole 'actual' terminal value of a human is pleasure?

Comment by DefectiveAlgorithm on What are your contrarian views? · 2014-11-26T01:49:08.340Z · LW · GW

Can you define 'terminal values', in the context of human beings?

Comment by DefectiveAlgorithm on Stupid Questions (10/27/2014) · 2014-10-30T08:03:12.727Z · LW · GW

"If the universe is infinite, then there are infinitely many copies of me, following the same algorithm"

Does this follow? The set of computable functions is infinite, but has no duplicate elements.
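
A minimal concrete version of that counterexample, written out: infinitely many computable functions, no two of them identical, so infinitude alone doesn't force repetition.

```latex
\[
f_k(n) = n + k, \quad k \in \mathbb{N}:
\qquad \{\, f_k \,\}_{k \in \mathbb{N}} \text{ is infinite, yet } f_j \neq f_k \text{ whenever } j \neq k .
\]
```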

Comment by DefectiveAlgorithm on Fighting Biases and Bad Habits like Boggarts · 2014-08-22T17:45:34.442Z · LW · GW

"Comments (1)"

"There doesn't seem to be anything here."

????

Comment by DefectiveAlgorithm on A simple game that has no solution · 2014-07-23T01:55:32.121Z · LW · GW

I think this should get better and better for P1 the closer P1 gets to (2/3)C (1/3)B (without actually reaching it).

Comment by DefectiveAlgorithm on Advice for AI makers · 2014-06-29T07:28:26.002Z · LW · GW

How so?

Comment by DefectiveAlgorithm on Advice for AI makers · 2014-06-29T02:47:36.858Z · LW · GW

I do think 'a disagreement on utility calculations' may indeed be a big part of it. Are you a total utilitarian? I'm not. A big part of that comes from the fact that I don't consider two copies of myself to be intrinsically more valuable than one - perhaps instrumentally more valuable, if those copies can interact, sync their experiences and cooperate, but that's another matter. With experience-syncing, I am mostly indifferent to the number of copies of myself that exist (leaving aside potential instrumental benefits), but without it I assign decreasing utility as the number of copies increases, since I place zero terminal value on multiplicity but positive terminal value on the uniqueness of my identity.

My brand of utilitarianism is informed substantially by these preferences. I adhere to neither average nor total utilitarianism, but I lean closer to average. Whilst I would be against the use of force to turn a population of 10 with X utility each into a population of 3 with (X + 1) utility each, I would in isolation consider the latter preferable to the former (there is no inconsistency here - my utility function simply admits information about the past).
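
For concreteness, here is how the two standard aggregation rules score that example (treating the stated utilities as the only consideration): the total rule's preference flips depending on X, while the average rule always favors the smaller, happier population.

```latex
\[
\text{Total: } 10X \gtrless 3(X+1) \iff X \gtrless \tfrac{3}{7},
\qquad
\text{Average: } X < X + 1 \ \text{always}.
\]
```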

Comment by DefectiveAlgorithm on On Terminal Goals and Virtue Ethics · 2014-06-24T11:50:49.241Z · LW · GW

Well, ok, but if you agree with this then I don't see how you can claim that such a system would be particularly useful for solving FAI problems.

Comment by DefectiveAlgorithm on On Terminal Goals and Virtue Ethics · 2014-06-24T09:55:20.036Z · LW · GW

Ok, but a system like you've described isn't likely to think about what you want it to think about or produce output that's actually useful to you either.

Comment by DefectiveAlgorithm on On Terminal Goals and Virtue Ethics · 2014-06-24T09:10:11.836Z · LW · GW

"an Oracle AI you can trust"

That's a large portion of the FAI problem right there.

EDIT: To clarify, by this I don't mean to imply that FAI is easy, but that (trustworthy) Oracle AI is hard.

Comment by DefectiveAlgorithm on On Terminal Goals and Virtue Ethics · 2014-06-19T22:34:06.241Z · LW · GW

No. Clippy cannot be persuaded away from paperclipping because maximizing paperclips is its only terminal goal.

Comment by DefectiveAlgorithm on On Terminal Goals and Virtue Ethics · 2014-06-19T19:45:25.112Z · LW · GW

If acquiring bacon were your ONLY terminal goal, then yes, it would be irrational not to do absolutely everything you could to maximize your expected bacon. However, most people have more than just one terminal goal. You seem to be using 'terminal goal' to mean 'a goal more important than any other'. Trouble is, no one else is using it that way.

EDIT: Actually, it seems to me that you're using 'terminal goal' to mean something analogous to a terminal node in a tree search (if you can reach that node, you're done). No one else is using it that way either.

Comment by DefectiveAlgorithm on On Terminal Goals and Virtue Ethics · 2014-06-19T19:40:10.047Z · LW · GW

Consider an agent trying to maximize its Pacman score. 'Getting a high Pacman score' is a terminal goal for this agent - it doesn't want a high score because that would make it easier for it to get something else, it simply wants a high score. On the other hand, 'eating fruit' is an instrumental goal for this agent - it only wants to eat fruit because that increases its expected score, and if eating fruit didn't increase its expected score then it wouldn't care about eating fruit.

That is the only difference between the two types of goals. Knowing that one of an agent's goals is instrumental and another terminal doesn't tell you which goal the agent values more.
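
A toy illustration of that distinction (all names and numbers here are made up): the score is valued for its own sake, while 'eat fruit' is valued only through the model's prediction that it raises the score, and stops mattering the moment that prediction changes.

```python
def predicted_score(action, fruit_gives_points=True):
    """Hypothetical world model: how many points does this action yield?"""
    if action == "eat_fruit":
        return 100 if fruit_gives_points else 0
    return 10  # some other action, e.g. heading for the next power pellet

# The agent pursues fruit only while the model says fruit raises the terminal quantity (the score):
print(predicted_score("eat_fruit"))                            # 100 -> fruit is worth pursuing
print(predicted_score("eat_fruit", fruit_gives_points=False))  # 0   -> fruit is ignored
```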

Comment by DefectiveAlgorithm on Fake Utility Functions · 2014-06-19T14:02:45.800Z · LW · GW

"a terminal goal of interpreting instructions correctly"

There is a huge amount of complexity hidden beneath this simple description.

Comment by DefectiveAlgorithm on Total Utility is Illusionary · 2014-06-15T05:27:47.554Z · LW · GW

Isn't this equivalent to total utilitarianism that only takes into account the utility of already extant people? Also, isn't this inconsistent over time (someone who used this as their ethical framework could predict specific discontinuities in their future values)?

Comment by DefectiveAlgorithm on Curiosity: Why did you mega-downvote "AI is Software" ? · 2014-06-12T20:52:19.252Z · LW · GW

The primary issue? No matter how many times I read your post, I still don't know what your claim actually is.

Comment by DefectiveAlgorithm on What do rationalists think about the afterlife? · 2014-05-16T06:27:30.704Z · LW · GW

Is this any more than a semantic quibble?

Comment by DefectiveAlgorithm on What do rationalists think about the afterlife? · 2014-05-14T12:54:43.486Z · LW · GW

While I do still find myself quite uncertain about the concept of 'quantum immortality', not to mention the even stronger implications of certain multiverse theories, those don't seem to be the kind of thing you're talking about. I submit that 'there is an extant structure, not found within our best current models of reality, isomorphic to a very specific (and complex) type of computation on a very specific (and complex) set of data (i.e. your memories and anything else that comprises your 'identity')' is not a simple proposition.

Comment by DefectiveAlgorithm on What are some science mistakes you made in college? · 2014-03-24T07:43:50.017Z · LW · GW

After reading this, I became incapable of giving finite time estimates for anything. :/

Comment by DefectiveAlgorithm on Reference Frames for Expected Value · 2014-03-17T16:28:31.238Z · LW · GW

Isn't expected value essentially 'actual value, to the extent that it is knowable in my present epistemic state'? Expected value reduces to 'actual value' when the latter is fully knowable.

EDIT: Oh, you said this in the post. This is why I should read a post before commenting on it.
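
In symbols, a minimal sketch of that reduction: expected value is probability-weighted actual value, and it collapses to the actual value once the relevant uncertainty is gone (a point-mass distribution on the true world state).

```latex
\[
\mathbb{E}[V] \;=\; \sum_{w} P(w)\, V(w) \;=\; V(w^{*}) \quad \text{when } P(w^{*}) = 1 .
\]
```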

Comment by DefectiveAlgorithm on Embracing the "sadistic" conclusion · 2014-02-13T11:32:14.541Z · LW · GW

This is (one of the reasons) why I'm not a total utilitarian (of any brand). For future versions of myself, my preferences align pretty well with average utilitarianism (albeit with some caveats), but I haven't yet found or devised a formalization which captures the complexities of my moral intuitions when applied to others.

Comment by DefectiveAlgorithm on Terminal and Instrumental Beliefs · 2014-02-13T10:01:58.563Z · LW · GW

This sounds a lot like quantum suicide, except... without the suicide. So those versions of yourself who don't get what they want (which may well be all of them) still end up in a world where they've experienced not getting what they want. What do those future versions of yourself want then?

EDIT: Ok, this would have worked better as a reply to Squark's scenario, but it still applies whenever this philosophy of yours is applied to anything directly (in the practical sense) observable.

Comment by DefectiveAlgorithm on L-zombies! (L-zombies?) · 2014-02-08T08:25:09.882Z · LW · GW

If L-zombies have conscious experience (even when not being 'run'), does the concept even mean anything? Is there any difference, even in principle, between such an L-zombie and a 'real' person?