Posts

Total Utility is Illusionary 2014-06-15T02:43:50.049Z

Comments

Comment by PlatypusNinja on Total Utility is Illusionary · 2014-06-15T05:59:47.802Z · LW · GW

I think the key difference is that delta utilitarianism handles it better when the group's utility function changes. For example, if I create a new person and add it to the group, that changes the group's utility function. Under delta utilitarianism, I explicitly don't count the preferences of the new person when making that decision. Under total utilitarianism, [most people would say that] I do count the preferences of that new person.

Comment by PlatypusNinja on Total Utility is Illusionary · 2014-06-15T05:51:34.612Z · LW · GW

I suppose you could say that it's equivalent to "total utilitarianism that only takes into account the utility of already extant people, and only takes into account their current utility function [at the time the decision is made] and not their future utility function".

(Under mere "total utilitarianism that only takes into account the utility of already extant people", the government could wirehead its constituency.)


Yes, this is explicitly inconsistent over time. I actually would argue that the utility function of any group of people will be inconsistent over time (as preferences evolve, new people join, and old people leave), and any decision-making framework needs to be able to handle that inconsistency intelligently. Failure to handle it intelligently is what leads to the Repugnant Conclusion.

Comment by PlatypusNinja on Total Utility is Illusionary · 2014-06-15T05:18:25.080Z · LW · GW

My intended solution was that, when you evaluate the utility your constituents gain from creating more people, you explicitly don't take the utility of the new people into account. I'll add a few sentences at the end of the article to try to clarify this.

Another thing I can say is that, if everyone's utility is normalized to zero at the decision point, it's not clear why you would see a utility gain from adding more people.

Comment by PlatypusNinja on Harry Potter and the Methods of Rationality discussion thread, part 10 · 2012-03-15T22:51:43.124Z · LW · GW

...Followup: Holy crap! I know exactly one person who wants Hermione to be defeated by Draco when Lucius is watching. Could H&C be Dumbledore?

Comment by PlatypusNinja on Harry Potter and the Methods of Rationality discussion thread, part 10 · 2012-03-15T22:49:18.119Z · LW · GW

My theory is that Lucius trumped up these charges against Hermione entirely independently of the midnight duel. He was furious that Hermione defeated Draco in combat, and this is his retaliation.

I doubt that Hermione attended the duel; or, if she did attend it, I doubt that anything bad happened.

My theory does not explain why Draco isn't at breakfast. So maybe my theory is wrong.


I am confused about why H&C wanted Hermione to be defeated by Draco during the big game when Lucius was watching. If you believe H&C is Quirrell (and I do): did Quirrell go to all that trouble just to impress Lucius with how his son was doing? That seems like an awful risk for not much reward.

Comment by PlatypusNinja on Harry Potter and the Methods of Rationality discussion thread, part 10 · 2012-03-12T21:55:40.483Z · LW · GW

The new Update Notifications feature (http://hpmor.com/notify/) is pretty awesome, but I have a feature request. Could we get some sort of privacy policy for that feature?

Like, maybe just a sentence at the bottom saying "we promise to only use your email address to send you HPMOR notifications, and we promise never to share your email address with a third party"?

It's not that I don't trust you guys (and in fact I have already signed up) but I like to check on these things.

Comment by PlatypusNinja on Harry Potter and the Methods of Rationality discussion thread · 2011-08-24T19:42:22.842Z · LW · GW

I think the issue was that Harry was constantly, perpetually, invariably reacting to everything with shock and outrage. It got... tiresome.

But I went back much later and read it again, and there wasn't nearly as much outrage as I remembered.

Good story!

Comment by PlatypusNinja on Harry Potter and the Methods of Rationality discussion thread · 2010-05-27T21:31:36.557Z · LW · GW

Ouch! I -- I actually really enjoyed Ender's Game. But I have to admit there's a lot of truth in that review.

Now I feel vaguely guilty...

Comment by PlatypusNinja on Harry Potter and the Methods of Rationality discussion thread · 2010-05-27T01:44:46.669Z · LW · GW

I found this series much harder to enjoy than Eliezer's other works -- for example the Super Happy People story, the Brennan stories, or the Sword of Good story.

I think the issue was that Harry was constantly, perpetually, invariably reacting to everything with shock and outrage. It got... tiresome.

At first, before I knew who the author was, I put this down to simple bad writing. Comments in Chapter 6 suggest that maybe Harry has some severe psychological issues, and that he's deliberately being written as obnoxious and hyperactive in order to meet plot criteria later.

But it's still sort of annoying to read.

I did enjoy the exchange with Draco in Chapter 5, mind.

(I encountered the series several weeks ago, without an attribution for the author. I read through Chapter 6 and stopped. Now that I know it was by Eliezer, I may go back and read a few more chapters.)

Comment by PlatypusNinja on [deleted post] 2010-04-19T19:38:03.739Z

This isn't visible, right? I will feel very bad if it turns out I am spamming the community with half-finished drafts of the same article.

Comment by PlatypusNinja on Attention Lurkers: Please say hi · 2010-04-19T05:55:38.312Z · LW · GW

Hi! I'd like to suggest two other methods of counting readers: (1) count the number of usernames that have accessed the site in the past seven days; (2) put a web counter (Google Analytics?) on the main page for a week (embed it in your post?). It might be interesting to compare the numbers.

Comment by PlatypusNinja on The (Boltzmann) Brain-In-A-Jar · 2010-03-31T23:18:59.081Z · LW · GW

The good news is that this pruning heuristic will probably be part of any AI we build. (In fact, early versions of such AIs will have to use a much stronger version of this heuristic if we want to keep them focused on the task at hand.)

So there is no danger of AIs having existential Boltzmann crises. (Although, ironically, they actually are brains-in-a-jar, for certain definitions of that term...)

Comment by PlatypusNinja on It's not like anything to be a bat · 2010-03-31T18:49:09.394Z · LW · GW

The anthropic principle lets you compute the posterior probability of some parameter V of the world, given an observable W. The observable W can be the number of humans who have lived so far, and V can be the number of humans who will ever live. The posterior probability of a V greater than 100W comes out smaller than the probability of a V only a few times larger than W.
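Spelled out, the computation I'm describing is the Doomsday-argument version (my notation, and it assumes a uniform prior on your birth rank given V):

```latex
% Treat your birth rank as uniform on {1, ..., V} given V, then update on the
% observation that your rank is W (the number of humans so far):
P(V \mid W) \;\propto\; P(W \mid V)\,P(V) \;=\; \frac{1}{V}\,P(V) \qquad \text{for } V \ge W .
% The 1/V factor is what makes V > 100W less probable a posteriori than a V
% only a few times larger than W.
```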

This argument could have been made by any intelligent being at any point in history, and up to 1500 AD or so we have strong evidence that it was wrong every time. If this is the main use of the anthropic argument, then I think we have to conclude that the anthropic argument is wrong and useless.

I would be interested in hearing examples of applications of the anthropic argument which are not vulnerable to the "depending on your reference class you get results that are either completely bogus or, in the best case, unverifiable" counterargument.

(I don't mean to pick on you specifically; lots of commenters seem to have made the above claim, and yours was simply the most clearly explained.)

Comment by PlatypusNinja on What is Bayesianism? · 2010-02-27T19:56:44.098Z · LW · GW

Personally it bothers me that the explanation asks a question which is numerically unanswerable, and then asserts that rationalists would answer it in a given way. Simple explanations are good, but not when they contain statements which are factually incorrect.

But, looking at the karma scores it appears that you are correct that this is better for many people. ^_^;

Comment by PlatypusNinja on What is Bayesianism? · 2010-02-26T23:03:52.189Z · LW · GW

A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom for cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?

Given no other information, we don't know which is more likely. We need numbers for "rarely", "most", and "exceedingly few". For example, if 10% of humans currently have a cold, and 1% of humans with a cold have a headache, but 1% of humans have a brain tumor, then the brain tumor is actually more likely.

(The calculation we're performing is: compare ("rarely" times "most") to "exceedingly few" and see which one is larger.)
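Plugging in the illustrative numbers above (made-up rates from this thread, not medical statistics):

```python
# Made-up illustrative rates from the comment above, not real medical statistics.
p_cold = 0.10                  # say 10% of people currently have a cold
p_headache_given_cold = 0.01   # "rarely a symptom": 1% of people with a cold have a headache
p_tumor = 0.01                 # 1% of people have a brain tumor
p_headache_given_tumor = 1.00  # "always causes a headache"

# Compare ("rarely" times "most") against "exceedingly few" (times 1):
p_headache_and_cold = p_cold * p_headache_given_cold      # 0.001
p_headache_and_tumor = p_tumor * p_headache_given_tumor   # 0.010

print(p_headache_and_tumor > p_headache_and_cold)  # True: with these numbers, the tumor is more likely
```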

Comment by PlatypusNinja on The Craigslist Revolution: a real-world application of torture vs. dust specks OR How I learned to stop worrying and create one billion dollars out of nothing · 2010-02-10T20:15:06.075Z · LW · GW

I would like to know more about your statement "50,000 users would surely count as a critical mass". How many users does Craigslist have in total?

I think it's especially unlikely that Craigslist would be motivated by the opinions of 50,000 Facebook users, particularly if you had not actually conducted a poll but had merely collected the answers of those who agree with you.

You should contact Craigslist and ask them what criteria would actually convince them that Craigslist users want for-charity ads.

Comment by PlatypusNinja on Shut Up and Divide? · 2010-02-10T05:41:28.114Z · LW · GW

each person could effectively cause $20,000 to be generated out of nowhere

As a rationalist, when you see a strange number like this, you have to ask yourself: Did I really just discover a way to make lots of money very efficiently? Or could it be that there was a mistake in my arithmetic somewhere?

That one billion dollars is not being generated out of nowhere. It is being generated as payment for ad clicks.

Let's check your assumptions: How much money will the average user generate from banner ad clicks in five years? How many users does Craigslist have? What fraction of those users would have to request banner ads for Craigslist to add them?

My completely uneducated guess is $100, ten million, and 50%. This matches your "generate one billion dollars" number but suggests that critical mass would be five million rather than fifty thousand. Note, also, that Facebook users are not necessarily Craigslist users.
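A quick check of that arithmetic (all three inputs are the guesses above, not data):

```python
# All three inputs are my uneducated guesses from above, not actual Craigslist data.
revenue_per_user = 100       # dollars of banner-ad revenue per user over five years
total_users = 10_000_000     # guessed number of Craigslist users
required_fraction = 0.50     # fraction of users who would have to request the ads

total_revenue = revenue_per_user * total_users     # 1,000,000,000 -- matches "one billion dollars"
critical_mass = required_fraction * total_users    # 5,000,000 users, not 50,000

print(f"${total_revenue:,} total, critical mass {critical_mass:,.0f} users")
```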

I would be interested to hear what numbers you are using. Mine could easily be wrong.

Comment by PlatypusNinja on A Much Better Life? · 2010-02-09T18:18:06.416Z · LW · GW

So, people who have a strong component of "just be happy" in their utility function might choose to wirehead, and people in which other components are dominant might choose not to.

That sounds reasonable.

Comment by PlatypusNinja on A Much Better Life? · 2010-02-07T10:46:57.437Z · LW · GW

Well, I said most existing humans are opposed to wireheading, not all. ^_^;

Addiction might occur because: (a) some people suffer from the bug described above; (b) some people's utility function is naturally "I want to be happy", as in, "I want to feel the endorphin rush associated with happiness, and I do not care what causes it", so wireheading does look good to their current utility function; or (c) some people underestimate an addictive drug's ability to alter their thinking.

Comment by PlatypusNinja on A Much Better Life? · 2010-02-04T18:29:41.379Z · LW · GW

It's often difficult to think about humans' utility functions, because we're used to taking them as an input. Instead, I like to imagine that I'm designing an AI, and think about what its utility function should look like. For simplicity, let's assume I'm building a paperclip-maximizing AI: I'm going to build the AI's utility function in a way that lets it efficiently maximize paperclips.

This AI is self-modifying, meaning it can rewrite its own utility function. So, for example, it might rewrite its utility function to include a term for keeping its promises, if it determined that this would enhance its ability to maximize paperclips.

This AI has the ability to rewrite itself to "while(true) { happy(); }". It evaluates this action in terms of its current utility function: "If I wirehead myself, how many paperclips will I produce?" vs "If I don't wirehead myself, how many paperclips will I produce?" It sees that not wireheading is the better choice.

If, for some reason, I've written the AI to evaluate decisions based on its future utility function, then it immediately wireheads itself. In that case, arguably, I have not written an AI at all; I've simply written a very large amount of source code that compiles to "while(true) { happy(); }".
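As a toy sketch of the difference between those two evaluation rules (the function names and numbers are mine, not a real AI design):

```python
# Toy sketch only: the names and numbers are illustrative, not a real AI design.

def paperclips_produced(wirehead: bool) -> float:
    # Toy model of consequences: a wireheaded agent produces no further paperclips.
    return 0.0 if wirehead else 1_000_000.0

def current_utility(wirehead: bool) -> float:
    # The AI's *current* utility function: count the paperclips produced.
    return paperclips_produced(wirehead)

def future_utility(wirehead: bool) -> float:
    # The utility function the AI would have *after* the action: if it wireheads,
    # its new function reports maximal satisfaction no matter what happens.
    return float("inf") if wirehead else paperclips_produced(wirehead)

# Evaluating with the current utility function: the AI declines to wirehead.
print(max([False, True], key=current_utility))   # False

# Evaluating with the future utility function (the bug): it wireheads immediately.
print(max([False, True], key=future_utility))    # True
```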

I would argue that any humans that had this bug in their utility function have (mostly) failed to reproduce, which is why most existing humans are opposed to wireheading.

Comment by PlatypusNinja on A Much Better Life? · 2010-02-04T18:18:49.546Z · LW · GW

Humans evaluate decisions using their current utility function, not the future utility function they might have as a consequence of that decision. Using my current utility function, wireheading means I will never accomplish anything again ever, and thus I view it as having very negative utility.

Comment by PlatypusNinja on The Moral Status of Independent Identical Copies · 2009-12-01T22:54:01.596Z · LW · GW

I think I am happy with how these rules interact with the Anthropic Trilemma problem. But as a simpler test case, consider the following:

An AI walks into a movie theater. "In exchange for 10 utilons worth of cash", says the owner, "I will show you a movie worth 100 utilons. But we have a special offer: for only 1000 utilons worth of cash, I will clone you ten thousand times, and every copy of you will see that same movie. At the end of the show, since every copy will have had the same experience, I'll merge all the copies of you back into one."

Note that, although AIs can be cloned, cash cannot be. ^_^;

I claim that a "sane" AI is one that declines the special offer.

Comment by PlatypusNinja on The Moral Status of Independent Identical Copies · 2009-12-01T22:43:59.210Z · LW · GW

(I'm not sure what the rule is here for replying to oneself. Apologies if this is considered rude; I'm trying to avoid putting TLDR text in one comment.)

Here is a set of utility rules that I think would cause an AI to behave properly; a formula version follows the list. (Would I call this "Identical Copy Decision Theory"?)

  • Suppose that an entity E clones itself, becoming E1 and E2. (We're being agnostic here about which of E1 and E2 is the "original". If the clone operation is perfect, the distinction is meaningless.) Before performing the clone, E calculates its expected utility U(E) = (U(E1)+U(E2))/2.

  • After the cloning operation, E1 and E2 have separate utility functions: E1 does not care about U(E2). "That guy thinks like me, but he isn't me."

  • Suppose that E1 and E2 have some experiences, and then they are merged back into one entity E' (as described in http://lesswrong.com/lw/19d/the_anthropic_trilemma/ and elsewhere). Assuming this merge operation is possible (because the experiences of E1 and E2 were not too bizarrely disjoint), the utility of E' is the average: U(E') = (U(E1) + U(E2))/2.
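Written as formulas and generalized from two copies to N (the N-copy form is my extrapolation; the rules above state the N = 2 case):

```latex
% Splitting and merging both use a straight average, so duplication by itself
% neither creates nor destroys utility:
U(E) \;=\; \frac{1}{N}\sum_{i=1}^{N} U(E_i)
\qquad\text{and}\qquad
U(E') \;=\; \frac{1}{N}\sum_{i=1}^{N} U(E_i)
```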

Comment by PlatypusNinja on The Moral Status of Independent Identical Copies · 2009-12-01T22:33:18.253Z · LW · GW

It's difficult to answer the question of what our utility function is, but easier to answer the question of what it should be.

Suppose we have an AI which can duplicate itself at a small cost. Suppose the AI is about to witness an event which will probably make it happy. (Perhaps the AI was working to get a law passed, and the vote is due soon. Perhaps the AI is maximizing paperclips, and a new factory has opened. Perhaps the AI's favorite author has just written a new book.)

Does it make sense that the AI would duplicate itself in order to witness this event in greater multiplicity? If not, we need to find a set of utility rules that cause the AI to behave properly.

Comment by PlatypusNinja on The Value of Nature and Old Books · 2009-10-25T23:14:00.222Z · LW · GW

In modern times, some people have started to see nature more as an enemy to be conquered than as a god to be worshiped.

I've seen people argue the opposite. In ancient times, nature meant wolves and snow and parasites and drought, and you had to kill it before it killed you. Only recently have we developed the idea that nature is something to be conserved. (Because until recently, we weren't powerful enough that it mattered.)

Comment by PlatypusNinja on The Anthropic Trilemma · 2009-10-02T00:25:49.753Z · LW · GW

Note that when the trillion were told they won, they were actually being lied to - they had won a trillionth part of the prize, one way or another.

Suppose that, instead of winning the lottery, you want your friend to win the lottery. (Or you want your random number generator to crack someone's encryption key, or you want a meteor to fall on your hated enemy, etc.) Then each of the trillion people would experience the full satisfaction from whatever random result happened.

Comment by PlatypusNinja on The Anthropic Trilemma · 2009-09-28T21:04:05.973Z · LW · GW

I deny that increasing the number of physical copies increases the weight of an experience. If I create N copies of myself, there is still just one of me, plus N other agents running my decision-making algorithms. If I then merge all N copies back into myself, the resulting composite contains the utility of each copy weighted by 1/(N+1).

My feeling about the Boltzmann Brain is: I cheerfully admit that there is some chance that my experience has been produced by a random experience generator. However, in those cases, nothing I do matters anyway. Thus I don't give them any weight in my decision-making algorithm.

This solution still works correctly if the N copies of me have slightly different experiences and then forget them.

Comment by PlatypusNinja on The Sword of Good · 2009-09-04T01:32:23.232Z · LW · GW

Also: it seems like a really poor plan, in the long term, for the fate of the entire plane to rest on the sanity of one dude. If Hirou kept the sword, he could maybe try to work with the wizards -- ask them to spend one day per week healing people, make sure the crops do okay, etc. Things maybe wouldn't be perfect, but at least he wouldn't be running the risk of everybody-dies.

Comment by PlatypusNinja on The Sword of Good · 2009-09-04T01:26:45.133Z · LW · GW

I think my concern about "power corrupts" is this: humans have a strong drive to improve things. We need projects, we need challenges. When this guy gets unlimited power, he's going to take two or three passes over everything and make sure everybody's happy, and then I'm worried he's going to get very, very bored. With an infinite lifespan and unlimited power, it's sort of inevitable.

What do you do, when you're omnipotent and undying, and you realize you're going mad with boredom?

Does "unlimited power" include the power to make yourself not bored?