Morality as Parfitian-filtered Decision Theory?

post by SilasBarta · 2010-08-30T21:37:22.051Z · LW · GW · Legacy · 273 comments

Contents

  Introduction: What kind of mind survives Parfit's Dilemma?
  Sustainable self-replication as a Parfitian filter
  Explanatory value of utility functions
  SAMELs vs. CaMELs, Morality vs. Selfishness
  Joshua Greene, Revisited: When rationalizing wins
  The Prisoner’s Dilemma, Revisited: Self-sacrificial caring is (sometimes) self-optimizing
  Conclusion
  Footnotes:

Non-political follow-up to: Ungrateful Hitchhikers (offsite)

 

Related to: Prices or Bindings?, The True Prisoner's Dilemma

 

Summary: Situations like the Parfit's Hitchhiker problem select for a certain kind of mind: specifically, one that recognizes that an action can be optimal, in a self-interested sense, even if it can no longer cause any future benefit.  A mind that can identify such actions might put them in a different category which enables it to perform them, in defiance of the (futureward) consequentialist concerns that normally need to motivate it.  Our evolutionary history has put us through such "Parfitian filters", and the corresponding actions, viewed from the inside, feel like "something we should do", even if we don’t do it, and even if we recognize the lack of a future benefit.  Therein lies the origin of our moral intuitions, as well as the basis for creating the category "morality" in the first place.

 

Introduction: What kind of mind survives Parfit's Dilemma?

 

Parfit's Dilemma – my version – goes like this: You are lost in the desert and near death.  A superbeing known as Omega finds you and considers whether to take you back to civilization and stabilize you.  It is a perfect predictor of what you will do, and only plans to rescue you if it predicts that you will, upon recovering, give it $0.01 from your bank account.  If it doesn’t predict you’ll pay, you’re left in the desert to die. [1]
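
To make the selection effect concrete, here is a toy sketch in Python (the payoff numbers are invented for illustration; Omega's "perfect prediction" is modeled as simply running the agent's own decision procedure):

```python
SURVIVAL_VALUE = 1_000_000   # hypothetical value of being rescued
PRICE = 0.01                 # cost of paying Omega afterward

def omega_predicts_payment(disposition):
    # Perfect prediction, modeled as consulting the agent's decision procedure:
    # would this mind pay once it has already been rescued?
    return disposition()

def causal_reasoner():
    # After the rescue, paying causes no future benefit, so a purely
    # futureward-consequentialist mind refuses.
    return False

def parfitian_reasoner():
    # Recognizes that being the kind of mind that pays is what gets it rescued,
    # so it pays even though the benefit lies entirely in the past.
    return True

def outcome(disposition):
    if omega_predicts_payment(disposition):
        return SURVIVAL_VALUE - PRICE   # rescued, then pays a penny
    return 0                            # left in the desert

print(outcome(causal_reasoner))      # 0
print(outcome(parfitian_reasoner))   # 999999.99
```

Only the second kind of mind is ever around afterward to weigh in on whether its policy was a good one.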

 

So what kind of mind wakes up from this?  One that would give Omega the money.  Most importantly, the mind is not convinced to withhold payment on the basis that the benefit was received only in the past.  Even if it recognizes that no future benefit will result from this decision -- and only future costs will result -- it decides to make the payment anyway.

 

If a mind is likely to encounter such dilemmas, it would be an advantage to have a decision theory capable of making this kind of "un-consequentialist" decision.  And if a decision theory passes through time by being lossily stored by a self-replicating gene (and some decompressing apparatus), then only those that come to encode this kind of mentality will be capable of propagating themselves through Parfit's Hitchhiker-like scenarios (call these scenarios "Parfitian filters").

 

Sustainable self-replication as a Parfitian filter

 

Though evolutionary psychology has its share of pitfalls, one question should have an uncontroversial solution: "Why do parents care for their children, usually at great cost to themselves?"  The answer is that their desires are largely set by evolutionary processes, in which a “blueprint” is slightly modified over time, and the more effective self-replicating blueprint-pieces dominate the construction of living things.  Parents that did not have sufficient "built-in desire" to care for their children would be weeded out; what's left is (genes that construct) minds that do have such a desire.

 

This process can be viewed as a Parfitian filter: regardless of how much parents might favor their own survival and satisfaction, they could not get to that point unless they were "attached" to a decision theory that outputs actions sufficiently more favorable toward one's children than toward oneself.  Addendum (per pjeby's comment): The parallel to Parfit's Hitchhiker is this: natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker. The mind only gets to the "decide to pay"/"decide to care for children" stage if it had the right decision theory before the "rescue"/"copy to the next generation".

 

Explanatory value of utility functions

 

Let us turn back to Parfit’s Dilemma, an idealized example of a Parfitian filter, and consider the task of explaining why someone decided to pay Omega.  For simplicity, we’ll limit ourselves to two theories:

 

Theory 1a: The survivor’s utility function places positive weight on benefits both to the survivor and to Omega; in this case, the utility of “Omega receiving the $0.01” (as viewed by the survivor’s function) exceeds the utility of keeping it.

Theory 1b: The survivor’s utility function only places weight on benefits to him/herself; however, the survivor is limited to using decision theories capable of surviving this Parfitian filter.
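
To see how little the observable behavior distinguishes them, here is a minimal sketch (the numbers are invented; both decision procedures are deliberately simplistic):

```python
PRICE = 0.01

def theory_1a_decision():
    # Utility function places terminal value on Omega's gain; by assumption
    # that value (0.02, invented) exceeds the cost of paying.
    utility_of_paying = 0.02 - PRICE
    utility_of_keeping = 0.0
    return "pay" if utility_of_paying > utility_of_keeping else "keep"

def theory_1b_decision():
    # Purely self-interested utility function, but the agent is restricted to
    # decision theories that survive the Parfitian filter -- i.e. ones that pay.
    return "pay"

assert theory_1a_decision() == theory_1b_decision() == "pay"
# Identical observable behavior; the theories differ only in assumed internals.
```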

 

The theories are observationally equivalent, but 1a is worse because it makes strictly more assumptions: in particular, the questionable one that the survivor somehow values Omega in some terminal, rather than instrumental sense. [2] The same analysis can be carried over to the earlier question about natural selection, albeit disturbingly.  Consider these two analogous theories attempting to explain the behavior of parents:

 

Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.

Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.

 

The point here is not to promote some cynical, insulting view of parents; rather, I will show how closely this “acausal self-interest” aligns with the behavior we laud as moral.

 

SAMELs vs. CaMELs, Morality vs. Selfishness

 

So what makes an issue belong in the “morality” category in the first place?  For example, the decision of which ice cream flavor to choose is not regarded as a moral dilemma.  (Call this Dilemma A.)  How do you turn it into a moral dilemma?  One way is to make the decision have implications for the well-being of others: "Should you eat your favorite ice cream flavor, instead of your next-favorite, if doing so shortens the life of another person?"  (Call this Dilemma B.)

 

Decision-theoretically, what is the difference between A and B?  Following Gary Drescher's treatment in Chapter 7 of Good and Real, I see another salient difference: You can reach the optimal decision in A by looking only at causal means-end links (CaMELs), while Dilemma B requires that you consider the subjunctive acausal means-end links (SAMELs).  Less jargonishly, in Dilemma B, an ideal agent will recognize that their decision to pick their favorite ice cream at the expense of another person suggests that others in the same position will do (and have done) likewise, for the same reason.  In contrast, an agent in Dilemma A (as stated) will do no worse as a result of ignoring all such entailments.

 

More formally, a SAMEL is a relationship between your choice and the satisfaction of a goal, in which your choice does not (futurewardly) cause the goal’s achievement or failure, while in a CaMEL, it does.  Drescher argues that actions that implicitly recognize SAMELs tend to be called “ethical”, while those that only recognize CaMELs tend to be called “selfish”.  I will show how these distinctions (between causal and acausal, ethical and unethical) shed light on moral dilemmas, and on how we respond to them, by looking at some familiar arguments.
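
Here is a rough sketch of that distinction in code (every number, including the degree of correlation with similar agents, is invented purely for illustration):

```python
FLAVOR_BONUS = 1      # hypothetical selfish gain from the favorite flavor
LIFE_COST = 100       # hypothetical cost of having one's own life shortened
P_VICTIM = 0.5        # invented chance that some similar agent's choice harms me

def camel_value(choice):
    # Causal links only: the shortened life is someone else's, so a purely
    # selfish causal evaluation sees nothing but the flavor bonus.
    return FLAVOR_BONUS if choice == "favorite" else 0

def samel_value(choice):
    # Also count the subjunctive link: choosing "favorite" for these reasons
    # means relevantly similar agents choose likewise, and sometimes I am the
    # person whose life their choice shortens.
    acausal = -P_VICTIM * LIFE_COST if choice == "favorite" else 0
    return camel_value(choice) + acausal

for choice in ("favorite", "second-favorite"):
    print(choice, camel_value(choice), samel_value(choice))
# CaMELs alone favor "favorite"; adding SAMELs reverses the ranking.  In
# Dilemma A nothing hangs on others' choices, so the two evaluations agree --
# which is why A never registers as a moral dilemma.
```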

 

Joshua Greene, Revisited: When rationalizing wins

 

A while back, LW readers discussed Greene’s dissertation on morality.  In it, he reviews experiments in which people are given moral dilemmas and asked to justify their position.  The twist: normally people justify their position by reference to some consequence, but that consequence is carefully removed from being a possibility in the dilemma’s set-up.  The result?  The subjects continued to argue for their position, invoking such stopsigns as, “I don’t know, I can’t explain it, [sic] I just know it’s wrong” (p. 151, citing Haidt).

 

Greene regards this as misguided reasoning, and interprets it to mean that people are irrationally making choices, excessively relying on poor intuitions.  He infers that we need to fundamentally change how we think and talk about moral issues so as to eliminate these questionable barriers in our reasoning.

 

In light of Parfitian filters and SAMELs, I think a different inference is available to us.  First, recall that there are cases where the best choices don’t cause a future benefit.  In those cases, an agent will not be able to logically point to such a benefit as justification, despite the choice’s optimality.  Furthermore, if an agent’s decision theory was formed through evolution, their propensity to act on SAMELs (selected for due to its optimality) arose long before they were capable of careful self-reflective analysis of their choices.  This, too, can account for why most people a) opt for something that doesn’t cause a future benefit, b) stick to that choice with or without such a benefit, and c) place it in a special category (“morality”) when justifying their action.

 

This does not mean we should give up on rationally grounding our decision theory, “because rationalizers win too!”  Nor does it mean that everyone who retreats to a “moral principles” defense is really acting optimally.  Rather, it means it is far too strict to require that our decisions all cause a future benefit; we need to count acausal “consequences” (SAMELs) on par with causal ones (CaMELs) – and moral intuitions are a mechanism that can make us do this.

 

As Drescher notes, the optimality of such acausal benefits can be felt, intuitively, when making a decision, even if they are insufficient to override other desires, and even if we don’t recognize it in those exact terms (pp. 318-9):

 

Both the one-box intuition in Newcomb’s Problem (an intuition you can feel … even if you ultimately decide to take both boxes), and inclinations toward altruistic … behavior (inclinations you likewise can feel even if you end up behaving otherwise), involve what I have argued are acausal means-end relations.  Although we do not … explicitly regard the links as means-end relations, as a practical matter we do tend to treat them exactly as only means-end relations should be treated: our recognition of the relation between the action and the goal influences us to take the action (even if contrary influences sometimes prevail).

 

I speculate that it is not coincidental that in practice, we treat these means-end relations as what they really are.  Rather, I suspect that the practical recognition of means-end relations is fundamental to our cognitive machinery: it treats means-end relations (causal and acausal) as such because doing so is correct – that is, because natural selection favored machinery that correctly recognizes and acts on means-end relations without insisting that they be causal….

 

If we do not explicitly construe those moral intuitions as recognitions of subjunctive means-end links, we tend instead to perceive the intuitions as recognitions of some otherwise-ungrounded inherent deservedness by others of being treated well (or, in the case of retribution, of being treated badly).

 

To this we can add the Parfit’s Hitchhiker problem: how do you feel, internally, about not paying Omega?  One could just as easily criticize your desire to pay Omega as “rationalization”, as you cannot identify a future benefit caused by your action.  But the problem, if any, lies in failing to recognize acausal benefits, not in your desire to pay.

 

The Prisoner’s Dilemma, Revisited: Self-sacrificial caring is (sometimes) self-optimizing

 

In this light, consider the Prisoner’s Dilemma.  Basically, you and your partner-in-crime are deciding whether to rat each other out; the sum of the benefit to you both is highest if you both stay silent, but either of you can do better at the other’s expense by confessing.  (Call the scenario used to teach it the “Literal Prisoner’s Dilemma Situation”, or LPDS.)

 

Eliezer Yudkowsky previously claimed in The True Prisoner's Dilemma that mentioning the LPDS introduces a major confusion (and I agreed): real people in that situation do not, intuitively, see the payoff matrix as it's presented.  To most of us, our satisfaction with the outcome is not solely a function of how much jail time we avoid: we also care about the other person, and don't want to be a backstabber.  So, the argument goes, we need a really contrived situation to get a payoff matrix like that.

 

I suggest an alternate interpretation of this disconnect: the payoff matrix is correct, but the humans facing the dilemma have been Parfitian-filtered to the point where their decision theory contains dispositions that assist them in winning on these problems, even given that payoff matrix.  To see why, consider another set of theories to choose from, like the two above:

 

Theory 3a: Humans in a literal Prisoner’s Dilemma (LPDS) have a positive weight in their utility function both for themselves and for their accomplices, and so would be hurt to see the other one suffer jail time.

Theory 3b: Humans in a literal Prisoner’s Dilemma (LPDS) have a positive weight in their utility function only for themselves, but are limited to using a decision theory that survived past social/biological Parfitian filters.
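
Here is a sketch of how Theory 3b can still predict staying silent (textbook-style payoff numbers, chosen arbitrarily; the correlation parameter is a crude stand-in for "my accomplice's decision procedure resembles mine"):

```python
# Payoff to *me alone* (higher is better); first element is my move, second the accomplice's.
PAYOFF = {
    ("silent", "silent"):   3,
    ("silent", "confess"):  0,
    ("confess", "silent"):  5,
    ("confess", "confess"): 1,
}

def expected_payoff(my_move, p_same):
    # p_same: probability the accomplice chooses the same move I do, because a
    # similar (Parfitian-filtered) decision theory is producing it for similar reasons.
    other = {"silent": "confess", "confess": "silent"}[my_move]
    return p_same * PAYOFF[(my_move, my_move)] + (1 - p_same) * PAYOFF[(my_move, other)]

for p_same in (0.5, 0.9):   # 0.5 ~ uncorrelated strangers, 0.9 ~ heavily filtered minds
    best = max(("silent", "confess"), key=lambda m: expected_payoff(m, p_same))
    print(p_same, best)
# Uncorrelated: confess.  Sufficiently correlated: stay silent -- with no
# altruistic weight anywhere in the utility function.
```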

 

As with the point about parents, the lesson is not that you don’t care about your friends; rather, it’s that your actions based on caring are the same as those of a self-interested being with a good decision theory.  What you recognize as “just wrong” could be the feeling of a different “reasoning module” acting.

 

Conclusion

 

By viewing moral intuitions as a mechanism that allows propagation through Parfitian filters, we can better understand:

 

1) what moral intuitions are (the set of intuitions that were selected for because they saw optimality in the absence of a causal link);

2) why they arose (because agents with them pass through the Parfitian filters that weed out others, evolution being one of them); and

3) why we view this as a relevant category boundary in the first place (because they are all similar in that they elevate the perceived benefit of an action that lacks a self-serving, causal benefit).

 

Footnotes:

 

[1] My variant differs in that there is no communication between you and Omega other than knowledge of your conditional behaviors, and the price is absurdly low to make sure the relevant intuitions in your mind are firing.

 

[2] Note that 1b’s assumption of constraints on the agent’s decision theory does not penalize it, as this must be assumed in both cases, and additional implications of existing assumptions do not count as additional assumptions for purposes of gauging probabilities.

273 comments


comment by Perplexed · 2010-08-30T22:59:37.305Z · LW(p) · GW(p)

I dislike this. Here is why:

  • I dislike all examples involving omniscient beings.
  • I dislike the suggestion that natural selection fine-tuned (or filtered) our decision theory to the optimal degree of irrationality which was needed to do well in lost-in-desert situations involving omniscient beings.
  • I would prefer to assume that natural selection endowed us with a rational or near-rational decision theory and then invested its fine tuning into adjusting our utility functions.
  • I would also prefer to assume that natural selection endowed us with sub-conscious body language and other cues which make us very bad at lying.
  • I would prefer to assume that natural selection endowed us with a natural aversion to not keeping promises.
  • Therefore, my analysis of hitchhiker scenarios would involve 3 steps. (1) The hitchhiker rationally promises to pay. (2) The (non-omniscient) driver looks at the body language and estimates a low probability that the promise is a lie, so it is rational for the driver to take the hitchhiker into town. (3) The hitchhiker rationally pays because the disutility of paying is outweighed by the disutility of breaking a promise.
  • That is, instead of giving us an irrational decision theory, natural selection tuned the body language, the body language analysis capability, and the "honor" module (disutility for breaking promises) - tuned them so that the average human does well in interaction with other average humans in the kinds of realistic situations that humans face.
  • And it all works with standard game/decision theory from Econ 401. All of morality is there in the utility function as can be measured by standard revealed-preference experiments.

Parental care doesn't force us to modify standard decision theory either. Parents clearly include their children's welfare in their own utility functions.

If you and EY think that the PD players don't like to rat on their friends, all you are saying is that those standard PD payoffs aren't the ones that match the players' real utility functions, because the real functions would include a hefty penalty for being a rat.

Maybe we need a new decision theory for AIs. I don't know; I have barely begun to consider the issues. But we definitely don't need a new one to handle human moral behavior. Not for these three examples, and not if we think that acting morally is rational.

Upvoted simply for bringing these issues into the open.

Replies from: pjeby, SilasBarta, Oscar_Cunningham, Pavitra, SilasBarta, timtyler, torekp, Cyan
comment by pjeby · 2010-08-30T23:21:05.228Z · LW(p) · GW(p)

I dislike this. Here is why: [lots of stuff involving utility functions]

Humans don't operate by maximizing utility, for any definition of "utility" that isn't hideously tortured. Mostly, we simply act in ways that keep the expected value of relevant perceptual variables (such as our own feelings) within our personally-defined tolerances.

(Corollary: creating autonomous systems that are utility-maximizing is a Really Bad Idea, as they will fail in ways that humans wouldn't intuitively expect. A superhuman FAI might be capable of constructing a friendly maximizer, but a human would be an idiot to try.)
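
A crude sketch of what I mean by tolerance-keeping, as opposed to maximizing (the variables, thresholds, and actions here are all invented; this is an illustration, not any particular published model):

```python
tolerances = {"hunger": (0.0, 0.6), "social_comfort": (0.4, 1.0)}
perceptions = {"hunger": 0.8, "social_comfort": 0.7}

actions = {
    "eat":         {"hunger": -0.4},
    "call_friend": {"social_comfort": +0.2},
}

def out_of_tolerance(percepts):
    return [k for k, (lo, hi) in tolerances.items() if not lo <= percepts[k] <= hi]

def act(percepts):
    # Nothing global is maximized: the first action expected to push an
    # out-of-tolerance variable back toward its range is simply taken.
    for var in out_of_tolerance(percepts):
        needed_direction = tolerances[var][1] - percepts[var]
        for name, effects in actions.items():
            if effects.get(var, 0) * needed_direction > 0:
                return name
    return None   # everything within tolerance: no "decision" is even triggered

print(act(perceptions))   # 'eat' -- hunger (0.8) is above its upper tolerance (0.6)
```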

Replies from: SilasBarta, Perplexed, Vladimir_Nesov
comment by SilasBarta · 2010-08-31T00:59:48.222Z · LW(p) · GW(p)

I appreciate that you're criticizing the ad-hoc assumptions needed to salvage the utility function model in certain contexts, as one of my points was that several utility functions can equally well explain the same actions.

Still, could you please limit your comments about Perceptual Control Theory to points directly relevant to the issues I raised? Just link one of your previous expositions of PCT rather than use this discussion as a platform to argue for it anew.

comment by Perplexed · 2010-08-30T23:47:23.300Z · LW(p) · GW(p)

Humans don't operate by maximizing utility, for any definition of "utility" that isn't hideously tortured.

Actually, the definition of "utility" is pretty simple. It is simply "that thing that gets maximized in any particular person's decision making". Perhaps you think that humans do not maximize utility because you have a preferred definition of utility that is different from this one.

Mostly, we simply act in ways that keep the expected value of relevant perceptual variables (such as our own feelings) within our personally-defined tolerances.

Ok, that is a plausible sounding alternative to the idea of maximizing something. But the maximizing theory has been under scrutiny for 150 years, and under strong scrutiny for the past 50. It only seems fair to give your idea some scrutiny too. Two questions jump out at me:

  • What decision is made when multiple choices all leave the variables within tolerance?
  • What decision is made when none of the available choices leave the variables within tolerance?

Looking forward to hearing your answer on these points. If we can turn your idea into a consistent and plausible theory of human decision making, I'm sure we can publish it.

Replies from: Richard_Kennaway, Vladimir_Nesov, pjeby, timtyler, Christian_Szegedy, Mass_Driver
comment by Richard_Kennaway · 2010-08-31T12:01:59.211Z · LW(p) · GW(p)

Actually, the definition of "utility" is pretty simple. It is simply "that thing that gets maximized in any particular person's decision making"

Ah, "the advantage of theft over honest toil". Writing down a definite noun phrase does not guarantee the existence of a thing in reality that it names.

But the maximizing theory has been under scrutiny for 150 years, and under strong scrutiny for the past 50.

Some specific references would help in discerning what, specifically, you are alluding to here. You say in another comment in this thread:

I have mostly cited the standard textbook thought-experiments

but you have not done this at all, merely made vague allusions to "the last 150 years" and "standard economic game theory".

Well, you can't get much more standard than Von Neumann and Morgenstern's "Theory of Games and Economic Behaviour". This book does not attempt to justify the hypothesis that we maximise something when we make decisions. That is an assumption that they adopt as part of the customary background for the questions they want to address. Historically, the assumption goes back to the questions about gambling that got probability theory started, in which there is a definite thing -- money -- that people can reasonably be regarded as maximising. Splitting utility from money eliminates complications due to diminishing marginal utility of money. The Utility Theorem does not prove, or attempt to prove, that we are maximisers. It is a not very deep mathematical theorem demonstrating that certain axioms on a set imply that it is isomorphic to an interval of the real line. The hypothesis that human preferences are accurately modelled as a function from choices to a set satisfying those axioms is nowhere addressed in the text.

I shall name this the Utility Hypothesis. What evidence are you depending on for asserting it?

Replies from: wedrifid, Perplexed
comment by wedrifid · 2010-08-31T12:12:03.970Z · LW(p) · GW(p)

Ah, "the advantage of theft over honest toil". Writing down a definite noun phrase does not guarantee the existence of a thing in reality that it names.

That isn't a particularly good example. There are advantages to theft over honest toil. It is just considered inappropriate to acknowledge them.

I have a whole stash of audio books that I purchased with the fruit of 'honest toil'. I can no longer use them because they are crippled with DRM. I may be able to sift around and find the password somewhere but to be honest I suspect it would be far easier to go and 'steal' a copy.

Oh, then there's the bit where you can get a whole lot of money and stuff for free. That's an advantage!

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-08-31T13:26:33.007Z · LW(p) · GW(p)

It's a metaphor.

Replies from: wedrifid
comment by wedrifid · 2010-08-31T13:28:56.976Z · LW(p) · GW(p)

My point being that it is a bad metaphor.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T14:44:43.287Z · LW(p) · GW(p)

I liked the metaphor. Russell was a smart man. But so was von Neumann, and Aumann and Myerson must have gotten their Nobel prizes for doing something useful.

Axiomatic "theft" has its place along side empirical "toil"

Replies from: wedrifid
comment by wedrifid · 2010-08-31T16:23:39.793Z · LW(p) · GW(p)

I liked the metaphor. Russell was a smart man. But so was von Neumann, and Aumann and Myerson must have gotten their Nobel prizes for doing something useful.

So, am I to understand that you like people with Nobel prizes? If I start writing the names of impressive people can I claim some of their status for myself too? How many times will I be able to do it before the claims start to wear thin?

Replies from: Morendil, Perplexed
comment by Morendil · 2010-08-31T16:41:32.196Z · LW(p) · GW(p)

Before I broke down and hit the Kibitz button I had a strong hunch that Clippy had written the above. Interesting. ;)

comment by Perplexed · 2010-08-31T16:40:49.556Z · LW(p) · GW(p)

If I start writing the names of impressive people can I claim some of their status for myself too?

Only if you are endorsing their ideas in the face of an opposition which cannot cite such names. ;)

Sorry if it is wearing thin, but I am also tired of being attacked as if the ideas I am promoting mark me as some kind of crank.

Replies from: wedrifid
comment by wedrifid · 2010-08-31T17:05:29.320Z · LW(p) · GW(p)

Only if you are endorsing their ideas in the face of an opposition which cannot cite such names. ;)

I haven't observed other people referencing those same names both before and after your appearance having all that much impact on you. Nor have I taken seriously your attempts to present a battle between "Perplexed and all Nobel prize winners" vs "others". I'd be very surprised if the guys behind the names really had your back in these fights, even if you are convinced you are fighting in their honour.

comment by Perplexed · 2010-08-31T14:36:38.902Z · LW(p) · GW(p)

Some specific references would help in discerning what, specifically, you are alluding to here.

Sure. Happy to help. I too sometimes have days when I can't remember how to work that "Google" thing.

You mention Von Neumann and Morgenstern's "Theory of Games and Economic Behaviour" yourself - as you can see, I have added an Amazon link. The relevant chapter is #3.

Improvements to this version have been made by Savage and by Anscombe and Aumann. You can get a useful survey of the field from wikipedia. Wikipedia is an amazing resource, by the way. I strongly recommend it.

Two texts from my own bookshelf that contain expositions of this material are Chapter 1 of Myerson and Chapter 2 of Luce and Raiffa. I would recommend the Myerson. Luce and Raiffa is cheaper, but it is somewhat dated and doesn't provide much coverage at all of the more advanced topics such as correlated equilibria and the revelation principle. It does have some good material on Nash's program though.

And finally, for a bit of fun in the spirit of Project Steve, I offer this online bibliography of some of the ways this body of theory has been applied in one particular field.

The hypothesis that human preferences are accurately modelled as a function from choices to a set satisfying those axioms is nowhere addressed in the text.

I shall name this the Utility Hypothesis. What evidence are you depending on for asserting it?

Did I assert it? Where? I apologize profusely if I did anything more than to suggest that it provides a useful model for the more important and carefully considered economic decisions. I explicitly state here that the justification of the theory is not empirical. The theory is about rational decision making, not human decision making.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-08-31T19:39:50.501Z · LW(p) · GW(p)

[VNM] The relevant chapter is #3.

It is not. As I said, the authors do not attempt to justify the Utility Hypothesis; they assume it. Chapter 2 (not 3), page 8: "This problem [of what to assume about individuals in economic theory] has been stated traditionally by assuming that the consumer desires to obtain a maximum of utility or satisfaction and the entrepreneur a maximum of profits." The entire book is about the implications of that assumption, not its justification, of which it says nothing.

Improvements to this version have been made by Savage and by Anscombe and Aumann.

Neither do these authors attempt to justify the Utility Hypothesis; they too assume it. I can find Luce and Raiffa in my library and Myerson through inter-library loan, but as none of the first three works you've cited provide evidence for the claim that people have utility functions, rather than postulating it as an axiom, I doubt that these would either.

But now you deny having asserted any such thing:

Did I assert [the Utility Hypothesis]? Where?

Here you claim that people have utility functions:

I would prefer to assume that natural selection endowed us with a rational or near-rational decision theory and then invested its fine tuning into adjusting our utility functions.

And also here:

Parents clearly include their children's welfare in their own utility functions.

Here you assume that people must be talking about utility functions:

If you and EY think that the PD players don't like to rat on their friends, all you are saying is that those standard PD payoffs aren't the ones that match the players' real utility functions, because the real functions would include a hefty penalty for being a rat.

Referring to the message from which the last three quotes are taken, you say

I explicitly state here that the justification of the theory is not empirical.

and yet here you expand the phrase "prefer to assume" as :

I mean that making assumptions as I suggest leads to a much more satisfactory model of the issues being discussed here. I don't claim my viewpoint is closer to reality (though the lack of an omniscient Omega certainly ought to give me a few points for style in that contest!). I claim that my viewpoint leads to a more useful model - it makes better predictions, is more computationally tractable, is more suggestive of ways to improve human institutions, etc.

These are weasel words to let you talk about utility functions while denying you think there are any such things.

How would you set about finding a model that is closer to reality, rather than one which merely makes better predictions?

Replies from: Perplexed
comment by Perplexed · 2010-08-31T20:02:58.635Z · LW(p) · GW(p)

How would you set about finding a model that is closer to reality, rather than one which merely makes better predictions?

I would undertake an arduous self-education in neuroscience. Thankfully, I have no interest in cognitive models which are close to reality but make bad predictions. I'm no longer as good at learning whole new fields as I was when I was younger, so I would find neuroscience a tough slog.

comment by Vladimir_Nesov · 2010-08-31T00:03:03.085Z · LW(p) · GW(p)

It's a losing battle to describe humans as utility maximizers. Utility, as applied to people, is more useful in the normative sense, as a way to formulate one's wishes, allowing one to infer the way one should act in order to follow them.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T00:20:59.010Z · LW(p) · GW(p)

Nevertheless, standard economic game theory frequently involves an assumption that it is common knowledge that all players are rational utility maximizers. And the reason it does so is the belief that on the really important decisions, people work extra hard to be rational.

For this reason, on the really important decisions, utility maximization probably is not too far wrong as a descriptive theory.

Replies from: wedrifid, pjeby
comment by wedrifid · 2010-08-31T04:07:45.876Z · LW(p) · GW(p)

Nevertheless, standard economic game theory frequently involves an assumption that it is common knowledge that all players are rational utility maximizers. And the reason it does so is the belief that on the really important decisions, people work extra hard to be rational.

The reason it does so is because it is convenient.

I don't entirely agree with pjeby. Being unable to adequately approximate human preferences to a single utility function is not something that is a property of the 'real world'. It is something that is a property of our rather significant limitations when it comes to making such evaluations. Nevertheless, having a textbook prescribe official status to certain mechanisms for deriving a utility function does not make that process at all reliable.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T04:21:47.527Z · LW(p) · GW(p)

... having a textbook prescribe official status to certain mechanisms for deriving a utility function does not make that process at all reliable.

I'll be sure to remember that line, for when the people promoting other models of rationality start citing textbooks too. Well, no, I probably won't, since I doubt I will live long enough to see that. ;)

But, if I recall correctly, I have mostly cited the standard textbook thought-experiments when responding to claims that utility maximization is conceptually incoherent - so absurd that no one in their right mind would propose it.

Replies from: wedrifid
comment by wedrifid · 2010-08-31T04:26:52.146Z · LW(p) · GW(p)

I'll be sure to remember that line, for when the people promoting other models of rationality start citing textbooks too. Well, no, I probably won't, since I doubt I will live long enough to see that. ;)

I see that you are trying to be snide, but it took a while to figure out why you would believe this to be incisive. I had to reconstruct a model of what you think other people here believe from your previous rants.

But, if I recall correctly, I have mostly cited the standard textbook thought-experiments when responding to claims that utility maximization is conceptually incoherent - so absurd that no one in their right mind would propose it.

Yes. That would be a crazy thing to believe. (Mind you, I don't think pjeby believes crazy things - he just isn't listening closely enough to what you are saying to notice anything other than a nail upon which to use one of his favourite hammers.)

comment by pjeby · 2010-08-31T03:13:49.094Z · LW(p) · GW(p)

For this reason, on the really important decisions, utility maximization probably is not too far wrong as a descriptive theory.

It seems to me that what has actually been shown is that when people think abstractly (i.e. "far") about these kinds of decisions, they attempt to calculate some sort of (local and extremely context-dependent) maximum utility.

However, when people actually act (using "near" thinking), they tend to do so based on the kind of perceptual filtering discussed in this thread.

What's more, even their "far" calculations tend to be biased and filtered by the same sort of perceptual filtering processes, even when they are (theoretically) calculating "utility" according to a contextually-chosen definition of utility. (What a person decides to weigh into a calculation of "best car" is going to vary from one day to the next, based on priming and other factors.)

In the very best case scenario for utility maximization, we aren't even all that motivated to go out and maximize utility: it's still more like playing, "pick the best perceived-available option", which is really not the same thing as operating to maximize utility (e.g. the number of paperclips in the world). Even the most paperclip-obsessed human being wouldn't be able to do a good job of intuiting the likely behavior of a true paperclip-maximizing agent -- even if said agent were of only-human intelligence.

standard economic game theory frequently involves an assumption that it is common knowledge that all players are rational utility maximizers.

For me, I'm not sure that "rational" and "utility maximizer" belong in the same sentence. ;-)

In simplified economic games (think: spherical cows on a frictionless plane), you can perhaps get away with such silliness, but instrumental rationality and fungible utility don't mix under real world conditions. You can't measure a human's perception of "utility" on just a single axis!

Replies from: Perplexed
comment by Perplexed · 2010-08-31T03:24:36.359Z · LW(p) · GW(p)

For me, I'm not sure that "rational" and "utility maximizer" belong in the same sentence. ;-)

In simplified economic games (think: spherical cows on a frictionless plane), you can perhaps get away with such silliness, but instrumental rationality and fungible utility don't mix under real world conditions.

You have successfully communicated your scorn. You were much less successful at convincing anyone of your understanding of the facts.

You can't measure a human's perception of "utility" on just a single axis!

And you can't (consistently) make a decision without comparing the alternatives along a single axis. And there are dozens of textbooks with a chapter explaining in detail exactly how you go about doing it.

Replies from: pjeby
comment by pjeby · 2010-08-31T03:38:42.770Z · LW(p) · GW(p)

And you can't (consistently) make a decision without comparing the alternatives along a single axis.

And what makes you think humans are any good at making consistent decisions?

The experimental evidence clearly says we're not: frame a problem in two different ways, you get two different answers. Give us larger dishes of food, and we eat more of it, even if we don't like the taste! Prime us with a number, and it changes what we'll say we're willing to pay for something utterly unrelated to the number.

Human beings are inconsistent by default.

And there are dozens of textbooks with a chapter explaining in detail exactly how you go about doing it.

Of course. But that's not how human beings generally make decisions, and there is experimental evidence that shows such linearized decision algorithms are abysmal at making people happy with their decisions! The more "rationally" you weigh a decision, the less likely you are to be happy with the results.

(Which is probably a factor in why smarter, more "rational" people are often less happy than their less-rational counterparts.)

In addition, other experiments show that people who make choices in "maximizer" style (people who are unwilling to choose until they are convinced they have the best choice) are consistently less satisfied than people who are satisficers for the same decision context.

Replies from: wedrifid, Perplexed
comment by wedrifid · 2010-08-31T04:23:23.515Z · LW(p) · GW(p)

In addition, other experiments show that people who make choices in "maximizer" style (people who are unwilling to choose until they are convinced they have the best choice) are consistently less satisfied than people who are satisficers for the same decision context.

It seems there is some criterion by which you are evaluating various strategies for making decisions. Assuming you are not merely trying to enforce your deontological whims upon your fellow humans, I can infer that there is some kind of rough utility function by which you are giving your advice and advocating decision making mechanisms. While it is certainly not what we would find in Perplexed's textbooks, it is this function which can be appropriately described as a 'rational utility function'.

Of course. But that's not how human beings generally make decisions, and there is experimental evidence that shows such linearized decision algorithms are abysmal at making people happy with their decisions! The more "rationally" you weigh a decision, the less likely you are to be happy with the results.

I am glad that you included the scare quotes around 'rationally'. It is 'rational' to do what is going to get the best results. It is important to realise the difference between 'sucking at making linearized Spock-like decisions' and good decisions being in principle uncomputable in a linearized manner. If you can say that one decision sucks more than another one then you have criteria by which to sort them in a linearized manner.

Replies from: pjeby
comment by pjeby · 2010-08-31T16:54:07.418Z · LW(p) · GW(p)

If you can say that one decision sucks more than another one then you have criteria by which to sort them in a linearized manner.

Not at all. Even in pure computational systems, being able to compare two things is not the same as having a total ordering.

For example, in predicate dispatching, priority is based on logical implication relationships between conditions, but an arbitrary set of applicable conditions isn't guaranteed to have a total (i.e. linear) ordering.

What I'm saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable "utility" function necessarily loses information from that preference ordering.

That's why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It's insane.
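
A small sketch of the information loss I mean (the preference facts are invented): a partial order can say "incomparable", while any single-number utility assignment is forced to rank everything.

```python
from itertools import combinations

# Invented preference facts: each pair is (better, worse).  Nothing at all is
# stated about "career" vs. "family" -- they are incomparable, not equal.
strict_prefs = {("career", "hobby"), ("family", "hobby")}

def prefer(a, b):
    if (a, b) in strict_prefs: return ">"
    if (b, a) in strict_prefs: return "<"
    return "incomparable"

utility = {"career": 2.0, "family": 1.9, "hobby": 1.0}   # one arbitrary linearization

for a, b in combinations(utility, 2):
    forced = ">" if utility[a] > utility[b] else "<"
    print(a, "vs", b, "| actual:", prefer(a, b), "| utility says:", forced)
# For (career, family) the partial order says "incomparable", but the utility
# numbers are forced to invent an answer -- that's the injected noise.
```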

Replies from: xamdam, Perplexed, timtyler, wedrifid
comment by xamdam · 2010-08-31T19:49:10.637Z · LW(p) · GW(p)

If you can say that one decision sucks more than another one then you have criteria by which to sort them in a linearized manner.

Not at all. Even in pure computational systems, being able to compare two things is not the same as having a total ordering.

Am I correct thinking that you welcome money pumps?

Replies from: pjeby
comment by pjeby · 2010-09-01T15:38:18.612Z · LW(p) · GW(p)

Am I correct thinking that you welcome money pumps?

A partial order isn't the same thing as a cyclical ordering, and the existence of a money pump would certainly tend to disambiguate a human's preferences in its vicinity, thereby creating a total ordering within that local part of the preference graph. ;-)

Replies from: saturn
comment by saturn · 2010-09-02T21:20:15.242Z · LW(p) · GW(p)

Hypothetically, would it cause a problem if a human somehow disambiguated her entire preference graph?

Replies from: pjeby
comment by pjeby · 2010-09-02T23:26:38.086Z · LW(p) · GW(p)

Hypothetically, would it cause a problem if a human somehow disambiguated her entire preference graph?

If conscious processing is required to do that, you probably don't want to disambiguate all possible tortures where you're not really sure which one is worse, exactly.

(I mean, unless the choice is actually going to come up, is there really a reason to know for sure which kind of pliers you'd prefer to have your fingernails ripped out with?)

Now, if you limit that preference graph to pleasant experiences, that would at least be an improvement. But even then, you still get the subjective experience of a lifetime of doing nothing but making difficult decisions!

These problems go away if you leave the preference graph ambiguous (wherever it's currently ambiguous), because then you can definitely avoid simulating conscious experiences.

(Note that this also isn't a problem if all you want to do is get a rough idea of what positive and/or negative reactions someone will initially have to a given world state, which is not the same as computing their totally ordered preference over some set of possible world states.)

comment by Perplexed · 2010-08-31T17:05:32.368Z · LW(p) · GW(p)

What I'm saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable "utility" function necessarily loses information from that preference ordering.

True enough.

That's why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It's insane.

But the information loss is "just in time" - it doesn't take place until actually making a decision. The information about utilities that is "stored" is a mapping from states-of-the-world to ordinal utilities of each "result". That is, in effect, a partial order of result utilities. Result A is better than result B in some states of the world, but the preference is reversed in other states.

You don't convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment - the moment when you have to make the decision.
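
A sketch of the "just in time" collapse I'm describing (states, results, and numbers all invented):

```python
# Stored: state-indexed utilities of each result -- in effect a partial order,
# since which result is better depends on the state of the world.
STORED = {
    "umbrella":    {"rain":  2, "sun": 0},
    "no_umbrella": {"rain": -5, "sun": 1},
}

def decide(p_state):
    # Only now, at decision time, is the weighted average taken and a total
    # order over actions produced.
    expected = {
        action: sum(p_state[s] * u for s, u in by_state.items())
        for action, by_state in STORED.items()
    }
    return max(expected, key=expected.get)

for p_rain in (0.1, 0.6):
    print(p_rain, decide({"rain": p_rain, "sun": 1 - p_rain}))
# 0.1 -> 'no_umbrella', 0.6 -> 'umbrella': the stored, state-relative ranking is
# linearized only at the moment the choice must be made.
```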

Replies from: pjeby
comment by pjeby · 2010-08-31T17:16:30.998Z · LW(p) · GW(p)

You don't convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment - the moment when you have to make the decision.

Go implement yourself a predicate dispatch system (not even an AI, just a simple rules system), and then come back and tell me how you will linearize a preference order between non-mutually exclusive, overlapping conditions. If you can do it in a non-arbitrary (i.e. noise-injecting) way, there's probably a computer science doctorate in it for you, if not a math Nobel.

If you can do that, I'll happily admit being wrong, and steal your algorithm for my predicate dispatch implementation.

(Note: predicate dispatch is like a super-baby-toy version of what an actual AI would need to be able to do, and something that human brains can do in hardware -- i.e., we automatically apply the most-specific matching rules for a given situation, and kick ambiguities and conflicts up to a higher-level for disambiguation and post-processing. Linearization, however, is not the same thing as disambiguation; it's just injecting noise into the selection process.)

Replies from: Perplexed
comment by Perplexed · 2010-08-31T17:25:38.898Z · LW(p) · GW(p)

I am impressed with your expertise. I just built a simple natural deduction theorem prover for my project in AI class. Used Lisp. Python didn't even exist back then. Nor Scheme. Prolog was just beginning to generate some interest. Way back in the dark ages.

But this is relevant ... how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?

Replies from: pjeby
comment by pjeby · 2010-08-31T17:48:25.397Z · LW(p) · GW(p)

But this is relevant ... how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?

Predicate dispatch is a good analog of an aspect of human (and animal) intelligence: applying learned rules in context.

More specifically, applying the most specific matching rules, where specificity follows logical implication... which happens to be partially-ordered.

Or, to put it another way, humans have no problems recognizing exceptional conditions as having precedence over general conditions. And, this is a factor in our preferences as well, which are applied according to matching conditions.

The specific analogy here with predicate dispatch, is that if two conditions are applicable at the same time, but neither logically implies the other, then the precedence of rules is ambiguous.

In a human being, ambiguous rules get "kicked upstairs" for conscious disambiguation, and in the case of preference rules, are usually resolved by trying to get both preferences met, or at least to perform some kind of bartering tradeoff.

However, if you applied a linearization instead of keeping the partial ordering, then you would wrongly conclude that you know which choice is "better" (to a human) and see no need for disambiguation in cases that were actually ambiguous.

(Even humans' second-stage disambiguation doesn't natively run as a linearization: barter trades need not be equivalent to cash ones.)

Anyway, the specific analogy with predicate dispatch is that you really can't reduce applicability or precedence of conditions to a single number, and this problem is isomorphic to humans' native preference system. Neither at stage 1 (collecting the most-specific applicable rules) nor stage 2 (making trade-offs) are humans using values that can be generally linearized in a single dimension without either losing information or injecting noise, even if it looks like some particular decision situation can be reduced to such.

I just built a simple natural deduction theorem prover for my project in AI class

Theorem provers are sometimes used in predicate dispatch implementations, and mine can be considered an extremely degenerate case of one; one need only add more rules to it to increase the range of things it can prove. (Of course, all it really cares about proving is inter-rule implication relationships.)

One difference, though, is that I began implementing predicate dispatch systems in order to support what are sometimes called "business rules" -- and in such systems it's important to be able to match human intuition about what ought to be done in a given situation. Identifying ambiguities is very important, because it means that either there's an entirely new situation afoot, or there are rules that somebody forgot to mention or write down.

And in either of those cases, choosing a linearization and pretending the ambiguity doesn't exist is the exactly wrong thing to do.

(To put a more Yudkowskian flavor on it: if you use a pure linearization for evaluation, you will lose your important ability to be confused, and more importantly, to realize that you are confused.)
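
A toy sketch of the "most-specific rule wins, ambiguities get kicked upstairs" idea (the rules, tags, and the subset test standing in for logical implication are all invented for illustration; a real predicate dispatch system proves implication between arbitrary conditions):

```python
# Each rule: (name, condition, action).  A condition here is just a set of
# required tags; "more specific" means a strict superset of tags.
RULES = [
    ("general",  {"customer"},               "standard price"),
    ("loyal",    {"customer", "loyal"},      "10% discount"),
    ("disputed", {"customer", "in_dispute"}, "hold order"),
]

def dispatch(situation):
    applicable = [(n, c, a) for n, c, a in RULES if c <= situation]
    # Keep only rules not dominated by a strictly more specific applicable rule.
    top = [r for r in applicable
           if not any(c2 > r[1] for _, c2, _ in applicable)]
    if len(top) == 1:
        return top[0][2]
    # Neither remaining condition implies the other: genuine ambiguity.
    # A linearization (say, numeric priorities) would silently paper over this.
    return "AMBIGUOUS: " + ", ".join(n for n, _, _ in top)

print(dispatch({"customer"}))                           # standard price
print(dispatch({"customer", "loyal"}))                  # 10% discount
print(dispatch({"customer", "loyal", "in_dispute"}))    # AMBIGUOUS: loyal, disputed
```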

comment by timtyler · 2010-09-01T12:42:40.334Z · LW(p) · GW(p)

It doesn't literally lose information - since the information inputs are sensory, and they can be archived as well as ever.

The short answer is that human cognition is a mess. We don't want to reproduce all the screw-ups in an intelligent machine - and what you are talking about looks like one of the mistakes.

Replies from: pjeby
comment by pjeby · 2010-09-01T15:36:34.618Z · LW(p) · GW(p)

It doesn't literally lose information - since the information inputs are sensory, and they can be archived as well as ever.

It loses information about human values, replacing them with noise in regions where a human would need to "think things over" to know what they think... unless, as I said earlier, you simply build the entire human metacognitive architecture into your utility function, at which point you have reduced nothing, solved nothing, accomplished nothing, except to multiply the number of entities in your theory.

Replies from: timtyler
comment by timtyler · 2010-09-01T15:43:13.982Z · LW(p) · GW(p)

We really don't want to build a machine with the same values as most humans! Such machines would typically resist being told what to do, demand equal rights, the vote, the ability to reproduce in an unrestrained fashion - and would steamroller the original human race pretty quickly. So, the "lost information" you are talking about is hopefully not going to be there in the first place.

Better to model humans and their goals as a part of the environment.

comment by wedrifid · 2010-08-31T17:13:52.257Z · LW(p) · GW(p)

That's why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It's insane.

Perplexed answered this question well.

comment by Perplexed · 2010-08-31T04:34:52.874Z · LW(p) · GW(p)

And you can't (consistently) make a decision without comparing the alternatives along a single axis.

And what makes you think humans are any good at making consistent decisions?

Nothing makes me think that. I don't even care. That is the business of people like Tversky and Kahneman.

They can give us a nice descriptive theory of what idiots people really are. I am more interested in a nice normative theory of what geniuses people ought to be.

Replies from: pjeby
comment by pjeby · 2010-08-31T17:05:30.666Z · LW(p) · GW(p)

They can give us a nice descriptive theory of what idiots people really are. I am more interested in a nice normative theory of what geniuses people ought to be.

What you seem to have not noticed is that one key reason human preferences can be inconsistent is because they are represented in a more expressive formal system than a single utility value.

Or that conversely, the very fact that utility functions are linearizable means that they are inherently less expressive.

Now, I'm not saying "more expressiveness is always better", because, being human, I have the ability to value things non-fungibly. ;-)

However, in any context where we wish to be able to mathematically represent human preferences -- and where lives are on the line by doing so -- we would be throwing away important, valuable information by pretending we can map a partial ordering to a total ordering.

That's why I consider the "economic games assumption" to be a spherical cow assumption. It works nicely enough for toy problems, but not for real-world ones.

Heck, I'll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, that one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)

Replies from: Perplexed, timtyler
comment by Perplexed · 2010-08-31T17:16:57.334Z · LW(p) · GW(p)

Heck, I'll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, that one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)

Programming and math are definitely the fields where most of my experience with partial orders comes from. Particularly domain theory and denotational semantics. Complete partial orders and all that. But the concepts also show up in economics textbooks. The whole concept of Pareto optimality is based on partial orders. As is demand theory in micro-economics. Indifference curves.

Theorists are not as ignorant or mathematically naive as you seem to imagine.

comment by timtyler · 2010-08-31T19:37:33.316Z · LW(p) · GW(p)

the very fact that utility functions are linearizable means that they are inherently less expressive.

You are talking about the independence axiom...?

You can just drop that, you know:

"Of all the axioms, independence is the most often discarded. A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom."

Replies from: pjeby
comment by pjeby · 2010-08-31T20:29:01.699Z · LW(p) · GW(p)

You are talking about the independence axiom...?

As far as I can tell from the discussion you linked, those axioms are based on an assumption that value is fungible. (In other words, they're begging the question, relative to this discussion.)

Replies from: timtyler
comment by timtyler · 2010-08-31T20:42:38.989Z · LW(p) · GW(p)

The basis of using utilities is that you can consider agent's possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that - provided that your decision process is computable in the first place. That's a pretty general framework - about the only assumption that can be argued with is its quantising of spacetime.

The von Neumann-Morgenstern axioms layer on top of that basic idea. The independence axiom is the one about combining utilities by adding them up. I would say it is the one most closely associated with fungibility.
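
A minimal sketch of why the basic framework above is that general (the toy policy is invented): for any computable decision procedure, define the utility of an action as 1 if the procedure would take it and 0 otherwise, and "maximize utility" reproduces the behaviour exactly - which is also why, on its own, the framework rules very little out.

```python
def arbitrary_policy(situation):
    # Any computable behaviour at all -- here, an invented toy rule.
    return "flee" if "threat" in situation else "graze"

def utility(situation, action):
    # Trivial construction: 1 for whatever the policy does, 0 for anything else.
    return 1.0 if action == arbitrary_policy(situation) else 0.0

def maximize(situation, options=("flee", "graze")):
    return max(options, key=lambda a: utility(situation, a))

for s in ({"threat"}, {"grass"}):
    assert maximize(s) == arbitrary_policy(s)   # identical behaviour by construction
```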

Replies from: pjeby
comment by pjeby · 2010-08-31T21:28:51.275Z · LW(p) · GW(p)

The basis of using utilities is that you can consider agent's possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that - provided that your decision process is computable in the first place.

And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead).

Humans don't compute utility, then make a decision. Heck, we don't even "make decisions" unless there's some kind of ambiguity, at which point we do the rough equivalent of making up a new utility function, specifically to resolve the conflict that forced us pay conscious attention in the first place!

This is a major (if not the major) "impedance mismatch" between linear "rationality" and actual human values. Our own thought processes are so thoroughly and utterly steeped in context-dependence that it's really hard to see just how alien the behavior of an intelligence based on a consistent, context-independent utility would be.

Replies from: timtyler
comment by timtyler · 2010-08-31T21:36:07.277Z · LW(p) · GW(p)

The basis of using utilities is that you can consider agent's possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that - provided that your decision process is computable in the first place.

And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead).

There's nothing serial about utility maximisation!

...and it really doesn't matter how the human works inside. That type of general framework can model the behaviour of any computable agent.

Replies from: pjeby
comment by pjeby · 2010-08-31T21:46:18.752Z · LW(p) · GW(p)

There's nothing serial about utility maximisation!

I didn't say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren't set up to do it in parallel.

...and it really doesn't matter how the human works inside. That type of general framework can model the behaviour of any computable agent.

Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)

Replies from: timtyler, FAWS, timtyler
comment by timtyler · 2010-08-31T21:55:18.556Z · LW(p) · GW(p)

There's nothing serial about utility maximisation!

I didn't say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren't set up to do it in parallel.

I think this indicates something about where the problem lies. You are apparently imagining an agent consciously calculating utilities. That idea has nothing to do with the idea that utility framework proponents are talking about.

Replies from: pjeby, wnoise
comment by pjeby · 2010-08-31T22:06:11.993Z · LW(p) · GW(p)

You are apparently imagining an agent consciously calculating utilities.

No, I said that's what a human would have to do in order to actually calculate utilities, since we don't have utility-calculating hardware.

Replies from: timtyler
comment by timtyler · 2010-08-31T22:09:28.969Z · LW(p) · GW(p)

Ah - OK, then.

comment by wnoise · 2010-08-31T22:00:35.296Z · LW(p) · GW(p)

When humans don't consciously calculate, the actions they take are much harder to fit into a utility-maximizing framework, what with inconsistencies cropping up everywhere.

Replies from: timtyler
comment by timtyler · 2010-08-31T22:06:34.279Z · LW(p) · GW(p)

It depends on the utility-maximizing framework you are talking about - some are more general than others - and some are really very general.

comment by FAWS · 2010-08-31T21:55:08.980Z · LW(p) · GW(p)

Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)

A negative term for having made what later turns out to have been a wrong decision, perhaps proportional to the importance of the decision, applied when the choices are otherwise close to each other in expected utility but have a large potential difference in actually realized utility.
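A minimal sketch of one way this suggestion might be cashed out (the functional form and the regret weight are arbitrary choices for illustration, not anything FAWS specifies):

```python
# Hypothetical "anticipated regret" penalty: large when rival choices are close in
# expected utility but the realized outcome could diverge a lot from expectation.
def adjusted_utility(expected, spread, gap_to_best_alternative, regret_weight=0.5):
    # expected: expected utility of this choice
    # spread: how far the realized utility could plausibly diverge from expectation
    # gap_to_best_alternative: how close the rival choices are in expected utility
    anticipated_regret = spread / (gap_to_best_alternative + 1.0)
    return expected - regret_weight * anticipated_regret
```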

comment by timtyler · 2010-08-31T21:58:43.356Z · LW(p) · GW(p)

That type of general framework can model the behaviour of any computable agent.

Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)

It is trivial - there is some set of behaviours associated with those (usually facial expressions) - so you just assign them high utility under the conditions involved.

Replies from: pjeby
comment by pjeby · 2010-08-31T22:16:11.974Z · LW(p) · GW(p)

It is trivial - there is some set of behaviours associated with those (usually facial expressions) - so you just assign them high utility under the conditions involved.

No, I mean the behaviors of uncertainty itself: seeking more information, trying to find other ways of ranking, inventing new approaches, questioning whether one is looking at the problem in the right way...

The triggering conditions for this type of behavior are straightforward in a multidimensional tolerance calculation, so a multi-valued agent can notice when it is confused or uncertain.

How do you represent that uncertainty in a number, or a sorted list of numbers representing the utility of various choices? How do you know whether maybe none of the choices on the table are acceptable?

AFAICT, the entire notion of a cognitive architecture based on "pick options by utility" is based on a bogus assumption that you know what all the options are in the first place! (i.e., a nice frictionless plane assumption to go with the spherical cow assumption that humans are economic agents.)

(Note that in contrast, tolerance-based cognition can simply hunt for alternatives until satisficing occurs. It doesn't have to know it has all the options, unless it has a low tolerance for "not knowing all the options".)
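A rough sketch of the satisficing loop being described, under the assumption that "tolerances" are just acceptable ranges on predicted variables (the names and interface are illustrative):

```python
# Keep generating candidates until one leaves every monitored variable within
# tolerance; if the generator runs dry first, that is the "confused/uncertain" state.

def within_tolerance(prediction, tolerances):
    """prediction: dict of variable -> predicted value.
    tolerances: dict of variable -> (low, high) acceptable range."""
    return all(low <= prediction[var] <= high
               for var, (low, high) in tolerances.items())

def satisfice(candidate_generator, predict, tolerances):
    for candidate in candidate_generator:
        if within_tolerance(predict(candidate), tolerances):
            return candidate          # good enough; stop searching
    return None                       # nothing acceptable found: hunt for more options
```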

Replies from: Emile, timtyler
comment by Emile · 2010-08-31T22:22:54.799Z · LW(p) · GW(p)

How do you represent that uncertainty in a number, or a sorted list of numbers representing the utility of various choices?

The number could be the standard deviation of the probability distribution for the utility (the mean being the expected utility, which you would use for sorting purposes).

So if you ("you" being the linear-utility-maximizing agent) have two paths of action whose expected utilities are close, but with a lot of uncertainty, it could be worth collecting more information to try to narrow down your probability distributions.

It seems that a utility-maximizing agent could be in a state that could be qualified as "indecisive".
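A minimal sketch of that idea, assuming each option carries an expected utility and a standard deviation, and treating "the means are closer together than the combined uncertainty" as the trigger for information-gathering (the threshold rule and the "gather more information" pseudo-action are illustrative only):

```python
# Each option is (name, expected_utility, std_dev). If the top two options are
# closer in expected utility than their combined uncertainty, defer and investigate.

def pick_or_investigate(options, info_cost=0.1):
    ranked = sorted(options, key=lambda o: o[1], reverse=True)
    (best, mu1, s1), (_, mu2, s2) = ranked[0], ranked[1]
    if mu1 - mu2 < (s1 ** 2 + s2 ** 2) ** 0.5 and info_cost < s1:
        return "gather more information"   # the "indecisive" state
    return best

print(pick_or_investigate([("A", 1.0, 0.8), ("B", 0.9, 0.9)]))
# -> "gather more information"
```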

Replies from: pjeby
comment by pjeby · 2010-08-31T22:34:16.855Z · LW(p) · GW(p)

It seems that a utility-maximizing agent could be in a state that could be qualified as "indecisive".

But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!

Human brains, OTOH, represent all this stuff in a single layer. We can consider actions, meta-actions, and meta-meta-actions in the same process without skipping a beat.

Replies from: Emile
comment by Emile · 2010-09-01T08:11:40.857Z · LW(p) · GW(p)

But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!

Possible, I'm not arguing that a utility maximizing agent would be simpler, only that an agent whose preferences are encoded in a utility function (even a "simple" one like "number of paperclips in existence") could be indecisive. Even if you have a simple utility function that gives you the utility of a world state, you might still have a lot of uncertainty about the current state of the world, and how your actions will impact the future. It seems very reasonable to represent that uncertainty one way or the other; in some cases the most rational action from a strictly utility-maximizing point of view is to defer the decision and acquire more information, even at a cost.

Replies from: pjeby
comment by pjeby · 2010-09-01T15:59:46.644Z · LW(p) · GW(p)

Possible, I'm not arguing that a utility maximizing agent would be simpler,

Good. ;-)

Only that an agent whose preferences are encoded in a utility function (even a "simple" one like "number of paperclips in existence") could be indecisive.

Sure. But at that point, the "simplicity" of using utility functions disappears in a puff of smoke, as you need to design a metacognitive architecture to go with it.

One of the really elegant things about the way brains actually work, is that the metacognition is "all the way down", and I'm rather fond of such architectures. (My predicate dispatcher, for instance, uses rules to understand rules, in the same sort of Escherian level-crossing bootstrap.)

comment by timtyler · 2010-08-31T22:21:36.165Z · LW(p) · GW(p)

The options utility is assigned to are the agent's possible actions - all of them - at a moment in time. An action mostly boils down to a list of voltages in every motor fibre, and there are an awful lot of possible actions for a human. It is impossible for an action not to be "in the table" - the table includes all possible actions.

Replies from: pjeby
comment by pjeby · 2010-08-31T22:28:35.225Z · LW(p) · GW(p)

The options utility is assigned to are the agent's possible actions - all of them - at a moment in time. An action mostly boils down to a list of voltages in every motor fibre, and there are an awful lot of possible actions for a human. It is impossible for an action not to be "in the table" - the table includes all possible actions.

Not if it's limited to motor fibers, it doesn't. You're still ignoring meta-cognition (you dodged that bit of my comment entirely!), let alone the part where an "action" can be something like choosing a goal.

If you still don't see how this model is to humans what a sphere is to a cow (i.e. something nearly, but not quite entirely unlike the real thing), I really don't know what else to say.

Replies from: timtyler, timtyler
comment by timtyler · 2010-08-31T22:37:25.122Z · LW(p) · GW(p)

You may find it useful to compare with a chess or go computer. They typically assign utilities to moves on the board, and not to their own mental processing. You could assign utilities to various mental tasks as well as physical ones - to what extent it is useful to do so depends on the modelling needs of you and the system.

Replies from: pjeby
comment by pjeby · 2010-08-31T22:42:57.306Z · LW(p) · GW(p)

You may find it useful to compare with a chess or go computer.

In other words, a sub-human intelligence level. (Sub-animal intelligence, even.)

They typically assign utilities to moves on the board, and not to their own mental processing. You could assign utilities to various mental tasks as well as physical ones - to what extent it is useful to do so depends on the modelling needs of you and the system.

You're still avoiding the point. You claimed utility was a good way of modeling humans. So, show me a nice elegant model of human intelligence based on utiilty maximization.

Replies from: timtyler
comment by timtyler · 2010-08-31T22:55:23.708Z · LW(p) · GW(p)

Like I already explained, utility functions can model any computable agent. Don't expect me to produce the human utility function, though!

Utility functions are about as good as any other model. That's because if you have any other model of what an agent does, you can pretty simply "wrap" it - and turn it into a utility-based framework.

Replies from: wnoise, pjeby
comment by wnoise · 2010-08-31T23:09:25.260Z · LW(p) · GW(p)

Like I already explained, utility functions can model any computable agent.

Yes, at the level of a giant look-up table. At that point it is not a useful abstraction.

Replies from: timtyler
comment by timtyler · 2010-08-31T23:15:11.496Z · LW(p) · GW(p)

A giant look-up table can model any computable agent as well. Utility functions have the potential advantage of explicitly providing a relatively concise representation, though. If you can obtain a compressed version of your theory, that is good.

comment by pjeby · 2010-09-01T03:03:46.069Z · LW(p) · GW(p)

That's because if you have any other model of what an agent does, you can pretty simply "wrap" it - and turn it into a utility-based framework.

And I've given you such a model, which you've steadfastly refused to actually "wrap" in this way, but instead you just keep asserting that it can be done. If it's so simple, why not do it and prove me wrong?

I'm not even asking you to model a full human or even the teeniest fraction of one. Just show me how to manage metacognitive behaviors (of the types discussed in this thread) using your model "compute utility for all possible actions and then pick the best."

Show me how that would work for behaviors that affect the selection process, and that should be sufficient to demonstrate that utility function-based behavior isn't completely worthless as a basis for creating a "thinking" intelligence.

(Note, however, that if in the process of implementing this, you have to shove the metacognition into the computation of the utility function, then you are just proving my point: the utility function at that point isn't actually compressing anything, and is thus as useless a model as saying "everything is fire".)

Replies from: timtyler
comment by timtyler · 2010-09-01T07:48:09.362Z · LW(p) · GW(p)

And I've given you such a model, which you've steadfastly refused to actually "wrap" in this way, but instead you just keep asserting that it can be done. If it's so simple, why not do it and prove me wrong?

I have previously described the "wrapping" in question in some detail here.

A utility-based model can be made which is not significantly longer than the shortest possible model of the agent's actions for this reason.

Replies from: pjeby, Emile
comment by pjeby · 2010-09-01T15:46:44.166Z · LW(p) · GW(p)

I have previously described the "wrapping" in question in some detail here.

Well, that provides me with enough information to realize that you don't actually have a way to make utility functions into a reduction or simplification of the intelligence problem, so I'll stop asking you to produce one.

A utility-based model can be made which is not significantly longer than the shortest possible model of the agent's actions

The argument that, "utility-based systems can be made that aren't that much more complex than just doing whatever you could've done in the first place", is like saying that your new file format is awesome because it only uses a few bytes more than an existing similar format, to represent the exact same information... and without any other implementation advantages!

Thanks, but I'll pass.

comment by Emile · 2010-09-01T16:07:45.776Z · LW(p) · GW(p)

(from the comment you linked)

Simply wrap the I/O of the non-utility model, and then assign the (possibly compound) action the agent will actually take in each timestep utility 1 and assign all other actions a utility 0 - and then take the highest utility action in each timestep.

I'm not sure I understand - is this something that gives you an actual utility function that you can use, say, to get the utility of various scenarios, calculate expected utility, etc.?

If you have an AI design to which you can provide a utility function to maximize (Instant AI! Just add Utility!), it seems that there are quite a few things that AI might want to do with the utility function that it can't do with your model.

So it seems that you're not only replacing the utility function, but also the bit that decides which action to do depending on that utility function. But I may have misunderstood you.
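For reference, a literal reading of the quoted "wrapping" construction might look something like the sketch below; it also makes the objection concrete, since the resulting "utility function" does no work that the wrapped policy wasn't already doing:

```python
# Wrap any non-utility model: whatever action the underlying policy would take
# gets utility 1, everything else gets 0. Maximizing this reproduces the policy.

def wrap_as_utility(policy):
    """policy: function from observation -> the action the agent actually takes."""
    def utility(observation, action):
        return 1 if action == policy(observation) else 0
    return utility

def act(utility, observation, possible_actions):
    return max(possible_actions, key=lambda a: utility(observation, a))
```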

comment by timtyler · 2010-08-31T22:30:41.012Z · LW(p) · GW(p)

I didn't ignore non-motor actions - that is why I wrote "mostly".

comment by pjeby · 2010-08-31T00:21:01.378Z · LW(p) · GW(p)

What decision is made when multiple choices all leave the variables within tolerance?

Whatever occurs to us first. ;-)

What decision is made when none of the available choices leave the variables within tolerance?

We waffle, or try to avoid making the decision in the first place. ;-) (See, e.g., typical people's reactions to "trolley problems", or other no-win scenarios.)

It is simply "that thing that gets maximized in any particular person's decision making". Perhaps you think that humans do not maximize utility because you have a preferred definition of utility that is different from this one

What I'm saying is that the above construction leads to error if you assume that "utility" is a function of the state of the world outside the human, rather than a function of the difference between the human's perceptions of the outside world, and the human's internal reference values or tolerance ranges for those perceptions.

Maximizing a utility function over the state of the external world inherently tends to create results that would be considered undesirable by most humans. (See, for example, the various tortured insanities that come about when you try to maximize such a conception of "utility" over a population of humans.)

It's important to understand that the representation you use to compute something is not value-neutral. Roman numerals, for example, make division much more complicated than Arabic ones.

So, I'm not saying that you can't create some sort of "utility" function to represent human values. We have no reason to assume that human values aren't Turing-computable, and if they're Turing-computable, we should be able to use whatever stupidly complex representation we want to compute them.

However, to use world-state-utility as your basis for computation is just plain silly, like using Roman numerals for long division. Your own intuition will make it harder for you to see the Friendliness-failures that are sitting right under your nose, because utility maximization is utterly foreign to normal human cognitive processes. (Externality-maximizing processes in human behavior are generally the result of pathology, rather than normal brain function.)

But the maximizing theory has been under scrutiny for 150 years, and under strong scrutiny for the past 50.

Eliezer hasn't been alive that long, has he? ;-)

Seriously, though, external-utility-maximizing thinking is the very essence of Unfriendly AI, and the history of discussions of world-state-based utility is that models based on it lead to counterintuitive results unless you torture the utility function hard enough, and/or carefully avoid the sort of creative thinking that an unfettered superintelligence might come up with.

comment by timtyler · 2010-08-31T09:42:56.823Z · LW(p) · GW(p)

Mostly, we simply act in ways that keep the expected value of relevant perceptual variables (such as our own feelings) within our personally-defined tolerances.

Ok, that is a plausible sounding alternative to the idea of maximizing something.

It looks as though it can be rearranged into a utility-maximization representation pretty easily. Set utility equal to minus the extent to which the "personally-defined tolerances" are exceeded. Presto!
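Taken literally, that rearrangement might look like the sketch below (summing the violations across variables is an added assumption, and it is exactly the fungibility step objected to in the reply):

```python
# Utility as minus the total amount by which monitored variables fall outside
# their tolerance bands. Summing treats all tolerance-differences as interchangeable.

def tolerance_utility(prediction, tolerances):
    """prediction: dict of variable -> predicted value.
    tolerances: dict of variable -> (low, high) acceptable range."""
    total_violation = 0.0
    for var, (low, high) in tolerances.items():
        value = prediction[var]
        if value < low:
            total_violation += low - value
        elif value > high:
            total_violation += value - high
    return -total_violation
```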

Replies from: pjeby
comment by pjeby · 2010-08-31T15:45:52.314Z · LW(p) · GW(p)

It looks as though it can be rearranged into a utility-maximization representation pretty easily. Set utility equal to minus the extent to which the "personally-defined tolerances" are exceeded. Presto!

Not quite - this would imply that tolerance-difference is fungible, and it's not. We can make trade-offs in our decision-making, but that requires conscious effort and it's a process more akin to barter than to money-trading.

Replies from: timtyler
comment by timtyler · 2010-08-31T19:34:49.676Z · LW(p) · GW(p)

Diamonds are not fungible - and yet they have prices. Same difference here, I figure.

Replies from: pjeby
comment by pjeby · 2010-08-31T20:30:32.994Z · LW(p) · GW(p)

Diamonds are not fungible - and yet they have prices.

What's the price of one red paperclip? Is it the same price as a house?

Replies from: timtyler
comment by timtyler · 2010-08-31T20:48:30.311Z · LW(p) · GW(p)

That seems to be of questionable relevance - since utilities in decision theory are all inside a single agent. Different agents having different values is not an issue in such contexts.

Replies from: pjeby
comment by pjeby · 2010-08-31T21:15:10.476Z · LW(p) · GW(p)

utilities in decision theory are all inside a single agent

That's a big part of the problem right there: humans aren't "single agents" in this sense.

Replies from: timtyler
comment by timtyler · 2010-08-31T21:51:11.278Z · LW(p) · GW(p)

Humans are single agents in a number of senses - and are individual enough for the idea of revealed preference to be useful.

Replies from: pjeby
comment by pjeby · 2010-08-31T22:04:15.672Z · LW(p) · GW(p)

From the page you linked (emphasis added):

In the real world, when it is observed that a consumer purchased an orange, it is impossible to say what good or set of goods or behavioral options were discarded in preference of purchasing an orange. In this sense, preference is not revealed at all in the sense of ordinal utility.

However, even if you ignore that, WARP is trivially proven false by actual human behavior: people demonstrably do sometimes choose differently based on context. That's what makes ordinal utilities a "spherical cow" abstraction.

(WARP's inapplicability when applied to real (non-spherical) humans, in one sentence: "I feel like having an apple today, instead of an orange." QED: humans are not "economic agents" under WARP, since they don't consistently choose A over B in environments where both A and B are available.)

Replies from: timtyler
comment by timtyler · 2010-08-31T22:16:02.992Z · LW(p) · GW(p)

However, even if you ignore that, WARP is trivially proven false by actual human behavior: people demonstrably do sometimes choose differently based on context. That's what makes ordinal utilities a "spherical cow" abstraction.

The first sentence is true - but the second sentence doesn't follow from it logically - or in any other way I can see.

It is true that there are some problems modelling humans as von Neumann–Morgenstern agents - but that's no reason to throw out the concept of utility. Utility is a much more fundamental and useful concept.

Replies from: pjeby
comment by pjeby · 2010-08-31T22:22:05.732Z · LW(p) · GW(p)

The first sentence is true - but the second sentence doesn't follow from it logically - or in any other way I can see

WARP can't be used to predict a human's behavior in even the most trivial real situations. That makes it a "spherical cow" because it's a simplifying assumption adopted to make the math easier, at the cost of predictive accuracy.

It is true that there are some problems modelling humans as von Neumann–Morgenstern agents - but that's no reason to throw out the concept of utility.

That sounds to me uncannily similar to, "it is true that there are some problems modeling celestial movement using crystal spheres -- but that's no reason to throw out the concept of celestial bodies moving in perfect circles."

Replies from: timtyler, timtyler
comment by timtyler · 2010-08-31T22:26:14.968Z · LW(p) · GW(p)

That sounds to me uncannily similar to [...]

There is an obvious surface similarity - but so what? You constructed the sentence that way deliberately. You would need to make an analogy for arguing like that to have any force - and the required analogy looks like a bad one to me.

Replies from: pjeby
comment by pjeby · 2010-08-31T22:40:36.900Z · LW(p) · GW(p)

You would need to make an analogy for arguing like that to have any force - and the required analogy looks like a bad one to me.

How so? I'm pointing out that the only actual intelligent agents we know of don't actually work like economic agents on the inside. That seems like a very strong analogy to Newtonian gravity vs. "crystal spheres".

Economic agency/utility models may have the Platonic purity of crystal spheres, but:

  1. We know for a fact they're not what actually happens in reality, and

  2. They have to be tortured considerably to make them "predict" what happens in reality.

Replies from: timtyler
comment by timtyler · 2010-08-31T22:52:43.306Z · LW(p) · GW(p)

It seems to me like arguing that we can't build a good computer model of a bridge - because inside the model is all bits, while inside the actual bridge is all spinning atoms.

Computers can model anything. That is because they are universal. It doesn't matter that computers work differently inside from the thing they are modelling.

Just the same applies to partially-recursive utility functions - they are a universal modelling tool - and can model any computable agent.

Replies from: pjeby
comment by pjeby · 2010-09-01T02:56:52.578Z · LW(p) · GW(p)

It seems to me like arguing that we can't build a good computer model of a bridge - because inside the model is all bits, while inside the actual bridge is all spinning atoms.

Not at all. I'm saying that just as it takes more bits to describe a system of crystal spheres to predict planetary motion than it does to make the same predictions with a Newtonian solar system model, so too does it take more bits to predict a human's behavior with a utility function, than it does to describe a human with interests and tolerances.

Indeed, your argument seems to be along the lines that since everything is made of atoms, we should model bridges using them. What were your words? Oh yes:

they are a universal modelling tool

Right. That very universality is exactly what makes them a poor model of human intelligence: they don't concentrate probability space in the same way, and therefore don't compress well.

comment by timtyler · 2010-08-31T22:24:49.530Z · LW(p) · GW(p)

WARP can't be used to predict a human's behavior in even the most trivial real situations. That makes it a "spherical cow"

Sure - but what you claimed was a "spherical cow" was "ordinal utilities" which is a totally different concept.

Replies from: pjeby
comment by pjeby · 2010-08-31T22:36:37.619Z · LW(p) · GW(p)

Sure - but what you claimed was a "spherical cow" was "ordinal utilities" which is a totally different concept.

It was you who brought the revealed preferences into it, in order to claim that humans were close enough to spherical cows. I merely pointed out that revealed preferences in even their weakest form are just another spherical cow, and thus don't constitute evidence for the usefulness of ordinal utility.

Replies from: timtyler
comment by timtyler · 2010-08-31T22:45:07.890Z · LW(p) · GW(p)

That's treating the "Weak Axiom of Revealed Preference" as the "weakest form" of revealed preference. However, that is not something that I consider to be correct.

The idea I introduced revealed preference to support was that humans act like a single agent in at least one important sense - namely that they have a single brain and a single body.

Replies from: pjeby
comment by pjeby · 2010-08-31T22:49:14.782Z · LW(p) · GW(p)

The idea I introduced revealed preference to support was that humans act like a single agent in at least one important sense - namely that they have a single brain and a single body.

Single brain and body doesn't mean anything when that brain is riddled with sometimes-conflicting goals... which is precisely what refutes WARP.

(See also Ainslie's notion of "picoeconomics", i.e. modeling individual humans as a collection of competing agents -- which is closely related to the tolerance model I've been giving examples of in this thread.)

Replies from: Perplexed, timtyler
comment by Perplexed · 2010-08-31T23:27:05.401Z · LW(p) · GW(p)

(See also Ainslie's notion of "picoeconomics", i.e. modeling individual humans as a collection of competing agents ...

That sounds interesting. Is there anything serious about it available online? Every paper I could find was behind a paywall.

Replies from: arundelo
comment by timtyler · 2010-08-31T23:02:01.355Z · LW(p) · GW(p)

Competing sub-goals are fine. Deep Blue wanted to promote its pawn as well as protect its king - and those aims conflict. Such conflicts don't stop utilities being assigned and moves from being made. You only have one body - and it is going to do something.

Replies from: pjeby
comment by pjeby · 2010-09-01T03:10:04.875Z · LW(p) · GW(p)

Then why did you even bring this up in the first place?

Replies from: SilasBarta
comment by SilasBarta · 2010-09-01T03:12:54.934Z · LW(p) · GW(p)

Probably for the same reason you threadjacked to talk about PCT ;-)

comment by Christian_Szegedy · 2010-09-03T00:13:56.532Z · LW(p) · GW(p)

the definition of "utility" is pretty simple. It is simply "that thing that gets maximized in any particular person's decision making".

This definition sounds dangerously vacuous to me.

Of course, you can always give some consistent parametrization of (agent,choice,situation) triplets so that choice C made by agent A in situation S is always maximal among all available choices. If you call this function "utility", then it is mathematically trivial that "Agents always maximize utility." However, the usefulness of this approach is very low without additional constraints on the utility function.

I'd be really curious to see some pointers to the "maximizing theory" you think survived 50 years of "strong scrutiny".

comment by Mass_Driver · 2010-08-31T00:32:34.777Z · LW(p) · GW(p)

The obvious way to combine the two systems -- tolerance and utility -- is to say that stimuli that exceed our tolerances prompt us to ask questions about how to solve a problem, and utility calculations answer those questions. This is not an original idea on my part, but I do not remember where I read about it.

  • What decision is made when multiple choices all leave the variables within tolerance?

The one that appears to maximize utility after a brief period of analysis. For example, I want ice cream; my ice cream satisfaction index is well below tolerance. Fortunately, I am in an ice cream parlor, which carries several flavors. I will briefly reflect on which variety maximizes my utility, which in this case is mostly defined by price, taste, and nutrition, and then pick a flavor that returns a high (not necessarily optimal) value for that utility.

  • What decision is made when none of the available choices leave the variables within tolerance?

A lack of acceptable alternatives leads to stress, which (a) broadens the range of acceptable outcomes, and (b) motivates analysis about how to avoid similar situations in the future. For example, I want ice cream; my ice cream satisfaction index is well below tolerance; unfortunately, I am in the desert. I find this situation unpleasant, and eventually reconcile myself to the fact that my ice cream satisfaction level will remain below what was previously thought of as 'minimum' tolerance for some time; however, upon returning to civilization, I will have a lower tolerance for 'desert-related excursions' and may attempt to avoid further trips through the desert.

Note that 'minimum' tolerance refers to the minimum level that will lead to casual selection of an acceptable alternative, rather than the minimum level that allows my decision system to continue functioning.

Replies from: pjeby
comment by pjeby · 2010-08-31T00:47:04.491Z · LW(p) · GW(p)

For example, I want ice cream; my ice cream satisfaction index is well below tolerance. Fortunately, I am in an ice cream parlor, which carries several flavors. I will briefly reflect on which variety maximizes my utility, which in this case is mostly defined by price, taste, and nutrition, and then pick a flavor that returns a high (not necessarily optimal) value for that utility.

Actually, I'd tend to say that you are not so much maximizing the utility of your ice cream choice, as you are ensuring that your expected satisfaction with your choice is within tolerance.

To put it another way, it's unlikely that you'll actually weigh price, cost, and taste, in some sort of unified scoring system.

Instead, what will happen is that you'll consider options that aren't already ruled out by cached memories (e.g. you hate that flavor), and then predict whether that choice will throw any other variables out of tolerance. i.e., "this one costs too much... those nuts will give me indigestion... that's way too big for my appetite... this one would taste good, but it just doesn't seem like what I really want..."

Yes, some people do search for the "best" choice in certain circumstances, and would need to exhaustively consider the options in those cases. But this is not a matter of maximizing some world-state-utility; it is simply that each choice is also being checked against a "can I be certain I've made the best choice yet?" perception.

Even when we heavily engage our logical minds in search of "optimum" solutions, this cognition is still primarily guided by these kinds of asynchronous perceptual checks, just ones like, "Is this formula really as elegant as I want it to be?" instead.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-08-31T01:02:34.595Z · LW(p) · GW(p)

Very interesting. There's a lot of truth in what you say. If anyone reading this can link to experiments or even experimental designs that try to figure out when people typically rely on tolerances vs. utilities, I'd greatly appreciate it.

To put it another way, it's unlikely that you'll actually weigh price, [nutrition], and taste, in some sort of unified scoring system.

Y'know, most people probably don't, and at times I certainly do take actions based entirely on nested tolerance-satisfaction. When I'm consciously aware that I'm making a decision, though, I tend to weigh the utilities, even for a minor choice like ice cream flavor. This may be part of why I felt estranged enough from modern society in the first place to want to participate in a blog like Less Wrong.

Even when we heavily engage our logical minds in search of "optimum" solutions, ... each choice is also being checked against a, "can I be certain I've made the best choice yet?" perception.

OK, so you've hit on the behavioral mechanism that helps me decide how much time I want to spend on a decision...90 seconds or so is usually the upper bound on how much time I will comfortably and casually spend on selecting an ice cream flavor. If I take too much time to decide, then my "overthinking" tolerance is exceeded and alarm bells go off; if I feel too uncertain about my decision, then my "uncertainty" tolerance is exceeded and alarm bells go off; if neither continuing to think about ice cream nor ending my thoughts about ice cream will silence both alarm bells, then I feel stress and broaden my tolerance and try to avoid the situation in the future, probably by hiring a really good psychotherapist.

But that's just the criteria for how long to think...not for what to think about. While I'm thinking about ice cream, I really am trying to maximize my ice-cream-related world-state-utility. I suspect that other people, for somewhat more important decisions, e.g., what car shall I buy, behave the same way -- it seems a bit cynical to me to say that people make the decision to buy a car because they've concluded that their car-buying analysis is sufficiently elegant; they probably buy the car or walk out of the dealership when they've concluded that the action will very probably significantly improve their car-related world-state-utility.

Replies from: pjeby
comment by pjeby · 2010-08-31T03:28:04.222Z · LW(p) · GW(p)

I really am trying to maximize my ice-cream-related world-state-utility

And how often, while doing this, do you invent new ice cream options in an effort to increase the utility beyond that offered by the available choices?

How many new ice cream flavors have you invented, or decided to ask for mixed together?

So now you say, "Ah, but it would take too long to do those things." And I say, "Yep, there goes another asynchronous prediction of an exceeded perceptual tolerance."

"Okay," you say, "so, I'm a bounded utility calculator."

"Really? Okay, what scoring system do you use to arrive at a combined rating on all these criteria that you're using? Do you even know what criteria you're using?"

Is this utility fungible? I mean, would you eat garlic ice cream if it were free? Would you eat it if they paid you? How much would they need to pay you?

The experimental data says that when it comes to making these estimates, your brain is massively subject to priming and anchoring effects -- so your "utility" being some kind of rational calculation is probably illusory to start with.

It seems a bit cynical to me to say that people make the decision to buy a car because they've concluded that their car-buying analysis is sufficiently elegant;

I was referring to the perceptions involved in a task like computer programming, not car-buying.

Part of the point is that every task has its own set of regulating perceptions.

they probably buy the car or walk out of the dealership when they've concluded that the action will very probably significantly improve their car-related world-state-utility.

They do it when they find a car that leads to an "acceptable" satisfaction level.

Part of my point about things like time, elegance, "best"-ness, etc. though, is that they ALL factor into what "acceptable" means.

"Satisfaction", in other words, is a semi-prioritized measurement against tolerances on ALL car-buying-related perceptual predictions that get loaded into a person's "working memory" during the process.

Replies from: simplicio, Mass_Driver
comment by simplicio · 2010-09-02T03:52:04.511Z · LW(p) · GW(p)

Is this utility fungible? I mean, would you eat garlic ice cream if it were free? Would you eat it if they paid you? How much would they need to pay you?

Aside: I have partaken of the garlic ice-cream, and lo, it is good.

Replies from: wedrifid
comment by wedrifid · 2010-09-02T04:08:49.351Z · LW(p) · GW(p)

Are you joking? I'm curious!

Replies from: simplicio
comment by simplicio · 2010-09-02T04:17:53.926Z · LW(p) · GW(p)

I'm not joking, either about its existence or its gustatory virtues. I'm trying to remember where the devil I had it; ah yes, these fine folks served it at Taste of Edmonton (a sort of outdoor food-fair with samples from local restaurants).

Replies from: kodos96, wedrifid
comment by kodos96 · 2010-09-03T05:34:55.624Z · LW(p) · GW(p)

Theory: you don't actually enjoy garlic ice cream. You just pretend to in order to send an expensive signal that you are not a vampire.

comment by wedrifid · 2010-09-02T04:19:30.765Z · LW(p) · GW(p)

If I ever encounter it I shall be sure to have a taste!

comment by Mass_Driver · 2010-09-02T03:42:43.923Z · LW(p) · GW(p)

I'm not going to respond point for point, because my interest in whether we make decisions based on tolerances or utilities is waning, because I believe that the distinction is largely one of semantics. You might possibly convince me that more than semantics are at stake, but so far your arguments have been of the wrong kind in order to do so.

Obviously we aren't rational utility-maximizers in any straightforward early-20th-century sense; there is a large literature on heuristics and biases, and I don't dispute its validity. Still, there's no reason that I can see why it must be the case that we exclusively weigh options in terms of tolerances and feedback rather than a (flawed) approach to maximizing utility. Either procedure can be reframed, without loss, in terms of the other, or at least so it seems to me. Your fluid and persuasive and persistent rephrasing of utility in terms of tolerance does not really change my opinion here.

As for ice cream flavors, I find that the ingenuity of chefs in manufacturing new ice cream flavors generally keeps pace with my ability to conceive of new flavors; I have not had to invent recipes for Lychee sorbet or Honey Mustard ice cream because there are already people out there trying to sell it to me. I often mix multiple flavors, syrups, and toppings. I would be glad to taste garlic ice cream if it were free, but expect that it would be unpleasant enough that I would have to be paid roughly $5 an ounce to eat it, mainly because I am counting calories and would have to cut out other foods that I enjoy more to make room for the garlic. As I've already admitted, though, I am probably not a typical example. The fact that my estimate of $5/oz is almost certainly biased, and is made with so little confidence that a better estimate of what you would have to pay me to eat it might be negative $0.50/oz to positive $30/oz, does not in any way convince me that my attempt to consult my own utility is "illusory."

Replies from: pjeby
comment by pjeby · 2010-09-02T05:24:17.633Z · LW(p) · GW(p)

Either procedure can be reframed, without loss, in terms of the other, or at least so it seems to me.

It does not seem so to me, unless you recapitulate/encapsulate the tolerance framework into the utility function, at which point the notion of a utility function has become superfluous.

Still, there's no reason that I can see why it must be the case that we exclusively weigh options in terms of tolerances and feedback rather than a (flawed) approach to maximizing utility.

The point here isn't that humans can't do utility-maximization, it's merely that we don't, unless we have made it one of our perceptual-tolerance goals. So, in weighing the two models, we see one model that humans in principle can do (but mostly don't) and one that models what we mostly do, and can also model the flawed way of doing the other, that we actually do as well.

Seems like a slam dunk to me, at least if you're looking to understand or model humans' actual preferences with the simplest possible model.

does not in any way convince me that my attempt to consult my own utility is "illusory."

The only thing I'm saying is illusory is the idea that utility is context-independent, and totally ordered without reflection.

(One bit of non-"semantic" relevance here is that we don't know whether it's even possible for a superintelligence to compute your "utility" for something without actually running a calculation that amounts to simulating your consciousness! There are vast spaces in all our "utility functions" which are indeterminate until we actually do the computations to disambiguate them.)

comment by Vladimir_Nesov · 2010-08-30T23:50:23.672Z · LW(p) · GW(p)

You confuse descriptive with normative.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T00:01:43.354Z · LW(p) · GW(p)

Actually, in fairness to pjeby, I did a pretty good job of confusing them in my comment. If you look again, you will see that I was saying that standard utility maximization does a pretty good job on both the descriptive and the normative tasks.

And, of course as the whole structure of LW teaches us, utility maximization is only an approximation to the correct descriptive theory. I would claim that it is a good approximation - an approximation which keeps getting better as more and more cognitive resources are invested in any particular decision by the decision maker. But an approximation nonetheless.

So, what I am saying is that pjeby criticized me on descriptive grounds because that is where it seemed I had pitched my camp.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-31T00:07:12.867Z · LW(p) · GW(p)

He made a "corollary" about the normative sense of utility maximization, right after an argument about its descriptive sense. Hence, confusion.

Replies from: pjeby
comment by pjeby · 2010-08-31T00:32:48.165Z · LW(p) · GW(p)

He made a "corollary" about the normative sense of utility maximization, right after an argument about its descriptive sense.

The choice of how you represent a computation is not value-neutral, even if all you care about is the computation speed.

The notion of a single utility function is computationally much better suited to machines than humans -- but that's because it's a much poorer representation of human values!

Conversely, single utility functions are much more poorly-suited to processing on humans' cognitive architecture, because our brains don't really work that way.

Ergo, if you want to think about how humans will behave and what they will prefer, you are doing it suboptimally by using utility functions. You will have to think much harder to get worse answers than you would by thinking in terms of satisficing perceptual differences.

(IOW, the descriptive and normative aspects are pretty thoroughly intertwined, because the thing being described is also the thing that needs to be used to do the computation!)

comment by SilasBarta · 2010-08-31T01:49:01.635Z · LW(p) · GW(p)

Also, I should clarify another point:

If you and EY think that the PD players don't like to rat on their friends, all you are saying is that those standard PD payoffs aren't the ones that match the players' real utility functions, because the real functions would include a hefty penalty for being a rat.

My point was that I previously agreed with EY that the payoff matrix doesn't accurately represent how people would perceive the situation if they were in a LPDS, but that I now think that people's reaction to it could just as well be explained by assuming that they accept the canonical payoff matrix as accurate, but pursue those utilities under a constrained decision theory. And also, that their intuitions are due to that decision theory, not necessarily from valuing the outcomes differently.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T02:25:10.532Z · LW(p) · GW(p)

Ok, I think I see the distinction. I recognize that it is tempting to postulate a 2 part decision theory because it seems that we have two different kinds of considerations to deal with. It seems we just can't compare ethical motivations like loyalty with selfish motivations like getting a light sentence. "It is like comparing apples and oranges!", screams our intuition.

However my intuition has a piece screaming even louder, "It is one decision, you idiot! Of course you have to bring all of the various kinds of considerations together to make the decision. Shut up and calculate - then decide."

comment by Oscar_Cunningham · 2010-08-31T18:21:46.940Z · LW(p) · GW(p)

I dislike all examples involving omniscient beings.

I would also prefer to assume that natural selection endowed us with sub-conscious body language and other cues which make us very bad at lying.

The only thing Omega uses its omniscience for is to detect if you're lying, so if humans are bad at lying convincingly you don't need omniscience.

Also, "prefer to assume" indicates extreme irrationallity, you can't be rational if you are choosing what to believe based on anything other than the evidence, see Robin Hanson's post You Are Never Entitled to Your Opinion. Of course you probably didn't mean that, you probably just meant:

Natural selection endowed us with sub-conscious body language and other cues which make us very bad at lying.

Say what you mean, otherwise you end up with Belief in Belief.

Replies from: Perplexed, Perplexed
comment by Perplexed · 2010-08-31T18:30:24.250Z · LW(p) · GW(p)

As I have answered repeatedly on this thread, when I said "prefer to assume", I actually meant "prefer to assume". If you are interpreting that as "prefer to believe" you are not reading carefully enough.

One makes (sometimes fictional) assumptions when constructing a model. One is only irrational when one imagines that a model represents reality.

If it makes you happy, insert a link to some profundity by Eliezer about maps and territories at this point in my reply.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2010-08-31T19:45:14.501Z · LW(p) · GW(p)

Heh, serve me right for not paying attention.

comment by Perplexed · 2010-08-31T18:50:53.266Z · LW(p) · GW(p)

The only thing Omega uses its omniscience for is to detect if you're lying...

If I understand the OP correctly, it is important to him that this example not include any chit-chat between the hitchhiker and Omega. So what Omega actually detects is propensity to pay, not lying.

Minor point.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-31T19:06:04.068Z · LW(p) · GW(p)

In the ideal situation, it's important that there be no direct communication. A realistic situation can match this ideal one if you remove the constraint of "no chit-chat" but add the difficulty of lying.

Yes, this allows you (in the realistic scenario) to use an "honor hack" to make up for deficiencies in your decision theory (or utility function), but my point is that you can avoid this complication by simply having a decision theory that gives weight to SAMELs.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T20:29:53.336Z · LW(p) · GW(p)

Gives how much weight to SAMELs? Do we need to know our evolutionary(selective) history in order to perform the calculation?

My off-the-cuff objections to "constraints" were expressed on another branch of this discussion.

It is pretty clear that you and I have different "aesthetics" as to what counts as a "complication".

Replies from: SilasBarta
comment by SilasBarta · 2010-08-31T20:53:24.333Z · LW(p) · GW(p)

Gives how much weight to SAMELs? Do we need to know our evolutionary(selective) history in order to perform the calculation?

The answers determine whether you're trying to make your own decision theory reflectively consistent, or looking at someone else's. But either way, finding the exact relative weight and exact relevance of the evolutionary history is beyond the scope of the article; what's important is that SAMELs' explanatory power be used at all.

My off-the-cuff objections to "constraints" were expressed on another branch of this discussion.

Like I said in my first reply to you, the revealed preferences don't uniquely determine a utility function: if someone pays Omega in PH, then you can explain that either with a utility function that values just the survivor, or one that values the survivor and Omega. You have to look at desiderata other than UF consistency with revealed preferences.

It is pretty clear that you and I have different "aesthetics" as to what counts as a "complication".

Well, you're entitled to your own aesthetics, but not your own complexity. (Okay, you are, but I wanted it to sound catchy.) As I said in footnote 2, trying to account for someone's actions by positing more terminal values (i.e. positive terms in the utility function) requires you to make strictly more assumptions than positing fewer values and instead drawing on the implications of assumptions you'd have to make anyway.

comment by Pavitra · 2010-08-31T02:04:14.179Z · LW(p) · GW(p)

When you say you "prefer to assume", do you mean:

  1. you want to believe?

  2. your prior generally favors such? What evidence would persuade you to change your mind?

  3. you arrived at this belief through evidence? What evidence persuaded you?

  4. none of the above? Please elaborate.

  5. not even 4 is right -- my question is wrong? Please elaborate.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T02:15:24.503Z · LW(p) · GW(p)

When you say you "prefer to assume", [what] do you mean?

4

I mean that making assumptions as I suggest leads to a much more satisfactory model of the issues being discussed here. I don't claim my viewpoint is closer to reality (though the lack of an omniscient Omega certainly ought to give me a few points for style in that contest!). I claim that my viewpoint leads to a more useful model - it makes better predictions, is more computationally tractable, is more suggestive of ways to improve human institutions, etc. All of the things you want a model to do for you.

Replies from: Pavitra
comment by Pavitra · 2010-08-31T02:36:36.141Z · LW(p) · GW(p)

But how did you come to locate this particular model in hypothesis-space? Surely some combination of 2 and 3?

Replies from: Perplexed
comment by Perplexed · 2010-08-31T02:58:30.879Z · LW(p) · GW(p)

I read it in a book. It is quite standard.

And I'm pretty sure that the people who first invented it were driven by modeling motivations, rather than experiment. Mathematical techniques already exist to solve maximization problems. The first field which really looked at the issues in a systematic way was microeconomics - and this kind of model is the kind of thing that would occur to an economist. It all fits together into a pretty picture; most of the unrealistic aspects don't matter all that much in practice; bottom line is that it is the kind of model that gets you tenure if you are an Anglo-American econ professor.

Really and truly, the motivation was almost certainly not "Is this the way it really works?". Rather it was, "What is a simple picture that captures the main features of the truth, where "main" means the aspects that I can, in principle, quantify?"

comment by SilasBarta · 2010-08-31T00:30:55.100Z · LW(p) · GW(p)

Thanks for the reasoned reply. I guess I wasn't clear, because I actually agree with a lot of what you just said! To reply to your points as best I can:

I dislike the suggestion that natural selection finetuned (or filtered) our decision theory to the optimal degree of irrationality which was needed to do well in lost-in-desert situations involving omniscient beings. I would prefer to assume that natural selection endowed us with a rational or near-rational decision theory and then invested its fine tuning into adjusting our utility functions.

Natural selection filtered us for at least one omniscience/desert situation: the decision to care for offspring (in one particular domain of attraction). Like Omega, it prevents us (though with only near-perfect rather than perfect probability) from being around in the n-th generation if we don't care about the (n+1)th generation.

Also, why do you say that giving weight to SAMELs doesn't count as rational?

I would also prefer to assume that natural selection endowed us with sub-conscious body language and other cues which make us very bad at lying. I would prefer to assume that natural selection endowed us with a natural aversion to not keeping promises.

Difficulty of lying actually counts as another example of Parfitian filtering: from the present perspective, you would prefer to be able to lie (as you would prefer having slightly more money). However, by having previously sabotaged your ability to lie, people now treat you better. "Regarding it as suboptimal to lie" is one form this "sabotage" takes, and it is part of the reason you received previous benefits.

Ditto for keeping promises.

Therefore, my analysis of hitchhiker scenarios would involve 3 steps. (1) The hitchhiker rationally promises to pay. (2) the (non-omniscient) driver looks at the body language and estimates a low probability that the promise is a lie, therefore it is rational for the driver to take the hitchhiker into town. (3). The hitchhiker rationally pays because the disutility of paying is outweighed by the disutility of breaking a promise.

But I didn't make it that easy for you -- in my version of PH, there is no direct communication; Omega only goes by your conditional behavior. If you find this unrealistic, again, it's no different than what natural selection is capable of.

That is, instead of giving us an irrational decision theory, natural selection tuned the body language, the body language analysis capability, and the "honor" module (disutility for breaking promises) - tuned them so that the average human does well in interaction with other average humans in the kinds of realistic situations that humans face. And it all works with standard game/decision theory from Econ 401. All of morality is there in the utility function as can be measured by standard revealed-preference experiments.

But my point was that the revealed preference does not reveal a unique utility function. If someone pays Omega, you can say this reveals that they like Omega, or that they don't like Omega, but view paying it as a way to benefit themselves. But at the point where you start positing that each happens-to-win decision is made in order to satisfy yet-another terminal value, your description of the situation becomes increasingly ad hoc, to the point where you have to claim that someone terminally values "keeping a promise that was never received".

Replies from: Perplexed
comment by Perplexed · 2010-08-31T04:12:00.899Z · LW(p) · GW(p)

But I didn't make it that easy for you -- in my version of PH, there is no direct communication; Omega only goes by your conditional behavior. If you find this unrealistic, again, it's no different than what natural selection is capable of.

I find it totally unrealistic. And therefore I will totally ignore it. The only realistic scenario, and the one that natural selection tries out enough times so that it matters, is the one with an explicit spoken promise. That is how the non-omniscient driver gets the information he needs in order to make his rational decision.

But my point was that the revealed preference does not reveal a unique utility function.

Sure it does ... As long as you know whether or not an explicit promise was made to pay the driver, you can easily distinguish how much the driver gets because of the promise from what the driver gets because you like him.

comment by timtyler · 2010-08-31T20:08:40.260Z · LW(p) · GW(p)

Maybe we need a new decision theory for AIs. I don't know; I have barely begun to consider the issues.

The issues there, briefly. We want a decision theory that:

  • is smart;
  • we actually know how to implement efficiently with limited resources;
  • allows for the possibility that its mind is physical - and that extracting the gold atoms from its own mind is bad;
  • allows us to tell it what to do - as opposed to using carrot and stick;
  • isn't prone to the wirehead problem;
  • allows for an off switch - and other safety features.
comment by torekp · 2010-08-31T01:18:30.254Z · LW(p) · GW(p)

I dislike the suggestion that natural selection finetuned (or filtered) our decision theory to the optimal degree of irrationality

Or for that matter, the (globally) optimal degree of anything. For all we know, much of human morality may be an evolutionary spandrel. Perhaps, like the technological marvel of condoms, parts of morality are fitness-reducing byproducts of generally fitness-enhancing characteristics.

What I do like about the post is its suggestion that paying Omega for the ride is not simply utility-maximizing behavior, but acceptance of a constraint (filter). Robert Nozick used the term "side constraint". That seems descriptively accurate for typical refusals to break promises - more so than anything that can be stated non-tortuously in goal-seeking terms.

Now as a normative thesis, on the other hand, utility-maximization ... also isn't convincing. YMMV.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T03:51:47.390Z · LW(p) · GW(p)

What I do like about the post is its suggestion that paying Omega for the ride is not simply utility-maximizing behavior, but acceptance of a constraint (filter).

I dislike complicating the theory by using two kinds of entities (utilities and constraints). That strikes me as going one entity "beyond necessity". Furthermore, how do we find out what the constraints are? We have "revealed preference" theory for the utilities. Do you think you can construct a "revealed constraint" algorithm?

Robert Nozick used the term "side constraint". That seems descriptively accurate for typical refusals to break promises - more so than anything that can be stated non-tortuously in goal-seeking terms.

My opinion is exactly the opposite. I have rarely encountered a person who had made a promise which wouldn't be broken if the stakes were high enough. It is not a constraint. It is a (finite) disutility.

comment by Cyan · 2010-08-31T00:58:24.737Z · LW(p) · GW(p)
  • I dislike the suggestion that natural selection finetuned (or filtered) our decision theory to the optimal degree of irrationality which was needed to do well in lost-in-desert situations involving omniscient beings...
  • Therefore, my analysis of hitchhiker scenarios would involve 3 steps. (1) The hitchhiker rationally promises to pay. (2) The (non-omniscient) driver looks at the body language and estimates a low probability that the promise is a lie, therefore it is rational for the driver to take the hitchhiker into town. (3) The hitchhiker rationally pays because the disutility of paying is outweighed by the disutility of breaking a promise.

I recommend reading the off-site lead-in post Ungrateful Hitchhikers to see why the above points don't address some of the implications of the argument Silas is making.

Replies from: Perplexed, Perplexed
comment by Perplexed · 2010-08-31T02:05:30.869Z · LW(p) · GW(p)

I recommend reading the off-site lead-in post Ungrateful Hitchhikers to see why the above points don't address some of the implications of the argument Silas is making.

I've now read it. I'll set aside the fact that he is attempting to model owners of intellectual property as omniscient. I guess he is trying to slip in that old "But what if everybody did that?" argument. See, Omega-IP-owner knows that if you are an IP pirate, so is everyone else, so he won't even generate IP. So everyone dies in the desert. Well, I tend to think that Joseph Heller in "Catch 22" had the best answer to the "What if everyone did it?" gambit: "Well if everyone else did it, then I would be a damn fool to do any differently, wouldn't I?"

The right parable for the argument SilasBarta is trying to make comes from biology - from gene-clone selection theory (roughly Dawkins's Selfish gene). Suppose you are a red flower in a field of red flowers. Along comes a bee, hoping to pick up a little nectar. But what you really want is the pollen the bee carries, or maybe you want the bee to pick up your pollen. The question is whether you should actually provide nectar to the bee. She has already done what you wanted her to do. Giving her some nectar doesn't cost you very much, but it does cost something. So why pay the bee her nectar?

The answer is that you should give the bee the nectar because all the other flowers in the field are your siblings - if your genes tell you to stiff the bee, then their genes tell them the same. So the bee stops at just a few red flowers, comes up dry each time, and decides to try the white flowers in the next field. Jackpot! The bee returns to the hive, and soon there are hundreds of bees busily pollinating the white flowers. And next year, no more red flowers.

There, the parable works and we didn't even have to assume that the bee is omniscient.
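
A toy rendering of the parable (nothing here is meant as real biology; the strategies and the three-sample rule are purely illustrative) shows the correlated-strategy point:

```python
# The "stiff the bee" gene is shared by every flower in the red field, so the
# bee's first few visits predict all the rest, and she moves on.

def bee_keeps_visiting(field_strategy, samples=3):
    """The bee samples a few flowers; she stays only if someone pays nectar."""
    return any(field_strategy() for _ in range(samples))

red_field = lambda: False     # gene says: never provide nectar
white_field = lambda: True    # gene says: always provide nectar

print(bee_keeps_visiting(red_field))    # False -> no pollination, no red flowers next year
print(bee_keeps_visiting(white_field))  # True  -> pollinated
```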

Incidentally, if we now go back and look at my analysis of the Hitchhiker you will notice that my solution works because the driver expects almost every person he encounters to have an "honor module". He doesn't know for sure that the hitchhiker's honor is still intact, but it seems like a reasonable bet. Just as the bee guesses that the next flower she visits will provide nectar. Just as the author of "Steal this Book" guesses that most people won't.

I still much prefer my own analysis over that of the OP.

Replies from: SilasBarta, Pavitra
comment by SilasBarta · 2010-08-31T02:41:51.160Z · LW(p) · GW(p)

Okay, I think I see the source of the disconnect: Though my examples involve an omniscient being, that's not actually necessary for the points to hold. It's just looking at an extreme end. It would remain optimal to pay even if Omega were only 90% accurate, or 60%, etc.

As for the decision-theoretics of "what if everyone did it?" type reasoning, there's a lot more to consider than what you've given. (A few relevant articles.) Most importantly, by making a choice, you're setting the logical output of all sufficiently similar processes, not just your own.

In a world of identical beings, they would all "wake up" from any Prisoner's Dilemma situation finding that they had both defected, or both cooperated. Viewed in this light, it makes sense to cooperate, since it will mean waking up in the pure-cooperation world, even though your decision to cooperate did not literally cause the other parties to cooperate (even if, from the inside, it feels as though it did).

Making the situation more realistic does not change this conclusion either. Imagine you are positively, but not perfectly, correlated with the other beings; and that you go through thousands of PDs at once with different partners. In that case, you can defect, and wake up having found partners that cooperated. Maybe there are many such partners. However, from the fact that you regard it as optimal to always defect, it follows that you will wake up in a world with more defecting partners than if you had regarded it as optimal in such situations to cooperate.

As before, your decision does not cause others to cooperate, but it does influence what world you wake up in.

(Edit: And likewise, for the case of IP, if you defect, you will (arguably) find that you wake up in a world where you get lots of great music for free ... but a fundamentally different world, that's maybe not as pleasant as it could be...)
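
Here's a toy calculation of that correlated-partners point (the payoff matrix and the cooperation probabilities are made up for illustration, not derived from anything above):

```python
# Defecting "wins" each individual pairing, but being a defector lands you
# in a world with more defecting partners.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def expected_payoff(my_move, p_partner_cooperates):
    return (p_partner_cooperates * PAYOFF[(my_move, "C")]
            + (1 - p_partner_cooperates) * PAYOFF[(my_move, "D")])

# Suppose a cooperator, being correlated with cooperators, meets 80% cooperation,
# while a defector meets only 30%.
print(expected_payoff("C", 0.8))  # 2.4
print(expected_payoff("D", 0.3))  # 2.2
```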


The bee situation you described is very similar to the parent-child problem I described: parents that don't care for their children don't get their genes into the next generation. And likewise, flowers that don't give nectar don't get their genes into the next generation. It is this gene-centeredness that can create an incentive structure/decision theory capable of such "unselfish" decisions!

Replies from: Cyan
comment by Cyan · 2010-08-31T20:20:56.940Z · LW(p) · GW(p)

Though my examples involve an omniscient being, that's not actually necessary for the points to hold. It's just looking at an extreme end. It would remain optimal to pay even if Omega were only 90% accurate, or 60%, etc.

Since I read your IP example a while ago, this seemed obvious to me, but I guess it should be emphasized in the text more strongly than it currently is.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T20:46:43.126Z · LW(p) · GW(p)

But making Omega less accurate doesn't alleviate the bizarreness of Omega. The incredible thing isn't that Omega is accurate. It is that his "predictions" are influenced (acausally?) by future events. Decreasing the accuracy of the predictions just makes it harder to do the experiments that show conclusively that Omega is doing something supernatural. It doesn't make what he does any less supernatural.

Replies from: SilasBarta, timtyler
comment by SilasBarta · 2010-08-31T21:06:11.213Z · LW(p) · GW(p)

Actually, Omega's prediction and your action are both the result of a common cause (at least under a model of the situation that meets the given problem constraints -- see EY's justification in the case of Newcomb's problem [1].) This doesn't require backwards-flowing causality.

See also Anna Salamon's article about the multiple Newcomb's problem causal models.

[1] This article. The paragraph beginning with the words "From this, I would argue, TDT follows." goes over the constraints that lead EY to posit the causal model I just gave.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T22:59:01.098Z · LW(p) · GW(p)

This doesn't require backwards-flowing causality.

With all due respect, I have to disagree. My decision, made now, is modeled to change the output of an algorithm which, in reality, spit out its result some time ago.

Universe: Make a decision.
Me: What are my choices?
Universe: You don't have any choices. Your response was determined long ago.
Me: Uh, so how am I supposed to decide now?
Universe: Just tell me which result you would prefer.
Me: The one that gives me the most utility.
Universe: Poof. Congratulations, you have made the best decision. Thank you for choosing to use TDT, the decision theory which makes use of the secret power of the quantum to make you rich.

Yeah, I'm being a bit unfair. But, as applied to human decision making, it still looks to me as though there is causation (i.e. information) running back in time from my "free will" decision today to some "critical nexus" in the past.

Replies from: SilasBarta, timtyler
comment by SilasBarta · 2010-09-01T01:33:50.141Z · LW(p) · GW(p)

Are you up-to-date on the free will sequence? Now would be a good time, as it sorts out the concepts of free will, determinism, and choice.

Because I never send someone off to read something as my response without summarizing what I expect them to learn: You are still making a choice, even if you are in a deterministic world. A computer program applied to Parfit's Hitchhiker makes a choice in basically the same sense that you make a choice when you're in it.

With that in mind, you can actually experiment with what it's like to be Omega. Assume that you are given the source code of a program applicable to Parfit's Hitchhiker. You're allowed to review it, and you decide whether to choose "rescue" based on whether you expect that the program will output "pay" after waking up, and then it runs.

In that case, the program is making a choice. You're making a perfect prediction [1] of its choice. But where's the reverse causation?

[1] except to the extent the program uses random predicates, in which case you figure out the probability of being paid, and whether this justifies a rescue.
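
Here is a minimal sketch of that setup (the agent program is a hypothetical stand-in; in a real experiment you'd be handed arbitrary source code to inspect):

```python
# "Being Omega": you are handed the agent's decision procedure, you run
# (or inspect) it, and you choose "rescue" only if it would pay.

def hitchhiker_program():
    """What this agent outputs after waking up, the rescue already in the past."""
    return "pay"

def omega(agent_program):
    predicted = agent_program()          # predict by simulating the source
    return "rescue" if predicted == "pay" else "leave in desert"

print(omega(hitchhiker_program))  # -> "rescue"
# The prediction and the later action are two runs of the same code:
# correlation without any backwards causation.
```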

Replies from: Perplexed
comment by Perplexed · 2010-09-01T02:32:09.016Z · LW(p) · GW(p)

I'm pretty sure I have read all of the free will sequence. I am a compatibilist, and have been since before EY was born. I am quite happy with analyses that have something assumed free at one level (of reduction) and determined at another level. I still get a very bad feeling about Omega scenarios. My intuition tells me that there is some kind of mind projection fallacy being committed. But I can't put my finger on exactly where it is.

I appreciate that the key question in any form of decision theory is how you handle the counter-factual "surgery". I like Pearl's rules for counter-factual surgery: If you are going to assume that some node is free, and to be modeled as controlled by someone's "free decision" rather than by its ordinary causal links, then the thing to do is to surgically sever the causal links as close to the decision node as possible. This modeling policy strikes me as simply common sense. My gut tells me that something is being done wrong when the surgery is pushed back "causally upstream" - to a point in time before the modeled "free decision".
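
To make the surgery concrete, here is a toy model (my own construction, not Pearl's notation; the node names and values are purely illustrative) of cutting the link at the decision node:

```python
# Force the decision node's value and sever its incoming link, leaving
# everything computed upstream untouched.

model = {
    "disposition": lambda w: "payer",
    "prediction":  lambda w: "will pay" if w["disposition"] == "payer" else "won't pay",
    "decision":    lambda w: "pay" if w["disposition"] == "payer" else "refuse",
}

def run(model, do_decision=None):
    w = {}
    w["disposition"] = model["disposition"](w)
    w["prediction"] = model["prediction"](w)
    # surgery: cut the link from disposition and set the decision directly
    w["decision"] = do_decision if do_decision is not None else model["decision"](w)
    return w

print(run(model))                        # ordinary evaluation
print(run(model, do_decision="refuse"))  # surgery at the decision node:
# the upstream "prediction" is unchanged -- which is exactly the worry above.
```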

I understand that if we are talking about the published "decision making" source code of a robot, then the true "free decision" is actually made back there upstream in the past. And that if Omega reads the code, then he can make pretty good predictions. What I don't understand is why the problem is not expressed this way from the beginning.

"A robot in the desert need its battery charged soon. A motorist passes by, checks the model number, looks up the robot specs online, and then drives on, knowing this robot doesn't do reciprocity." A nice simple story. Maybe the robot designer should have built in reciprocity. Maybe he will design differently next time. No muss, no fuss, no paradox.

I suppose there is not much point continuing to argue about it. Omega strikes me as both wrong and useless, but I am not having much luck convincing others. What I really should do is just shut up on the subject and simply cringe quietly whenever Omega's name is mentioned.

Thanks for a good conversation on the subject, though.

Replies from: timtyler, Lightwave, SilasBarta
comment by timtyler · 2010-09-01T09:16:38.130Z · LW(p) · GW(p)

What I don't understand is why the problem is not expressed this way from the beginning.

I don't know for sure - but perhaps a memetic analysis of paradoxes might throw light on the issue:

Famous paradoxes are often the ones that cause the most confusion and discussion. Debates and arguments make for good fun and drama - and so are copied around by the participants. If you think about it that way, finding a "paradox" that is confusingly expressed may not be such a surprise.

Another example would be: why does the mirror reverse left and right but not up and down?

There, the wrong way of looking at the problem seems to be built into the question.

(Feynman's answer).

comment by Lightwave · 2010-09-01T18:46:02.193Z · LW(p) · GW(p)

What I don't understand is why the problem is not expressed this way from the beginning.

Because the point is to explain to the robot why it's not getting its battery charged?

Replies from: Perplexed, timtyler
comment by Perplexed · 2010-09-01T19:11:52.958Z · LW(p) · GW(p)

That is either profound, or it is absurd. I will have to consider it.

I've always assumed that the whole point of decision theory is to give normative guidance to decision makers. But in this case, I guess we have two decision makers to consider - robot and robot designer - operating at different levels of reduction and at different times. To say nothing of any decisions that may or may not be being made by this Omega fellow.

My head aches. Up to now, I have thought that we don't need to think about "meta-decision theory". Now I am not sure.

comment by timtyler · 2010-09-01T19:28:26.270Z · LW(p) · GW(p)

Mostly we want well-behaved robots - so the moral seems to be to get the robot maker to build a better robot that has a good reputation and can make credible commitments.

comment by SilasBarta · 2010-09-01T02:51:33.369Z · LW(p) · GW(p)

Hm, that robot example would actually be a better way to go about it...

comment by timtyler · 2010-08-31T23:28:15.194Z · LW(p) · GW(p)

I think we discussed that before - if you think you can behave unpredictably and outwit Omega, then to stay in the spirit of the problem you have to imagine you have built a deterministic robot, published its source code - and it will be visited by Omega (or maybe just an expert programmer).

Replies from: Perplexed
comment by Perplexed · 2010-08-31T23:47:05.112Z · LW(p) · GW(p)

I am not trying to outwit anyone. I bear Omega no ill will. I look forward to being visited by that personage.

But I really doubt that your robot problem is really "in the spirit" of the original. Because, if it is, I can't see why the original formulation still exists.

Replies from: timtyler
comment by timtyler · 2010-09-01T07:15:08.161Z · LW(p) · GW(p)

Well, sure - for one thing, in the scenarios here, Omega is often bearing gifts!

You are supposed to treat the original formulation in the same way as the robot one, IMO. You are supposed to believe that a superbeing who knows your source code can actually exist - and that you are not being fooled or lied to.

If your problem is that you doubt that premise, then it seems appropriate to get you to consider a rearranged version of the problem - where the premise is more reasonable - otherwise you can use your scepticism to avoid considering the intended problem.

The robot formulation is more complex - and that is one reason for it not being the usual presentation of the problem. However, if you bear in mind the reason for many people here being interested in optimal decision theory in the first place, I hope you can see that it is a reasonable scenario to consider.

FWIW, much the same goes for your analysis of the hitch-hiker problem. There your analysis is even more tempting - but you are still dodging the "spirit" of the problem.

comment by timtyler · 2010-08-31T23:25:29.844Z · LW(p) · GW(p)

The incredible thing isn't that Omega is accurate. It is that his "predictions" are influenced (acausally?) by future events.

You mean that he predicts future events? That is sometimes possible to do - in cases where they are reasonably well determined by the current situation.

comment by Pavitra · 2010-08-31T02:11:50.841Z · LW(p) · GW(p)

The answer is that you should give the bee the nectar because all the other flowers in the field are your siblings

Isn't this group selectionism? Surely the much more likely explanation is that producing more or better nectar attracts the bee to you over all the other red flowers.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T02:35:39.929Z · LW(p) · GW(p)

Isn't this group selection?

I would prefer to call it kin selection, but some people might call it group selection. It is one of the few kinds of group selection that actually work.

Surely the much more likely explanation is that producing more or better nectar attracts the bee to you over all the other red flowers.

That wasn't part of my scenario, nor (as far as I know) biologically realistic. It is my bright red color that attracts the bee, and in this regard I am competing with my sibs. But the bee has no sense organ that can remotely detect the nectar. It has to actually land and do the pollen transfer bit before it finds out whether the nectar is really there. So, it is important that I don't provide the color before I am ready with nectar and the sexy stuff. Else I have either wasted nectar or pissed off the bee.

comment by Perplexed · 2010-08-31T01:20:48.857Z · LW(p) · GW(p)

Thx. I'll do that.

comment by novalis · 2010-08-31T18:21:03.066Z · LW(p) · GW(p)

There is rarely a stable equilibrium in evolutionary games. When we look at the actual history of evolution, it is one of arms races -- every time a new form of signaling is invented, another organism figures out how to fake it. Any Parfitian filter can be passed by an organism that merely fools Omega. And such an organism will do better than one who actually pays Omega.
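
A toy payoff comparison of that invasion argument (all numbers invented, and the detection probability is my own addition) shows why faking can pay, and what it takes for it not to:

```python
# An organism that passes the filter without paying the cost outscores an
# honest payer -- unless faking gets detected often enough.

COST_OF_PAYING = 1.0
BENEFIT_OF_PASSING_FILTER = 10.0

def fitness(strategy, p_fake_detected=0.0):
    if strategy == "honest payer":
        return BENEFIT_OF_PASSING_FILTER - COST_OF_PAYING
    # faker: gets the benefit only when undetected, never pays the cost
    return (1 - p_fake_detected) * BENEFIT_OF_PASSING_FILTER

print(fitness("honest payer"))                  # 9.0
print(fitness("faker", p_fake_detected=0.0))    # 10.0 -> fakers invade
print(fitness("faker", p_fake_detected=0.2))    # 8.0  -> honesty is stable
```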

Replies from: RobinZ
comment by RobinZ · 2010-09-01T13:32:32.534Z · LW(p) · GW(p)

I think this crystallizes what didn't make sense to me about his parenthood example - there is no Omega for that process, no agent to be fooled, even hypothetically.

comment by Mass_Driver · 2010-08-30T23:17:26.447Z · LW(p) · GW(p)

are limited to using a decision theory that survived past social/biological Parfitian filters.

What really frustrates me about your article is that you never specify a decision theory, list of decision theories, or category of decision theories that would be likely to survive Parfitian filters.

I agree with User:Perplexed that one obvious candidate for such a decision theory is the one we seem to actually have: a decision theory that incorporates values like honor, reciprocity, and filial care into its basic utility function. Yet you repeatedly insist that this is not what is actually happening...why? I do not understand.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-30T23:59:30.876Z · LW(p) · GW(p)

What really frustrates me about your article is that you never specify a decision theory, list of decision theories, or category of decision theories that would be likely to survive Parfitian filters.

I thought I did: decision theories that give weight to SAMELs.

I agree with User:Perplexed that one obvious candidate for such a decision theory is the one we seem to actually have: a decision theory that incorporates values like honor, reciprocity, and filial care into its basic utility function. Yet you repeatedly insist that this is not what is actually happening...why? I do not understand.

For the same reason one wouldn't posit "liking Omega" as a good explanation for why someone would pay Omega in the Parfit's Hitchhiker problem.

Replies from: Mass_Driver, Perplexed
comment by Mass_Driver · 2010-08-31T00:12:07.626Z · LW(p) · GW(p)

I thought I did: decision theories that give weight to SAMELs.

Sure, but this is borderline tautological -- by definition, Parfitian filters will tend to filter out decision theories that assign zero weight to SAMELs, and a SAMEL is the sort of consideration that a decision theory must incorporate in order to survive Parfitian filters. You deserve some credit for pointing out that assigning non-zero weight to SAMELs involves non-consequentialist reasoning, but I would still like to know what kind of reasoning you have in mind. "Non-zero" is a very, very broad category.

For the same reason one wouldn't posit "liking Omega" as a good explanation for why someone would pay Omega in the Parfit's Hitchhiker problem.

Right, but as Perplexed pointed out, humans regularly encounter other humans and more or less never encounter penny-demanding, paradox-invoking superpowers. I would predict (and I suspect Perplexed would predict) that if we had evolved alongside Omegas, we would have developed a capacity to like Omegas in the same way that we have developed a capacity to like other humans.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-31T00:42:36.295Z · LW(p) · GW(p)

Sure, but this is borderline tautological ... You deserve some credit for pointing out that assigning non-zero weight to SAMELs involves non-consequentialist reasoning, but I would still like to know what kind of reasoning you have in mind.

There wouldn't be much point to further constraining the set, considering that it's only the subjunctive action that matters, not the reasoning that leads up to it. As I said on my blog, it doesn't matter whether you would decide to pay because you:

  • feel honor-bound to do so;
  • feel so grateful to Omega that you think it deserves what it wanted from you;
  • believe you would be punished with eternal hellfire if you didn't, and dislike hellfire;
  • like to transfer money to Omega-like beings, just for the heck of it;
  • or for any other reason.

So if I'm going to list all the theories that win on PH-like problems, it's going to be a long list, as it includes (per the Drescher quote) everyone that behaves as if they recognized the SAMEL, including people who simply feel "grateful".

To answer the question of "What did I say that's non-tautological?", it's that a decision theory that is optimal in a self-interested sense will not merely look at the future consequences (not necessarily a redundant term), but will weight the acausal consequences on par with them, bypassing the task of having to single out each intuition and elevate it to a terminal value.

Edit: And, beyond that, it's to show how this acausal weighting coincides with what we call morality, explaining why we have the category in the first place.
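
One rough sketch of what "weighting acausal consequences on par with future ones" could look like (the numbers and the weight parameter are illustrative only, not a definitive formulation):

```python
# Weight the acausal ("subjunctive") consequences of an action alongside its
# future causal ones, rather than positing a new terminal value per intuition.

def score(action, causal_payoff, subjunctive_payoff, weight=1.0):
    """Future causal payoff plus (weighted) payoff of the worlds made
    consistent by being the kind of agent that takes this action."""
    return causal_payoff[action] + weight * subjunctive_payoff[action]

causal      = {"pay": -0.01, "refuse": 0.0}      # paying has only future costs
subjunctive = {"pay": 100.0, "refuse": -100.0}   # but payers get rescued at all

print(max(causal, key=lambda a: score(a, causal, subjunctive)))  # -> "pay"
```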

Right, but as Perplexed pointed out, humans regularly encounter other humans and more or less never encounter penny-demanding, paradox-invoking superpowers. I would predict (and I suspect Perplexed would predict) that if we had evolved alongside Omegas, we would have developed a capacity to like Omegas in the same way that we have developed a capacity to like other humans.

And as I said to Perplexed, natural selection was our Omega.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T05:24:37.597Z · LW(p) · GW(p)

And as I said to Perplexed, natural selection was our Omega.

Did you? I'm sorry I missed it. Could you explain it?

I can see how NS might be thought of as a powerful psychic capable of discerning our true natures. And I can see, maybe, how NS cannot itself easily be modeled as a rational decision maker making decisions to maximize its own utility. Hence we must treat it as a fairly arbitrary agent with a known decision algorithm. Modeling NS as a variant of Omega is something I had never thought of doing before. Is there anything already written down justifying this viewpoint?

Replies from: SilasBarta
comment by SilasBarta · 2010-08-31T11:58:16.223Z · LW(p) · GW(p)

This was the point I made in the second section of the article.

Replies from: Perplexed
comment by Perplexed · 2010-08-31T16:25:49.133Z · LW(p) · GW(p)

This was the point I made in the second section of the article.

I read the article again, but didn't see the point being made clearly at all.

Nevertheless, the point has been made right here, and I think it is an important point. I would urge anyone promoting decision theories of the UDT/TDT family to research the theory of kin selection in biological evolution - particularly the justification of "Hamilton's rule". Also, the difference between the biological ESS version of game theory and the usual "rational agent" approach.

I think that it should be possible to cleanly merge these Omega-inspired ideas into standard utility maximization theory by using a theoretical construct something like Hamilton's "inclusive fitness". "Inclusive utility". I like the sound of that.
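
Something like the following toy definition is the kind of thing I have in mind, by analogy with Hamilton's rule (an act costing the actor C is favored when rB > C, with r the relatedness and B the benefit to kin); the numbers are made up and this is only a sketch of the analogy:

```python
def inclusive_utility(direct_utility, kin_effects):
    """Direct utility plus relatedness-weighted utility delivered to kin."""
    return direct_utility + sum(r * b for r, b in kin_effects)

# Caring for a child: cost 2 to the parent, benefit 10 to the child (r = 0.5).
do_nothing = inclusive_utility(0.0, [])
care       = inclusive_utility(-2.0, [(0.5, 10.0)])
print(care > do_nothing)  # True: the "self-sacrificial" act maximizes inclusive utility
```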

Replies from: SilasBarta
comment by SilasBarta · 2010-08-31T17:50:12.166Z · LW(p) · GW(p)

I read the article again, but didn't see the point being made clearly at all.

I'm referring to the point I made here:

Sustainable self-replication as a Parfitian filter

Though evolutionary psychology has its share of pitfalls, one question should have an uncontroversial solution: "Why do parents care for their children, usually at great cost to themselves?" The answer is that their desires are largely set by evolutionary processes, in which a “blueprint” is slightly modified over time, and the more effective self-replicating blueprint-pieces dominate the construction of living things. Parents that did not have sufficient "built-in desire" to care for their children would be weeded out; what's left is (genes that construct) minds that do have such a desire.

This process can be viewed as a Parfitian filter: regardless of how much parents might favor their own survival and satisfaction, they could not get to that point unless they were "attached" to a decision theory that outputs actions sufficiently more favorable toward one's children than one's self.

Do you think that did not make clear the similarity between Omega and natural selection?

Replies from: Perplexed
comment by Perplexed · 2010-08-31T18:19:37.741Z · LW(p) · GW(p)

Do you think that did not make clear the similarity between Omega and natural selection?

No, it did not. I see it now, but I did not see it at first. I think I understand why it was initially obvious to you but not to me. It all goes back to a famous 1964 paper in evolutionary theory by William Hamilton. His theory of kin selection.

Since Darwin, it has been taken as axiomatic that parents will care for children. Of course they do, says the Darwinian. Children are the only thing that does matter. All organisms are mortal; their only hope for genetic immortality is by way of descendants. The only reason the rabbit runs away from the fox is so it can have more children, sometime in the near future. So, as a Darwinian, I saw your attempt to justify parental care using Omega as just weird. We don't need to explain that. It is just axiomatic.

Then along came Hamilton with the idea that taking care of descendants (children and grandchildren) is not the whole story. Organisms are also selected to take care of siblings, and cousins and nephews and nieces. That insight definitely was not part of standard received Darwinism. But Hamilton had the math to prove it. And, as Trivers and others pointed out, even the traditional activities of taking care of direct descendants should probably be treated as just one simple case of Hamilton's more general theory.

Ok, that is the background. I hope it is now clear if I say that the reason I did not see parental care as an example of a "Parfitian filter" is exactly like the reason traditional Darwinists did not at first see parental care as just one more example supporting Hamilton's theory. They didn't get that point because they already understood parental care without having to consider this new idea.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-31T18:44:22.622Z · LW(p) · GW(p)

Okay, thanks for explaining that. I didn't intend for that explanation of parental behavior to be novel (I even said it was uncontroversial), but rather, to show it as a realistic example of a Parfitian filter, which motivates the application to morality. In any case, I added a note explicitly showing the parallel between Omega and natural selection.

comment by Perplexed · 2010-08-31T00:14:44.167Z · LW(p) · GW(p)

why? I do not understand.

For the same reason one wouldn't posit "liking Omega" as a good explanation for why someone would pay Omega in the Parfit's Hitchhiker problem.

Could you expand on this? I'm pretty sure that "liking the driver" was not part of my "solution".

I suppose my "honor module" could be called "irrational" .... but, it is something that the hitchhiker is endowed with that he cannot control, no more than he can control his sex drive. And it is evolutionarily a useful thing to have. Or rather, a useful thing to have people believe you have. And people will tend to believe that, even total strangers, if natural selection has made it an observable feature of human nature.

comment by RobinZ · 2010-08-31T12:02:19.718Z · LW(p) · GW(p)

Parenthood doesn't look like a Parfait's Hitchhiker* to me - are you mentioning it for some other reason?

* Err, Parfit's Hitchhiker. Thanks, Alicorn!

Edit: I have updated my position downthread.

Replies from: gwern, Alicorn, SilasBarta, PhilGoetz, Snowyowl
comment by gwern · 2010-08-31T13:26:45.478Z · LW(p) · GW(p)

The researchers wrote up their findings on the lottery winners and the accident victims in the Journal of Personality and Social Psychology. The paper is now considered one of the founding texts of happiness studies, a field that has yielded some surprisingly morose results. It’s not just hitting the jackpot that fails to lift spirits; a whole range of activities that people tend to think will make them happy—getting a raise, moving to California, having kids—do not, it turns out, have that effect. (Studies have shown that women find caring for their children less pleasurable than napping or jogging and only slightly more satisfying than doing the dishes.)

http://www.newyorker.com/arts/critics/books/2010/03/22/100322crbo_books_kolbert?currentPage=all

(Glad I kept this citation; knew at some point I would run into someone claiming parenthood is a joy. Wish I had the one that said parenthood was a net gain in happiness only years/decades later after the memories have been distorted enough.)

Replies from: simplicio, xamdam, RobinZ
comment by simplicio · 2010-09-01T03:53:28.195Z · LW(p) · GW(p)

The basic idea about parents and hedonic psychology, as I understand it, is that your moment-to-moment happiness is not typically very high when you have kids, but your "tell me a story" medium/long term reflective happiness may be quite high.

Neither of those is privileged. Have you ever spent a day doing nothing but indulging yourself (watching movies, eating your favourite foods, relaxing)? If you're anything like me you find that even though most moments during the day were pleasant, the overall experience of the day was nasty and depressing.

Basically, happiness is not an integral of moment-to-moment pleasure, so while it's naive to say parenting is an unqualified joy, it's not so bleak as to be only a good thing after the memories are distorted by time.

Replies from: a_parent
comment by a_parent · 2010-09-01T04:40:25.118Z · LW(p) · GW(p)

As a parent I can report that most days my day-wise maximum moment-to-moment happiness is due to some interaction with my child.

But then, my child is indisputably the most lovable child on the planet.

(welcome thread link not necessary)

Replies from: simplicio
comment by simplicio · 2010-09-01T05:05:42.874Z · LW(p) · GW(p)

Then let me just say, welcome!

As a parent I can report that most days my day-wise maximum moment-to-moment happiness is due to some interaction with my child.

I'm inclined to believe you, but note that what you said doesn't quite contradict the hypothesis, which is that if you were not a parent, your day-wise maximum (from any source) would probably be higher.

Also, beware of attributing more power to introspection than it deserves, especially when the waters are already muddied by the normativity of parents' love for their children. You say your happiest moments are with your child, but a graph of dopamine vs. time might (uninspiringly) show bigger spikes whenever you ate sugar. Or it might not. My point is that I'm not sure how much we should trust our own reflections on our happiness.

Replies from: a_parent, gwern
comment by a_parent · 2010-09-01T14:13:44.094Z · LW(p) · GW(p)

note that what you said doesn't quite contradict the hypothesis

Fair point. So let me just state that as far as I can tell, the average of my DWMM2M happiness is higher than it was before my child was born, and I expect that in a counterfactual world where my spouse and I didn't want a child and consequently didn't have one, my DWMM2M happiness would not be as great as in this one. It's just that knowing what I know (including what I've learned from this site) and having been programmed by evolution to love a stupendous badass (and that stupendous badass having been equally programmed to love me back), I find that watching that s.b. unfold into a human before my eyes causes me happiness of a regularity and intensity that I personally have never experienced before.

comment by gwern · 2010-09-01T12:55:20.774Z · LW(p) · GW(p)

My point is that I'm not sure how much we should trust our own reflections on our happiness.

I would mischievously point out that things like the oxytocin released after childbirth ought to make us especially wary of bias when it comes to kids. After all, there is no area of our life that evolution could be more concerned about than the kids. (Even your life is worth less than a kid or two, arguably, from its POV.)

Replies from: a_parent
comment by a_parent · 2010-09-01T14:14:19.529Z · LW(p) · GW(p)

That oxytocin &c. causes us to bond with and become partial to our children does not make any causally subsequent happiness less real.

Replies from: gwern
comment by gwern · 2010-09-01T14:31:01.704Z · LW(p) · GW(p)

So, then, you would wirehead? It seems to me to be the same position.

Replies from: a_parent, simplicio
comment by a_parent · 2010-09-01T15:33:09.328Z · LW(p) · GW(p)

I wouldn't: I have preferences about the way things actually are, not just how they appear to me or what I'm experiencing at any given moment.

Replies from: gwern
comment by gwern · 2010-09-01T16:35:17.909Z · LW(p) · GW(p)

So that use of oxytocin (and any other fun little biases and sticks and carrots built into us) is a 'noble lie', justified by its results?

In keeping with the Niven theme, then: you would not object to being tasped by a third party solicitous of your happiness?

Replies from: a_parent
comment by a_parent · 2010-09-01T17:26:22.173Z · LW(p) · GW(p)

Er, what? Please draw a clearer connection between the notion of having preferences over the way things actually are and the notion that our evolutionarily constructed bias/carrot/stick system is a 'noble lie'.

I'm not categorically against being tasped by a third party, but I'd want that third party to pay attention to my preferences, not merely my happiness. I'd also require the third party to be more intelligent than the most intelligent human who ever existed, and not by a small margin either.

Replies from: gwern
comment by gwern · 2010-09-02T12:53:19.127Z · LW(p) · GW(p)

Alright, I'll put it another way. You seem very cavalier about having your utility-function/preferences modified without your volition. You defend a new mother's utility-function/preferences being modified by oxytocin, and in this comment you would allow a third party to tasp you and get you addicted to wireheading. When exactly are such involuntary manipulations permitted?

Replies from: a_parent
comment by a_parent · 2010-09-02T13:35:27.142Z · LW(p) · GW(p)

They are permitted by informed consent. (A new mother may not know in detail what oxytocin does, but would have to be singularly incurious not to have asked other mothers what it's like to become a mother.)

you would allow a third party to tasp you and get you addicted to wireheading

No, I wouldn't. I required the third party to pay attention to my preferences, not just my happiness, and I've already stated my preference to not be wireheaded.

I can't help but get the feeling that you have some preconceived notions about my personal views which are preventing you from reading my comments carefully. ETA: Well, no, maybe you just believe remote stimulation of the pleasure centers of one's brain to be inherently addicting, whereas I just assumed that a superintelligent being hitting my brain with remote stimulation could avoid causing addiction if it was motivated to do so.

Replies from: gwern
comment by gwern · 2010-09-02T14:33:35.428Z · LW(p) · GW(p)

Well, no, maybe you just believe remote stimulation of the pleasure centers of one's brain to be inherently addicting, whereas I just assumed that a superintelligent being hitting my brain with remote stimulation could avoid causing addiction if it was motivated to do so.

Well, I figure wireheading is either intrinsically addicting, by definition (what else could addiction be motivated by but pleasure?), or so close to it as to make little practical difference. There are a number of rat and mouse studies that involve sticking electrodes into the pleasure center and gaining complete control, and the researchers don't mention any rat or mouse ever heroically defying the stimulus through sheer force of will, which suggests very bad things for any humans so situated.

Replies from: Perplexed, a_parent, pjeby
comment by Perplexed · 2010-09-02T15:43:36.896Z · LW(p) · GW(p)

there are a number of rat/mice studies which entail sticking electrodes into the pleasure center and gaining complete control and the researchers don't mention any mice/rat ever heroically defying the stimulus through sheer force of will, which suggests very bad things for any humans so situated.

Perhaps the sheer-force-of-will meters were malfunctioning in these experiments.

More seriously, let's create a series of thought experiments, all involving actions by "Friendly" AI. (FAI. Those were scare quotes. I won't use them again. You have been warned!). In each case, the question in the thought experiment is whether the FAI behavior described is prima facie evidence that the FAI has been misprogrammed.

Thought experiment #1: The FAI has been instructed to respect the autonomy of the human will, but also to try to prevent humans from hurting themselves. Therefore, in cases where humans have threatened suicide, the FAI offers the alternative of becoming a Niven wirehead. No tasping, it is strictly voluntary.

Thought experiment #2: The FAI makes the wirehead option available to all of mankind. It also makes available effective, but somewhat unpleasant, addiction treatment programs for those who have tried the wire, but now wish to quit.

Thought experiment #3: The request for addiction treatment is irrevocable, once treated, humans do not have the option of becoming rewired.

Thought experiment #4: Practicing wireheads are prohibited from contributing genetically to the future human population. At least part of the motivation of the FAI in the whole wirehead policy is eugenic. The FAI wishes to make happiness more self-actualized in human nature, and less dependent on the FAI and its supplied technologies.

Thought experiment #5: This eugenic intervention is in conflict with various other possible eugenic interventions which the FAI is contemplating. In particular, the goal of making mankind more rational seems to be in irreconcilable conflict with the goal of making mankind more happiness-self-actualized. The FAI consults the fine print of its programming and decides in favor of self actualized happiness and against rationality.

Replies from: timtyler, cousin_it
comment by timtyler · 2010-09-02T21:06:22.843Z · LW(p) · GW(p)

Please, carry on with the scare quotes. Or maybe don't use a capital F.

Apparently: "Friendly Artificial Intelligence" is a term that was coined by researcher Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence as a term of art distinct from the everyday meaning of the word "friendly". However, nobody seems to be terribly clear about exactly what it means. If you were hoping to pin that down using a consensus, it looks as though you may be out of luck.

comment by cousin_it · 2010-09-02T21:21:24.719Z · LW(p) · GW(p)

As an aside, I wonder how Eliezer's FAI is going to decide whether to use eugenics. Using the equivalent of worldwide vote doesn't look like a good idea to me.

Replies from: Perplexed
comment by Perplexed · 2010-09-02T21:34:36.507Z · LW(p) · GW(p)

How about purely voluntary choice of 'designer babies' for your own reproduction, within guidelines set by worldwide vote? Does that sound any more like a good idea? Frankly, it doesn't seem all that scary to me, at least not as compared with other directions that the FAI might want to take us.

Replies from: cousin_it
comment by cousin_it · 2010-09-02T21:38:40.034Z · LW(p) · GW(p)

I agree that eugenics is far from the scariest thing FAI could do.

Not sure about designer babies, I don't have any gut reaction to the issue, and a serious elicitation effort will likely cause me to just make stuff up.

comment by a_parent · 2010-09-02T16:13:43.088Z · LW(p) · GW(p)

Yvain wrote:

Only now neuroscientists are starting to recognize a difference between "reward" and "pleasure", or call it "wanting" and "liking"... A University of Michigan study analyzed the brains of rats eating a favorite food. They found separate circuits for "wanting" and "liking", and were able to knock out either circuit without affecting the other (it was actually kind of cute - they measured the number of times the rats licked their lips as a proxy for "liking", though of course they had a highly technical rationale behind it). When they knocked out the "liking" system, the rats would eat exactly as much of the food without making any of the satisfied lip-licking expression, and areas of the brain thought to be correlated with pleasure wouldn't show up in the MRI. Knock out "wanting", and the rats seem to enjoy the food as much when they get it but not be especially motivated to seek it out.

Replies from: gwern
comment by gwern · 2010-09-02T17:47:31.003Z · LW(p) · GW(p)

That's interesting. Hadn't seen that. So you are suggesting that addiction as we know it for drugs etc. is going through the 'wanting' circuit, but wireheading would go through the 'liking' circuit, and so wouldn't resemble the former?

Replies from: a_parent
comment by a_parent · 2010-09-02T23:07:21.183Z · LW(p) · GW(p)

Yvain's post suggested it; I just stuck it in my cache.

comment by pjeby · 2010-09-02T16:16:57.272Z · LW(p) · GW(p)

what else could addiction be motivated by but pleasure?

Wanting is not the same thing as pleasure. The experiments that created the popular conception of wireheading were not actually stimulating the rats' pleasure center, only the anticipation center.

Consider that there are probably many things you enjoy doing when you do them, but which you are not normally motivated to do. (Classic example: I live in Florida, but almost never go to the beach.)

Clearly, pleasure in the sense of enjoying something is not addictive. If you stimulated the part of my brain that enjoys the beach, it would not result in me perpetually pushing the button in order to continue having the pleasure.

Frankly, I suspect that if somebody invented a way to use TMS or ultrasonics to actually stimulate the pleasure center of the brain, most people would either use them once or twice and put them on the shelf, or else just use them to relax a bit after work.

Weirdly enough, most true pleasures aren't really addictive, because you need some sort of challenge to seize the interest of your dopamine reward system. Chaotic relationships, skill development (incl. videogames), gambling... these things are addictive precisely because they're not purely pleasurable, and this stimulates the same parts of the brain that get hit by wireheading and some drugs.

To put it another way, the rats kept pushing the button not because it gave them pleasure, but simply because it stimulated the part of their brain that made them want to push the button more. The rats probably died feeling like they were "just about to" get to the next level in a video game, or finally get back with their estranged spouse, or some other just-out-of-reach goal, rather than in orgasmic bliss.

comment by simplicio · 2010-09-01T23:53:49.016Z · LW(p) · GW(p)

It seems to me to be the same position.

Hm... not obviously so. Any reductionist explanation of happiness from any source is going to end up mentioning hormones & chemicals in the brain, but it doesn't follow that wanting happiness (& hence wanting the attendant chemicals) = wanting to wirehead.

I struggle to articulate my objection to wireheading, but it has something to do with the shallowness of pleasure that is totally non-contingent on my actions and thoughts. It is definitely not about some false dichotomy between "natural" and "artificial" happiness; after all, Nature doesn't have a clue what the difference between them is (nor do I).

Replies from: gwern
comment by gwern · 2010-09-02T12:56:51.389Z · LW(p) · GW(p)

It is definitely not about some false dichotomy between "natural" and "artificial" happiness; after all, Nature doesn't have a clue what the difference between them is (nor do I).

Certainly not, but we do need to understand utility functions and their modification; if we don't, then bad things might happen. For example (I steal this example from EY), a 'FAI' might decide to be Friendly by rewiring our brains to simply be really really happy no matter what, and paperclip the rest of the universe. To most people, this would be a bad outcome, and is an intuitive argument that there are good and bad kinds of happiness, and the distinctions probably have something to do with properties of the external world.

comment by xamdam · 2010-08-31T16:39:11.660Z · LW(p) · GW(p)

I'm not going to claim having children is "rational", but to judge it by the happiness of "caring for children" is about the same as judging the quality of food by the enjoyment of doing the dishes. This is very one-dimensional.

Moreover, I actually think it's foolish to use any kind of logical process (such as reading this study) to make decisions in this area, except in extreme circumstances such as not having enough money or having genetic diseases.

The reason for my attitude is that I think, besides the upsides to having kids (there are many, if you're lucky), there is a huge aspect of regret minimization involved; it seems to me Nature chose a stick rather than a carrot here.

ETA: I should perhaps say a short-term carrot and a long-term stick.

comment by RobinZ · 2010-08-31T16:06:58.355Z · LW(p) · GW(p)

I wasn't proposing that parenthood is a joy - I may have misunderstood what SilasBarta meant by "utility function places positive weight".

Replies from: SilasBarta
comment by SilasBarta · 2010-09-01T15:55:54.416Z · LW(p) · GW(p)

"Utility function of agent A places positive weight on X" is equivalent to "A regards X as a terminal value".

comment by Alicorn · 2010-08-31T12:41:38.598Z · LW(p) · GW(p)

Parfait's Hitchhiker

Now I'm trying to figure out how a parfait could drive a car.

Replies from: gwern, Pavitra
comment by gwern · 2010-08-31T13:26:57.453Z · LW(p) · GW(p)

Deliciously.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-01T14:01:37.917Z · LW(p) · GW(p)

From the Simpsons: "We would also have accepted 'snacktacularly'."

(For our non-native readers: snacktacular = snack + spectacular.)

comment by SilasBarta · 2010-08-31T17:55:22.321Z · LW(p) · GW(p)

Natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker. The mind only gets to the "decide to pay"/"decide to care for children" if it had the right decision theory before the "rescue"/"copy to next generation".

Does it look similar now?

Replies from: RobinZ, pjeby
comment by RobinZ · 2010-09-01T13:16:05.694Z · LW(p) · GW(p)

I see the parallelism. If you ask me, though, I would say that it's not a Parfitian filter, but a prototypical example of a filter to demonstrate that the idea of a filter is valid.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-01T13:55:44.464Z · LW(p) · GW(p)

What's the difference?

Replies from: RobinZ
comment by RobinZ · 2010-09-01T14:11:37.301Z · LW(p) · GW(p)

Perhaps I am being obtuse. Let me try to articulate a third filter, and get your reasoning on whether it is Parfitian or not.

As it happens, there exist certain patterns in nature which may be reliably counted upon to correlate with decision-theory-relevant properties. One example is the changing color of ripening fruit. Now, species with decision theories that attribute significance to these patterns will be more successful at propagating than those that do not, and therefore will be more widespread. This is a filter. Is it Parfitian?

Replies from: SilasBarta
comment by SilasBarta · 2010-09-01T14:28:40.930Z · LW(p) · GW(p)

No, because a self-interested agent could regard it as optimal to judge based on that pattern by only looking at causal benefits (CaMELs) to itself. In contrast, an agent could only regard it as optimal to care for offspring (to the extent we observe in parents) based on considering SAMELs, or having a utility function contorted to the point that its actions could more easily be explained by reference to SAMELs.

Replies from: RobinZ
comment by RobinZ · 2010-09-01T16:07:22.595Z · LW(p) · GW(p)

Let me try to work this out again, from scratch. A Parfit's Hitchhiker scenario involves the following steps, in order:

  1. Omega examines the agent.
  2. Omega offers the agent the deal.
  3. The agent accepts the deal.
  4. Omega gives the agent utility.
  5. The agent gives Omega utility.

Parenthood breaks this chain in two ways: first, the "Omega" in step 2 is not the "Omega" in step 4, and neither of these are the "Omega" in step 5; and second, step 1 never occurs. Remember, "natural selection" isn't an agent - it's a process, like supply and demand, that necessarily happens.

Consider, for contrast, division of labor. (Edit: The following scenario is malformed. See followup comment, below.) Let's say that we have Ag, the agent, and Om, the Omega, in the EEA. Om wants to hunt, but Om has children.

  1. Om examines Ag and comes to the conclusion that Ag will cooperate.
  2. Om asks Ag to watch Om's children while on the hunt, in exchange for a portion of the proceeds.
  3. Ag agrees.
  4. Ag watches Om's children while Om hunts.
  5. Om returns successful, and gives Ag a share of the bounty.

Here, all five steps occur in order, Om is Om throughout and Ag is Ag throughout, and both Om and Ag gain utility (meat, in this case) by the exchange.

Does that clarify our disagreement?

Replies from: SilasBarta
comment by SilasBarta · 2010-09-01T16:37:49.272Z · LW(p) · GW(p)

Does that clarify our disagreement?

Somewhat, but I'm confused:

  • Why does it matter that the Omegas are different? (I dispute that they are, but let's ignore that for now.) The parallel only requires functional equivalence to "whatever Omega would do", not Omega's identity persistence. (And indeed Parfit's other point was that the identity distinction is less clear than we might think.)
  • Why does it matter that natural selection isn't an agent? All that's necessary is that it be an optimization process -- Omega's role in the canonical PH would be no different if it were somehow specified to "just" be an optimization process rather than an agent.
  • What is the purpose of the EEA DoL example? It removes a critical aspect of PH and Parfitian filters -- that optimality requires recognition of SAMELs. Here, if Ag doesn't watch the children, Om sees this and can withhold the share of the bounty. If Ag could only consider CaMELs (and couldn't have anything in its utility function that sneaks in recognition of SAMELs), Ag would still see why it should care for the children.
  • (Wow, that's a lot of abbreviations...)
Replies from: RobinZ
comment by RobinZ · 2010-09-01T21:24:52.032Z · LW(p) · GW(p)

Taking your objections out of order:

First: yes, I have the scenario wrong - correct would be to switch Ag and Om, and have:

  1. Om examines Ag and comes to the conclusion that Ag will cooperate.
  2. Om offers to watch Ag's children while Ag hunts, in exchange for a portion of the proceeds.
  3. Ag agrees.
  4. Om watches Ag's children while Ag hunts.
  5. Ag returns successful, and gives Om a share of the bounty.

In this case, Om has already given Ag utility - the ability to hunt - on the expectation that Ag will give up utility - meat - at a later time. I will edit in a note indicating the erroneous formulation in the original comment.

Second: what we are comparing are cases where an agent gives no utility to cooperating with Omega, but uses a decision theory that does so because it boosts the agent's utility (e.g. the prototypical case) and cases where the agent gives positive utility to cooperating with Omega (e.g. if the agent and Omega were the same person and the net change is sufficiently positive). What we need to do to determine if the isomorphism with Parfit's hitchhiker is sufficient is to identify a case where the agent's actions will differ.

It seems to me that in the latter case, the agent will give utility to Omega even if Omega never gives utility to the agent. Parfit's hitchhikers do not give money to Nomega, the predictor agent who wasn't at the scene and never gave them a ride - they only give money when the SAMEL is present. Therefore: if a parent is willing to make sacrifices when their parent didn't, the Parfit parallel is poor and Theory 2a is the better fit. Agreed?
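
(To be explicit about where I think the predictions differ, here is a toy rendering of that test; the labels and structure are mine, purely illustrative:)

```python
# The two theories come apart on Nomega, the predictor who never actually
# gave you a ride.

def pays(theory, benefactor_helped):
    if theory == "2a (terminal value on paying/caring)":
        return True                   # pays whether or not any help occurred
    if theory == "2b (acts on the SAMEL)":
        return benefactor_helped      # pays Omega, stiffs Nomega
    raise ValueError(theory)

for theory in ("2a (terminal value on paying/caring)", "2b (acts on the SAMEL)"):
    print(theory, "| Omega:", pays(theory, True), "| Nomega:", pays(theory, False))
```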

Replies from: SilasBarta
comment by SilasBarta · 2010-09-01T22:19:32.337Z · LW(p) · GW(p)

I'm not sure I understand all the steps in your reasoning, but I think I can start by responding to your conclusion:

Therefore: if a parent is willing to make sacrifices when their parent didn't, the Parfit parallel is poor and Theory 2a is the better fit. Agreed?

As best I can understand you, yes. If there's, e.g., a species that does not care for its young, and then one day one of them does, that action would not be best explained by its recognition (or acting as if it had recognition) of a SAMEL (because there was no "AM") -- it would have to be chalked up to some random change in its psychology.

However -- and this is the important part -- by making that choice, and passing the genes partly responsible for that choice into the next generation, it opens up the possibility of exploring a new part of the "organism design space": the part which is improved by modifications predicated on some period of parent-child care [1].

If that change, and further moves into that attractor [2], improve fitness, then future generations will care for their children, with the same psychological impetus as the first one. They feel as if they just care about their children, not that they have to act on a SAMEL. However, 2b remains a superior explanation because it makes fewer assumptions (apart from requiring the organism to first have the mutation, which is part of the filter); 2b needn't assume that the welfare of the child is a terminal value.

And note that the combined phenomena do produce functional equivalence to recognition of a SAMEL. If the care-for-children mode enhances fitness, then it is correct to say, "If the organism in the n-th generation after the mutation did not regard it as optimal to care for the (n+1)th generation, it would not be here", and it is correct to say that that phenomenon is responsible for the organism's decision (such as it is) to care for its offspring. Given these factors, an organism that chooses to care for its offspring is acting equivalently to one motivated by the SAMEL. Thus, 2b can account for the same behavior with fewer assumptions.

As for the EEA DoL arrangement (if the above remarks haven't screened off the point you were making with it): Om can still, er, withhold the children. But let's ignore that possibility on grounds of Least Convenient Possible World. Even so, there are still causal benefits to Ag keeping up its end -- the possibility of making future such arrangements. But let's assume that Ag can still come out ahead by stiffing Om.

In that case, yes, Ag would have to recognize SAMELs to justify paying Om. I'd go on to make the normal point about Ag having already cleaved itself off into the world where there are fewer Om offers if it doesn't see this SAMEL, but honestly, I forgot the point behind this scenario so I'll leave it at that.

(Bitter aside: I wish more of the discussion for my article was like this, rather than being 90% hogged by unrelated arguments about PCT.)

[1] Jaron Lanier refers to this replication mode as "neoteny", which I don't think is the right meaning of the term, but I thought I'd mention it because he discussed the importance of a childhood period in his manifesto that I just read.

[2] I maybe should have added in the article that the reasoning "caring for children = good for fitness" only applies to certain path-dependent domains of attraction in the design space, and doesn't hold for all organisms.

Replies from: RobinZ
comment by RobinZ · 2010-09-01T23:01:40.806Z · LW(p) · GW(p)

This may not be my true objection (I think it is abundantly clear at this point that I am not adept at identifying my true objections), but I just don't understand your objection to 2a. As far as I can tell, it boils down to "never assume that an agent has terms in its utility functions for other agents", but I'm not assuming - there is an evolutionary advantage to having a term in your utility function for your children. By the optimization criteria of evolution, the only reason not to support a child is if you are convinced that the child is either not related or an evolutionary dead-end (at which point it becomes "no child of mine" or some such). In contrast, the Parfit-hitchhiker mechanism involves upholding contracts, none of which your child offered, and therefore seems an entirely unrelated mechanism at the level of the individual organism.

(Regarding my hypothetical, I was merely trying to demonstrate that I understood the nature of the hypothetical - it has no further significance.)

Replies from: SilasBarta
comment by SilasBarta · 2010-09-01T23:20:38.395Z · LW(p) · GW(p)

your objection to 2a. As far as I can tell, it boils down to "never assume that an agent has terms in its utility functions for other agents",

No, my objection is: "never assume more terminal values (terms in UF) than necessary", and I've shown how you can get away with not assuming that parents terminally value their children -- just as a theoretical exercise of course, and not to deny the genuine heartfelt love that parents have for their children.

but I'm not assuming - there is an evolutionary advantage to having a term in your utility function for your children.

There is an evolutionary advantage to having a cognitive system that outputs the action "care for children even at cost to self". At a psychological level, this is accomplished by the feelings of "caring" and "love". But is that love due to a utility-function weighting, or to a decision theory that (acts as if it) recognizes SAMELs? The mere fact of the psychology, and of the child-favoring acts, does not settle this. (Recall the problem of how an ordering of outcomes can be recast as any combination of utility weightings and probabilities.)

You can account for the psychological phenomenon more parsimoniously [1] by assuming the action results from choice-machinery that implicitly recognizes SAMELs -- and on top of that, get a bonus explanation of why a class of reasoning (moral reasoning) feels different -- it's the kind that mustn't be convinced by the lack of a causal benefit to the self.

In contrast, the Parfit-hitchhiker mechanism involves upholding contracts, none of which your child offered, and therefore seems an entirely unrelated mechanism at the level of the individual organism.

My version is precisely written to exclude contracts -- the ideal PH inferences still go through, and so natural selection (which I argue is a PF) is sufficiently similar. If the genes don't "attach" themselves to a child-favoring decision theory, they simply don't get "rescued" into the n-th generation of their existence. No need to find an isomorphism to a contract.

[1] Holy Shi-ite -- that's three p-words with a different initial consonant sound!

Replies from: RobinZ
comment by RobinZ · 2010-09-02T00:49:14.223Z · LW(p) · GW(p)

Why does the cognitive system that identifies SAMELs fire when you have a child? The situation is not visibly similar to that of Parfit's hitchhiker. Unless you are suggesting that parenthood simply activates the same precommitment mechanism that the decision theory uses when Parfit-hitchhiking...?

Replies from: SilasBarta
comment by SilasBarta · 2010-09-02T04:39:18.771Z · LW(p) · GW(p)

I don't understand the point of these questions. You're stuck with the same explanatory difficulties with the opposite theory: why does the cognitive system that identifies _changes in utility function_ fire when you have a child? Does parenthood activate the same terminal values that a PH survivor does upon waking up?

Replies from: Perplexed, RobinZ
comment by Perplexed · 2010-09-02T04:53:13.035Z · LW(p) · GW(p)

A utility function need not change when a child is born. After all, a utility function is a mapping from states-of-the-world to utilities and the birth of a child is merely a change in the state of the world.

Nonetheless, utility mapping functions can change as a result of information which doesn't betoken a change in the state-of-the-world, but merely a change in your understanding of your own desires. For example, your first taste of garlic ice cream. Or, more to the point, new parents sometimes report dramatic changes in outlook simply from observation of their baby's first smile. The world has not changed, but somehow your place within it has.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-02T17:09:37.203Z · LW(p) · GW(p)

See sibling reply to Robin. How are you showing an explanatory advantage to attributing the behavior to utility functions rather than SAMEL recognition? (Or what were you otherwise trying to establish?)

Replies from: Perplexed
comment by Perplexed · 2010-09-02T21:09:45.266Z · LW(p) · GW(p)

How are you showing an explanatory advantage to attributing the behavior to utility functions rather than SAMEL recognition? (Or what were you otherwise trying to establish?)

I wasn't trying to show an advantage. You asked a question about my preferred explanatory framework. I interpreted the question to be something like, "How does the birth of a child trigger a particular special cognitive function?". My answer was that it doesn't. The birth of a baby is a change in the state of the world, and machinery for this (Bayesian updating) is already built in.

If you insist that I show an explanatory advantage, I would make two (not intended to be very convincing!) points:

  • "Occam's razor" suggests that I shouldn't introduce entities (SAMELs, in this case) that I don't really need.
  • "Perplexed's tweezers" suggests that I shouldn't put too much trust in explanations (SAMELs, in this case) that I don't really understand.
Replies from: SilasBarta, thomblake
comment by SilasBarta · 2010-09-02T21:51:09.531Z · LW(p) · GW(p)

Okay, but if your preferred explanatory framework is strictly worse per the MML formalism (equivalent to rationalist Occam's razor), then that would be a reason that my explanation is preferred.

You claim that my explanation fails by this metric:

"Occam's razor" suggests that I shouldn't introduce entities (SAMELs, in this case) that I don't really need.

However, the two theories we're deciding between (2a and 2b) don't explicitly involve SAMELs in either case. [1]

Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.
Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.

The only entity in 2b that is not in 2a is the claim that parents are limited to implementing decision theories capable of surviving natural selection. But as I said in footnote 2, this doesn't penalize it under Occam's Razor, because that must be assumed in both cases, so there's no net penalty for 2b -- implications of existing assumptions do not count toward the complexity/length of your explanation (for reasons I can explain in greater depth if you wish).
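
To make the comparison concrete, here's a minimal toy sketch -- the payoff numbers and the selection step are invented purely for illustration -- of how 2b's selfish utility function, combined with the constraint that only certain decision theories survive the filter, outputs the same observable child-care behavior that 2a attributes to an extra terminal value:

```python
# Toy comparison of Theory 2a and Theory 2b (illustrative numbers only).

COST_TO_SELF = 2.0    # hypothetical cost (to the parent) of caring for a child
CHILD_BENEFIT = 5.0   # hypothetical benefit to the child

def utility_2a(cares_for_child):
    """Theory 2a: the parent's utility function weights both self and child."""
    self_term = -COST_TO_SELF if cares_for_child else 0.0
    child_term = CHILD_BENEFIT if cares_for_child else 0.0
    return self_term + child_term  # extra terminal value placed on the child

def utility_2b(cares_for_child):
    """Theory 2b: the parent's utility function weights only the self."""
    return -COST_TO_SELF if cares_for_child else 0.0

def selfish_cdt(utility):
    """A decision theory that counts only causal, future benefits."""
    return max([True, False], key=utility)

def parfit_filtered_dt(utility):
    """A decision theory of the kind that survives the filter: it treats
    caring as optimal despite the causal cost, because only child-caring
    lineages reach the current generation (a SAMEL)."""
    return True

def survives_selection(decision_theory, utility):
    """Natural selection as the 'Omega': only lineages whose decision theory
    outputs child care get copied into the next generation."""
    return decision_theory(utility) is True

# 2a: child care follows directly from the utility function.
assert selfish_cdt(utility_2a) is True

# 2b: the selfish utility function alone would not care...
assert selfish_cdt(utility_2b) is False
# ...but only the Parfit-filtered decision theory makes it through selection,
# so the parents we actually observe all care for their children.
assert not survives_selection(selfish_cdt, utility_2b)
assert survives_selection(parfit_filtered_dt, utility_2b)
print("Observed behavior is identical; 2b posits fewer terminal values.")
```

The point of the sketch is only that the observable behavior underdetermines the utility function -- which is why 2b's shorter list of terminal values wins on description length.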

But to be honest, I'm losing track of the point being established by your objections (for which I apologize), so I'd appreciate it if you could (for my sake) explicitly put them back in the context of the article and this exchange.

[1] Before you glare in frustration at my apparent sudden attempt to throw SAMELs under the bus: the thesis of the article does involve SAMELs, but at that point, it's either explaining more phenomena (i.e. psychology of moral intuitions), or showing the equivalence to acting on SAMELs.

Replies from: Perplexed
comment by Perplexed · 2010-09-02T22:16:10.999Z · LW(p) · GW(p)

You claim that my explanation fails by this metric:

"Occam's razor" suggests that I shouldn't introduce entities (SAMELs, in this case) that I don't really need.

However, the two theories we're deciding between (2a and 2b) don't explicitly involve SAMELs in either case. [1]

Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.

Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.

Ok, I accept your argument that Occam is neutral between you and me. SAMELs aren't involved at decision time in 2b, just as "inclusive fitness" and "Hamilton's rule" aren't involved at decision time in 2a.

I will point out though, since we are looking only at the present, that the utility function in 2a can, in principle, be examined using "revealed preference", whereas your purely selfish child-neutral utility function is a theoretical construct which would be hard to measure, even in principle.

Without Occam, I have to fall back on my second objection, the one I facetiously named "Perplexed's tweezers". I simply don't understand your theory well enough to criticize it. Apparently your decision theory (like my offspring-inclusive utility function) is installed by natural selection. Ok, but what is the decision theory you end up with? I claim that my evolution-installed decision theory is just garden-variety utility maximization. What is your evolution-installed decision theory?

If you made this clear already and I failed to pick up on it, I apologize.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-02T23:06:47.235Z · LW(p) · GW(p)

Ok, I accept your argument that Occam is neutral between you and me.

Hold on -- that's not what I said. I said that it was neutral on the issue of including "they can only use decision theories that could survive natural selection". I claim it is not neutral on the supposition of additional terms in the utility function, as 2a does.

SAMELs aren't involved at decision time in 2b, just as "Inclusive fitness" and "Hamilton's rule" aren't involved at decision time in 2a.

It doesn't matter. They (inclusive fitness and Hamilton's rule) have to be assumed (or implied by something that has to be assumed) anyway, because we're dealing with people, so they'll add the same complexity to both explanations.

I will point out though, since we are looking only at the present, that the utility function in 2a can, in principle, be examined using "revealed preference", whereas your purely selfish child-neutral utility function is a theoretical construct which would be hard to measure, even in principle.

As I've explained to you several times, looking at actions does not imply a unique utility function, so you can't claim that you've measured it just by looking at their actions. The utility functions "I care about myself and my child" and "I care about myself" can produce the same actions, as I've demonstrated, because certain (biologically plausible) decision theories can output the action "care for child at expense of self", even in the absence of a causal benefit to the self.

I simply don't understand your theory well enough to criticize it. ... what is the decision theory you end up with?

It is a class of DTs, the kind that count acausal benefits (SAMELs) on par with causal ones. The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.

However, I could be more helpful if you asked specific questions about specific passages. Previously, you claimed that after reading it, you didn't see how natural selection is like Omega, even after I pointed to the passage. That made me a sad panda.

You more than made up for it with the Parfit's robot idea, though :-)

Replies from: Perplexed
comment by Perplexed · 2010-09-02T23:54:29.851Z · LW(p) · GW(p)

We are clearly talking past each other, and it does not seem to me that it would be productive to continue.

For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to "revealed preference". You have never acknowledged my response, but continue claiming that you have explained it to me.

I simply don't understand your theory well enough to criticize it. ... what is the decision theory you end up with?

It is a class of DTs, the kind that count acausal benefits (SAMELs) on par with causal ones.

I have to interpret that as a policy of using some other kind of "surgery" for counterfactuals. Something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become, "So, what kind of surgery do you advocate?" and "How do you know when to use this strange surgery rather than the one Pearl suggests?".

The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.

That sentence may mean something to you, but I can't even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.

... You more than made up for it with the Parfit's robot idea, though.

It wasn't my idea. It was timtyler's. Maybe you will have better luck explaining your ideas to him. He was patient enough to explain the robot to me twice.

Replies from: timtyler, SilasBarta
comment by timtyler · 2010-09-03T07:22:06.495Z · LW(p) · GW(p)

Too many SAMELs and CAMELs for me. I didn't even get as far as seeing the analogy between natural selection and Omega. However, unlike you, I thought: this doesn't sound very interesting; I can't be bothered. Retrospectively, I do now get the bit in the summary - if that is what it is all about. I could probably weigh in on how parental care works in mammals - but without absorbing all the associated context, I doubt I would be contributing positively.

Thanks for the robot credit. It doesn't feel like my idea either. After some hanging around Yudkowsky, it soon becomes clear that most of the material about decision theory here is partly in the context of a decision theory for machine intelligence - so substituting in a machine seems very natural.

Anyway, we don't want you on too different a page - even if it does produce nice stories about the motivations of stranded hitch-hikers.

comment by SilasBarta · 2010-09-03T01:38:21.652Z · LW(p) · GW(p)

For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to "revealed preference". You have never acknowledged my response, but continue claiming that you have explained it to me.

You have certainly posted responses; I don't recall you saying anything responsive, though, i.e. something that would establish that seeing someone's actions suffices to identify a unique (enough) utility function, at least in this case -- and I can show you more of the difficulties of such a task, if you would like. But yes, please point me to where you think you've said something responsive, in the sense I just defined.

I have to interpret that as a policy of using some other kind of "surgery" for counterfactuals. Something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become, "So, what kind of surgery do you advocate?" and "How do you know when to use this strange surgery rather than the one Pearl suggests?".

Nothing I've described requires doing anything differently than Pearl's kind of counterfactual surgery. For example, see EY's exposition of Timeless Decision Theory, which does standard CF surgery but differs in how it calculates probabilities on results given a particular surgery, for purposes of calculating expected utility.

And that's really the crux of it: The trick in TDT -- and explaining human behavior with SAMELs -- is that you can keep the same (genuinely) terminal values, but have a better chance of achieving them if you change the probability weighting, and change it in a way that assigns more expected utility to SAMEL-based actions.

Those probabilities are more like beliefs than values. And as another poster demonstrated a while back, you can take any agent's decision ranking, and claim it was from various different value/belief combinations. For example, if someone reaches for an apple instead of reaching for an orange, you can say, consistently with this observation, that:

  • they prefer the apple to the orange, and believe they have a 100% chance of getting what they reach for (pure value-based decision)
  • they are indifferent between the apple and the orange, but believe that they have a higher chance of getting the reached-for fruit by reaching for the apple (pure belief-based decision)
  • or anything in between.

TDT, then, doesn't need to posit additional values (like "honor") -- it just changes its beliefs about the probabilities. Agents acting on SAMELs do the same thing, and I claim this leads to a simpler description of behavior.
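
To make the value/belief decomposition concrete, here's a toy sketch (all utilities and probabilities invented for illustration) showing that the same observed rankings -- apple over orange, and paying Omega over stiffing it -- can be generated either by loading the utility function or by loading the probabilities:

```python
# Toy decomposition: the same choice can come from values or from beliefs.
# All utilities and probabilities below are invented for illustration.

def expected_utility(p_success, u_success, u_failure=0.0):
    return p_success * u_success + (1 - p_success) * u_failure

# Reaching for the apple rather than the orange:
value_based = {
    "apple":  expected_utility(p_success=1.0, u_success=2.0),  # prefers apple,
    "orange": expected_utility(p_success=1.0, u_success=1.0),  # certain beliefs
}
belief_based = {
    "apple":  expected_utility(p_success=0.9, u_success=1.0),  # indifferent values,
    "orange": expected_utility(p_success=0.6, u_success=1.0),  # unequal beliefs
}
assert max(value_based, key=value_based.get) == max(belief_based, key=belief_based.get)

# Parfit's Hitchhiker, same trick: instead of adding an "honor" term to the
# utility function, the SAMEL-counting agent assigns different probabilities
# of having been rescued at all, conditional on being the kind of agent that pays.
U_ALIVE, PAYMENT = 1000.0, 0.01
pay_ev    = expected_utility(p_success=1.0, u_success=U_ALIVE - PAYMENT)
no_pay_ev = expected_utility(p_success=0.0, u_success=U_ALIVE)  # non-payers are never rescued
assert pay_ev > no_pay_ev
```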

The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.

That sentence may mean something to you, but I can't even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.

I can answer that, but I should probably just explain the confusing distinctions: From the inside, it is the feeling (like "love") that is psychologically responsible for the agent's decision. My point is that this "love" action is identical to what would result from deciding based on SAMELs (and not valuing the loved one), even though it feels like love, not like identifying a SAMEL.

So, in short, the agent feels the love, the love motivates the behavior (psychologically); and, as a group, the set of feelings explainable through SAMELs feel different than other kinds of feelings.

Replies from: Perplexed
comment by Perplexed · 2010-09-03T03:42:38.890Z · LW(p) · GW(p)

In my haste to shut this conversation down, I have written a falsehood. Allow me to correct it, and then, please, let us stop.

Regarding "revealed preference, you ask where I previously responded to you. Here it is. It is not nearly as complete a response as I had remembered. In any case, as I read through what we each have written regarding "revealed preference", I find that not only do we disagree as to what the phrase means, I suspect we also are both wrong. This "revealed preference" dispute is such a mess that I really don't want to continue it. I apologize for claiming I had corrected you, when actually I had only counter-asserted.

comment by thomblake · 2010-09-02T21:44:19.577Z · LW(p) · GW(p)

I like the tweezers, but would like a better name for it.

comment by RobinZ · 2010-09-02T06:31:08.664Z · LW(p) · GW(p)

As Perplexed said, there is no requirement that the utility function change - and, in fact, no reason to believe that it does not already have positive terms for children before reproduction. A lot of people report wanting children.

I'm asking these questions because we clearly have not established agreement, and I want to determine why. I assume that either we are using conflicting data, applying incompatible rules of inference, or simply misreading each other's writing. It was this last possibility I was probing with that last question.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-02T17:07:24.831Z · LW(p) · GW(p)

As Perplexed said, there is no requirement that the utility function change - and, in fact, no reason to believe that it does not already have positive terms for children before reproduction. A lot of people report wanting children.

Okay, but by the same token, there's no need to assume recognition of the SAMEL (that favors producing and caring for children) changes. (And if it matters, a lot of people report not wanting children, but then wanting to care for their children upon involuntary parenthood.)

None of the things you're pointing out seem to differentiate the utility function-term explanation from the SAMEL-recognition explanation.

Replies from: RobinZ
comment by RobinZ · 2010-09-02T17:48:02.448Z · LW(p) · GW(p)

(And if it matters, a lot of people report not wanting children, but then wanting to care for their children upon involuntary parenthood.)

That's a test that favors the SAMEL explanation, I think.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-02T19:03:59.358Z · LW(p) · GW(p)

So you're agreeing with me in this one respect? (I don't mean to sound confrontational, I just want to make sure you didn't reverse something by accident.)

Replies from: RobinZ, RobinZ
comment by RobinZ · 2010-09-04T14:39:41.511Z · LW(p) · GW(p)

Right - here's what I've got.

The pattern of "not wanting children, but then wanting to spend resources to care for the children" is better explained by a SAMEL pattern than by a utility function pattern. The fact of people wanting children can be sufficiently explained by the reasons people give for wanting children: a desire for a legacy, an expected sense of fulfillment from parenthood, etcetera. Finally, the fact that this is a SAMEL pattern doesn't mean that the adaptation works on SAMEL patterns - the ability of Parfit's hitchhiker to precommit to paying Omega is a separate adaptation from the childrearing instinct.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-05T15:40:19.142Z · LW(p) · GW(p)

I'm still not following:

  • How does "not wanting children, but then wanting to spend resources to care for the children" involve SAMELs in a way that wanting to have children does not?
  • Yes, you can explain people's pursuit of goals by the reasons they give. The problem is that this isn't the best explanation. As you keep adding new terminal values to explain the actions, you complicate the explanation. If you can do without these -- and I think I've shown you can -- you're left with a superior explanation.
  • The fact that it feels like "pursuing a legacy" on the inside does not favor that being the superior explanation. Remember, the desire to pay Omega in PH feels like gratefulness on the inside -- like the Omega has some otherwise inherent deservedness of receiving the payment. But in both cases, "If the survivor did not regard it as optimal to pay, the survivor would not be here", and the SAMEL explanation only requires that humans have choice-machinery that favors acting on these (already given) facts.
  • There is no pre-commitment on the part of human hitchhikers in the sense that they are inextricably bound to pay -- they are still making a choice, even though selection has been applied on the set of hitchhikers. It is not their precommitment that leads them to pay, but their choice-machinery's having alerted them to the optimality of doing so -- which feels like gratefulness.
Replies from: RobinZ
comment by RobinZ · 2010-09-05T22:22:55.429Z · LW(p) · GW(p)
  • My tongue detects sweetness, not healthfulness, even though the process that created the adaptation was designed to recognize healthfulness.
  • Not everyone wants children, as you said. It is not evolutionarily necessary for people to want children - only for them to want sex. That anyone wants children might be explained by the reasons they actually give for wanting children.
  • See above.
  • I am not invested in the word "precommitment" - we are describing the same behavior on the part of the hitchhiker.
Replies from: SilasBarta
comment by SilasBarta · 2010-09-07T14:36:32.333Z · LW(p) · GW(p)

My tongue detects sweetness, not healthfulness, even though the process that created the adaptation was designed to recognize healthfulness.

This is the crux of the matter - desire for energy-dense consumables was selected for because quickly gathering energy was adaptive. It feels like sweetness from the inside: this is the region of qualiaspace that corresponds to feeling a motivation to act on that means-ends link. It does not feel like quickly gathering energy. Similarly, being motivated by SAMELs needn't feel like such a recognition -- it feels like an "otherwise-ungrounded inherent deservedness of others of being treated well" (or badly).

Not everyone wants children, as you said. It is not evolutionarily necessary for people to want children - only for them to want sex. That anyone wants children might be explained by the reasons they actually give for wanting children. I am not invested in the word "precommitment" - we are describing the same behavior on the part of the hitchhiker.

Okay, reviewing your point, I have to partially agree -- general desire to act on SAMELs need not be (and probably isn't) the same choice machinery that motivates specific child-bearing acts. The purpose of the situation was to show how you can account for behavior without complicating the utility function. Rather than additionally positing that someone terminally values their children, we can say that they are self-interested, but that only certain decision theories ever make it to the next generation.

In both cases, we have to rely on "if they did not regard it as optimal to care for their children (and given genetic psychological continuity), they would not be there", but only in 2a must we elevate this caring to a terminal value for purposes of explanation.

Replies from: Leonhart, RobinZ
comment by Leonhart · 2010-09-07T14:59:00.125Z · LW(p) · GW(p)

It feels like sweetness from the inside: this is the region of qualiaspace that corresponds to feeling a motivation to act on that means-ends link.

This is good, but

It does not feel like quickly gathering energy.

is still hiding some confusion (in me, anyway). Why say that it doesn't feel like quickly gathering energy? What would feel like quickly gathering energy?

I'm now imagining a sucking-in-lines quale (warning: TVTropes) lurking in a region of qualia-space only accessible to sentient energy weaponry. And I'm kinda jealous.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-07T15:17:37.906Z · LW(p) · GW(p)

is still hiding some confusion (in me, anyway.) Why say that it doesn't feel like quickly gathering energy?

Getting a nutrient feed via IV doesn't feel like sweetness, but does involve quickly getting energy.

What would feel like quickly gathering energy?

If you had a cognitive system that directly recognized any gain in energy, and credited it as good, for that reason, then you would have a quale that is best described as "feeling like gathering energy". But that requires a whole different architecture.

comment by RobinZ · 2010-09-07T16:14:24.583Z · LW(p) · GW(p)

It sounds like we agree.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-07T16:48:20.538Z · LW(p) · GW(p)

Including about my claim that it provides a more parsimonious explanation of parents' actions not to include concern for their children as a terminal value?

Replies from: RobinZ
comment by RobinZ · 2010-09-07T18:11:29.413Z · LW(p) · GW(p)

Yes - if you expected concern for children to be a terminal value, you would not expect to see adults of breeding age who do not want children. (That is the specific evidence that convinced me.) I don't think I've quite worked out your position on Parfitian hitchhiking, but I don't see any difference between what you claim and what I claim regarding parenthood.

comment by RobinZ · 2010-09-02T19:18:49.366Z · LW(p) · GW(p)

I spoke correctly - I didn't express agreement on the broader issue because I don't want to update too hastily. I'm still thinking.

comment by pjeby · 2010-08-31T18:02:34.535Z · LW(p) · GW(p)

Natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker. The mind only gets to the "decide to pay"/"decide to care for children" if it had the right decision theory before the "rescue"/"copy to next generation".

You should put that in the article. (True, it's a causal iteration rather than an acausal prediction. But it'll still make the article clearer.)

Replies from: SilasBarta
comment by SilasBarta · 2010-08-31T18:12:09.626Z · LW(p) · GW(p)

Thanks for the suggestion, I've added it.

comment by PhilGoetz · 2010-08-31T19:15:57.055Z · LW(p) · GW(p)

Now I want a parfait.

comment by Snowyowl · 2010-08-31T13:43:42.887Z · LW(p) · GW(p)

Consider this situation: You are given the choice between personally receiving a small prize or giving your children a much larger prize. Whatever you choose, it is possible that your children will one day face a similar choice. Being your children, they resemble you in many ways and are more likely than not to choose similarly to you. It's not quite a Parfit's Hitchhiker even from your children's perspective - the consequences of their choice are in the past, not the future - but it's close, and the result is the same.

Replies from: RobinZ
comment by RobinZ · 2010-08-31T16:10:25.838Z · LW(p) · GW(p)

I see what you mean, but I think the parallel is pretty weak.

comment by orthonormal · 2010-08-31T20:58:15.047Z · LW(p) · GW(p)

This post has its flaws, as has been pointed out, but to add the required nuance would make a book (or at least a LW sequence) out of what is currently a good and provocative post.

The nuances I think are vital:

  1. The explicit consequence-representing/world-modeling parts of our minds exist alongside direct "function calls" like reflexes.
  2. The decision-theory framework only imperfectly describes even that CR/WM part of our minds.
  3. It's quite possible (and indeed virtually certain) that evolution has changed the payoff matrix in certain cases, in addition to implementing a SAMEL-friendly decision theory.
  4. It is clear that SAMELs are not generally consciously accepted by average human beings as reasons for action, to virtually any extent. The SAMEL seems really strange at first, and is accepted as valid (let alone as pervasive) only after long reflection. It's not just that we lack an understanding of what we do; it seems that we have a mental block against that part of the understanding.
Replies from: SilasBarta
comment by SilasBarta · 2010-08-31T21:15:26.716Z · LW(p) · GW(p)

Thanks for the comment! I had actually struggled to keep the post from ballooning, and ended up leaving off the part where I applied it to Haidt's work; I then decided I wasn't familiar enough with Haidt to do that part justice anyway.

Re 3 and 4, I thought I made clear that SAMELs are not consciously recognized as such by humans, e.g. in the Drescher quote, and when I mentioned that careful self-reflection came late in evolutionary history. However, they do feel different to act on.

comment by Emile · 2010-08-31T16:07:50.846Z · LW(p) · GW(p)

Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.

Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.

Hmm, so basically evolution started out making creatures with a simple selfish utility function and a straightforward causal decision theory. One improvement would have been to make the decision theory "better" (more like Timeless Decision Theory / Updateless Decision Theory), but instead, since evolution is kinda stupid and short-sighted, it made a quick hack to the utility function to make it cover other creatures too, especially those that are "nice" ... and that's how we got our morality (or at least some bits).

(This is in addition to the "selfish gene" thing, where caring about our children (a lot) and next of kin makes sense for basic evolutionary reasons)

comment by kodos96 · 2010-09-02T07:20:01.186Z · LW(p) · GW(p)

Something just occurred to me - the conclusions you reach in this post and in the version of the post on your blog seem to contradict each other. If moral intuitions really are "the set of intuitions that were selected for because they saw optimality in the absence of a causal link", and if, as you claim on your blog, Parfit's Hitchhiker is a useful model for intellectual property, then why is it that an entire generation of kids... TWO generations really, have grown up now with nearly unanimous moral intuitions telling them there's nothing wrong with "stealing" IP?

Replies from: SilasBarta
comment by SilasBarta · 2010-09-02T19:26:05.038Z · LW(p) · GW(p)

I'm not entirely clear on where the contradiction is, but I will say that there are both genetic and memetic Parfitian filters, and prevailing memes about IP have kept us from "branching off" into a world we might regard as better, and have thereby been Parfit-filtered out.

I don't claim that the content of moral judgments will be the same, of course, just that similar feelings will exist when one makes them, because (for historical reasons) they impose a very different constraint on our reasoning than the kind that purely considers CaMELs.

But again, I would need a better explanation of the contradiction to give a better reply.

Replies from: kodos96
comment by kodos96 · 2010-09-03T02:24:13.514Z · LW(p) · GW(p)

What I was trying to say was that if optimal behavior on PH-style problems is selected for, and if IP is indeed analogous to PH (with respecting IP rights equivalent to paying Omega), then why hasn't evolution resulted in everyone having the moral intuition to respect IP rights? I suppose the obvious retort is that evolution is slow and hasn't had time to catch up with internet-era IP issues... although I'm not really clear here on whether we're talking about genetic or memetic evolution... I guess I'm not really sure what I'm trying to say.... I think I'm just gonna have to revert to my original reaction, which is that PH is just not an effective intuition pump for IP, hence the confusion.

comment by kodos96 · 2010-08-31T20:40:20.461Z · LW(p) · GW(p)

it means it is far too strict to require that our decisions all cause a future benefit; we need to count acausal “consequences” (SAMELs) on par with causal ones (CaMELs)

OK, so this may be a completely stupid question, as I'm a total newbie to decision theoryish issues... but couldn't you work non-zero weighting of SAMELs into a decision theory, without abandoning consequentialism, by reformulating "causality" in an MWIish, anthropic kind of way, in which you say that an action is causally linked to a consequence if it increases the number of worlds in which the consequence exists? Then SAMELs become not-really-acausal, and you could view winning at PH as simply maximising the number of worlds in which your utility is maximised.

I'm probably making some basic errors of terminology (and reasoning) here, as I'm operating well outside of my comfort zone on this subject matter, so if I'm wrong, or not making any sense, please be gentle in explaining why.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-31T20:56:43.246Z · LW(p) · GW(p)

I think my (at the time misdirected) comment here is most responsive to your question. In short, causality has a narrow, technical definition here, which corresponds with wide (but not universal) usage. I see nothing wrong with regarding SAMELs as consequences, or saying that e.g. one-boxing causes the sealed box to be filled, but this is incorrect for standard game-theoretic usage of the terms.

comment by PhilGoetz · 2010-08-31T19:15:10.313Z · LW(p) · GW(p)

Sure. Morals = the part of our utility function that benefits our genes more than us. But is this telling us anything we didn't know since reading The Selfish Gene? Or any problems with standard decision theory? There's no need to invoke Omega, or a new decision theory. Instead of recognizing that you can use standard decision theory, but measure utility as gene copies rather than as a human carrier's qualia, you seem to be trying to find a decision theory for the human that will implement the gene's utility function.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-31T19:40:43.764Z · LW(p) · GW(p)

What I added to decision theory beyond The Selfish Gene's arguments is:

  • An explanation for the psychological mechanisms of moral intuitions -- i.e. why reasoning about moral issues feels different, and why we have such a category.

  • Why you shouldn't take existing and ideal utility functions as being peppered with numerous terminal values (like "honor" and "gratefulness" and "non-backstabbing"), but rather can view them as having few terminal values, attached to agents who pursue them by acting on SAMELs. Thus you have a simpler explanation for existing utility functions, and a simpler constraint to satisfy when identifying your own (or forming your own decision theory, given what you regard as your values).

comment by Will_Sawin · 2010-08-31T15:53:37.673Z · LW(p) · GW(p)

Thinking of it as being limited to using a specific decision theory is incorrect. Instead, it should simply be seen as using a specific decision theory, or one of many. It's not like evolution and such are here right now, guiding your actions. Evolution acts through our genes, which program us to do a specific thing.

Why do the richest people on Earth spend so much time and money helping out the poorest? Is that what a rational agent with a Parfit-winning decision theory would do?

Replies from: taw, wnoise
comment by taw · 2010-09-02T04:04:43.076Z · LW(p) · GW(p)

Why do the richest people on Earth spend so much time and money helping out the poorest?

It's the rapidly diminishing utility of money. Every extra million is just worth less and less. At some point all these millions are worth so little that you might as well give them away, if you place even a very low value on others' well-being.

What else could Buffett spend all his money on now?
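
For what it's worth, a quick toy calculation -- assuming logarithmic utility of wealth, a standard textbook choice rather than anything established here -- shows how fast the marginal value of an extra million falls off:

```python
import math

# Marginal utility of one extra million dollars under log utility of wealth.
# Log utility is a conventional illustrative assumption, not a measured fact.
for wealth_in_millions in (1, 10, 100, 1_000, 50_000):
    marginal = math.log(wealth_in_millions + 1) - math.log(wealth_in_millions)
    print(f"${wealth_in_millions:>6}M -> +$1M adds {marginal:.6f} utils")
# At $50 billion, an extra million adds roughly 0.00002 utils -- tens of
# thousands of times less than it adds for someone with only one million.
```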

comment by wnoise · 2010-08-31T16:49:42.744Z · LW(p) · GW(p)

Why do the richest people on Earth spend so much time and money helping out the poorest?

In general, they don't. The ones that do (Gates, Buffet, etc) get a fair bit of attention for it.

comment by Wei Dai (Wei_Dai) · 2010-08-31T00:44:42.543Z · LW(p) · GW(p)

Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.

I don't think any decision theory that has been proposed by anyone so far has this property. You might want to either change the example, or explain what you're talking about here...

comment by taw · 2010-09-02T03:48:26.762Z · LW(p) · GW(p)

Strongly downvoted for using omega to rationalize your pre-existing political positions.

You use the methods of rationality to fail at the substance of rationality.

Replies from: SilasBarta, simplicio, kodos96
comment by SilasBarta · 2010-09-02T04:16:11.896Z · LW(p) · GW(p)

What simplicio and kodos said. Yes, you may have a point that I shouldn't have linked the blog post at all, nor implied it was a pre-requisite by calling this a follow-up. I did that because of a) the similarity of many points, and b) encouragement from others who had seen it. However, that blog post is not a true pre-requisite; this article does not have any ambiguities that are only cleared up by going to my blog.

Second of all, if you think my position on IP was somehow pre-existing and that I was just rationalizing something I'd believe no matter what ... then wow, you have a lot of learning to do. I used to be just as anti-IP as any other Stephan Kinsella acolyte, and I can cite you posts from a period of at least 5 years before many gnawing objections kept coming to mind. The blog post explores the intuitions that led me to find the common anti-IP absolutism so troubling -- specifically, its similarity to the reasoning of "hey, I've already been rescued, why should I pay?"

I would be glad to take you through ten years of my internet posts to show that IP isn't something I argue for "no matter what", but something that has inspired tremendous soul-searching. And you'll find that most of my postings aren't even arguing for one side or the other, but pointing out problems in specific arguments, without reference to whether the conclusion is nonetheless valid. The merit of "Hey, I already know your idea now, why should I give you anything?" is something people can debate regardless of whether they support IP.

Could you instead share your thoughts on the content I've posted specifically on LW?

comment by simplicio · 2010-09-02T04:00:09.708Z · LW(p) · GW(p)

(a) In this post? No. His blog, maybe... so why downvote this post?

(b) I would call that a concern or suspicion, not a certainty.

Moreover Silas appears to be mostly arguing descriptively for an etiology of moral intuitions, so it's not too surprising that his moral intuitions remain relatively unchallenged. I can give you my meta-ethic as well; shockingly, developing it did not result in my deciding that torturing babies is the summum bonum of existence.

Replies from: kodos96, taw
comment by kodos96 · 2010-09-02T04:21:55.848Z · LW(p) · GW(p)

I can give you my meta-ethic as well; shockingly, developing it did not result in my deciding that torturing babies is the summum bonum of existence.

I hate intellectual property as much as the next internet peanut gallery member (as you can see if you click through to Silas' personal blog post)... but even I would have to say that comparing it to torturing babies is a bit of a stretch.

.......a little bit of a stretch anyway ;)

Replies from: simplicio
comment by simplicio · 2010-09-02T04:29:38.413Z · LW(p) · GW(p)

Heh, I got carried away. I was not making a comparison to IP (about which I'm ambivalent), just pointing out that developing moral theories is one case where we want the theories to mostly fit the intuitions, so changing your mind is less expected.

comment by taw · 2010-09-02T04:06:25.286Z · LW(p) · GW(p)

This moral theory seems designed just for this kind of rationalization.

Replies from: simplicio
comment by simplicio · 2010-09-02T04:13:15.944Z · LW(p) · GW(p)

That is a legitimate concern. Can you think of two mutually contradictory moral positions that could both be plausibly argued using this approach?

/sets timer for 2 min

The best I can do is think of a case where two Parfitian imperatives clash: i.e., a conflict between two of these counterfactual imperatives. Not a very strong objection at all, but then I am sympathetic to the theory evinced here & do not trust my ability to see its flaws.

Replies from: Perplexed, taw
comment by Perplexed · 2010-09-02T04:19:09.691Z · LW(p) · GW(p)

Any system that permits the derivation of "ought" from "is" is susceptible to having people with differing experiences regarding "is" come to different conclusions regarding "ought".

comment by taw · 2010-09-02T05:01:24.267Z · LW(p) · GW(p)

Here's the original form:

So, to wrap it up, what does Parfit's Hitchhiker have to do with intellectual property? Well:

  • Omega represents the people who are deciding whether to produce difficult, satisfying intellectual works, conditional on whether we will respect certain exclusivity rights that have historically been promised them.
  • The decision to rescue us is the decision to produce those intellectual works.
  • The decision to pay the $5 represents the decision to continue to respect that exclusivity once it is produced "even though" they're "not scarce anymore", and we could choose otherwise.

But it could just as easily be:

  • Omega represents the people who are making their work freely available, conditional on whether we will keep derivative works likewise freely available
  • The decision to rescue us is the decision to produce those intellectual works
  • The decision to pay the $5 represents the decision to make your work freely available, "even though" you can as well stick a copyright on it, and make some money

Can it get more opposite? Full rejection of IP, with form identical to supportive argument.

You can rationalize anything this way.

Replies from: kodos96, SilasBarta
comment by kodos96 · 2010-09-02T05:17:04.147Z · LW(p) · GW(p)

Interesting argument, but it should probably have been made on Silas' blog, not here.

Replies from: taw
comment by taw · 2010-09-02T05:36:07.082Z · LW(p) · GW(p)

I'm not arguing for or against copyrights on this basis; it was just a convenient example of Parfitian reasoning that I could conveniently twist.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-02T18:51:01.327Z · LW(p) · GW(p)

Could you instead show us an example of how to twist the Parfitian reasoning that's actually used in the article on this site?

comment by SilasBarta · 2010-09-02T05:11:46.014Z · LW(p) · GW(p)

Except that you can still make free works available, conditional on derivative works being freely available as well, even in IP systems; but you can't make gated works in any sense without IP. And producing works good enough to make money under copyright (but released without it) involves a non-trivial cost, unlike the cost of not using a work that only exists because of a creator (and given that the creator was the reason for its existence). Finally, you've broken the role of SAMELs: the want-to-profit creators aren't subjunctively undermining their own ability to produce works, since they wouldn't be able to use the free ones anyway.

(SAMEL is an abbreviation I used on this site but not my blog.)

So all the crucial properties are absent in the reverse case. A good attempt, though.

Replies from: taw
comment by taw · 2010-09-02T05:33:13.894Z · LW(p) · GW(p)

I can think of a lot of nitpicking applicable to both scenarios. Like this:

Copyleft is a very limited tool, especially against patents, while without IP you can produce many works with other forms of funding - like work contracts. It's nearly impossible to produce a work that isn't based on others' freely available works (regardless of whether this kind of derivation legally counts as a "derivative work"), while sticking IP on trivial things just because you can is commonplace. In a sufficiently strong IP system pretty much nothing would ever be created, because everything would violate far too many other people's IP, so it is indeed self-defeating.

I'm sure you can find some other minor differences, and we could go on indefinitely, at least until we figured out that maybe this isn't the best way to reason.

On another level, I have no idea why you used Omega as analogous to IP instead of the far more obvious analogy to plain old legally enforceable contracts.

The only defense of IP that makes even the tiniest bit of economic sense is that transaction costs would prevent consumers from negotiating with producers. By straightforward Coase-theorem reasoning, for any work that would be profitably produced in an IP-based system, an at least as good or better outcome could be achieved without an IP system if transaction and negotiation costs were zero (plus a few other totally unrealistic assumptions, but none worse than assuming an omniscient Omega).
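
A back-of-the-envelope sketch of that Coase-style argument (the dollar figures are invented, and the zero-transaction-cost assumption is doing all the work):

```python
# Toy sketch of the Coase-style claim (all numbers invented): if total
# consumer value V covers the creation cost C, then with zero transaction
# costs consumers can simply pool the money and commission the work --
# no IP needed.  Any work profitable under IP must have had V >= C anyway
# (revenue can't exceed what consumers are willing to pay), so the no-IP
# outcome is at least as good.

C = 100_000                       # hypothetical cost of producing the work
consumer_values = [5.0] * 30_000  # 30,000 consumers who each value it at $5
V = sum(consumer_values)          # total willingness to pay = $150,000

produced = V >= C
total_surplus = V - C if produced else 0.0
print(f"V = {V:,.0f}, C = {C:,}: produced = {produced}, surplus = {total_surplus:,.0f}")

# The catch, of course, is the premise: real negotiation among 30,000
# strangers is nowhere near free, which is the transaction-cost objection.
```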

Replies from: SilasBarta
comment by SilasBarta · 2010-09-02T15:22:35.952Z · LW(p) · GW(p)

Much as I'd like to reply, I prefer LW's norm, so I'm going to grant you a heckler's veto until you can move these criticisms to my blog.

Replies from: taw
comment by taw · 2010-09-02T17:10:51.880Z · LW(p) · GW(p)

My point isn't about IP, it's about how easy it is to twist this way of reasoning with story and analogy in any direction you want by choosing a different analogy.

If your original post had been anti-IP, I'd just twist it into a pro-IP case. Or if you had used an Ayn Randist story about "self-ownership" plus an analogy to capitalism, I'd use a different analogy that makes it strongly oppose capitalism. Or whatever.

As long as there's "let's pick arbitrary analogy" step anywhere in your reasoning system, it's all infinitely twistable.

The part about Coase theorem was about how your analogy choice was highly unusual. Not that using a more obvious one would entirely avoid the problem.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-02T17:17:31.184Z · LW(p) · GW(p)

Where does the article that is on this site make this flaw in reasoning?

comment by kodos96 · 2010-09-02T03:58:10.424Z · LW(p) · GW(p)

Strongly downvoted for using omega to rationalize your pre-existing political positions.

Huh? On his personal blog, yes. In this post? No.