The Power of Agency

post by lukeprog · 2011-05-07T01:38:39.042Z · LW · GW · Legacy · 78 comments

You are not a Bayesian homunculus whose reasoning is 'corrupted' by cognitive biases.

You just are cognitive biases.

You just are attribute substitution heuristics, evolved intuitions, and unconscious learning. These make up the 'elephant' of your mind, and atop them rides a tiny 'deliberative thinking' module that only rarely exerts itself, and almost never according to normatively correct reasoning.

You do not have the robust character you think you have, but instead are blown about by the winds of circumstance.

You do not have much cognitive access to your motivations. You are not Aristotle's 'rational animal.' You are Gazzaniga's rationalizing animal. Most of the time, your unconscious makes a decision, and then you become consciously aware of an intention to act, and then your brain invents a rationalization for the motivations behind your actions.

If an 'agent' is something that makes choices so as to maximize the fulfillment of explicit desires, given explicit beliefs, then few humans are very 'agenty' at all. You may be agenty when you guide a piece of chocolate into your mouth, but you are not very agenty when you navigate the world on a broader scale. On the scale of days or weeks, your actions result from a kludge of evolved mechanisms that are often function-specific and maladapted to your current environment. You are an adaptation-executor, not a fitness-maximizer.

Agency is rare but powerful. Homo economicus is a myth, but imagine what one of them could do if such a thing existed: a real agent with the power to reliably do things it believed would fulfill its desires. It could change its diet, work out each morning, and maximize its health and physical attractiveness. It could learn and practice body language, fashion, salesmanship, seduction, the laws of money, and domain-specific skills and win in every sphere of life without constant defeat by human hangups. It could learn networking and influence and persuasion and have large-scale effects on societies, cultures, and nations.

Even a little bit of agenty-ness will have some lasting historical impact. Think of Benjamin Franklin, Teddy Roosevelt, Bill Clinton, or Tim Ferris. Imagine what you could do if you were just a bit more agenty. That's what training in instrumental rationality is all about: transcending your kludginess to attain a bit more agenty-ness.

And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.

(This post was inspired by some conversations with Michael Vassar.)

78 comments


comment by Mitchell_Porter · 2011-05-07T04:31:02.749Z · LW(p) · GW(p)

I radically distrust the message of this short piece. It's a positive affirmation for "rationalists" of the contemporary sort who want to use brain science to become super-achievers. The paragraph itemizing the powers of agency especially reads like wishful thinking: Just pay a little more attention to small matters like fixity of purpose and actually acting in your own interest, and you'll get to be famous, rich, and a historical figure! Sorry, that is entirely not ruthless enough. You also need to be willing to lie, cheat, steal, kill, use people, betray them. (Wishes can come true, but they usually exact a price.) It also helps to be chronically unhappy, if it will serve to motivate your extreme and unrelenting efforts. And finally, most forms of achievement do require domain-specific expertise; you don't get to the top just by looking pretty and statusful.

The messy, inconsistent, and equivocating aspects of the mind can also be adaptive. They can save you from fanaticism, lack of perspective, and self-deception. How often do situations really permit a calculation of expected utility? All these rationalist techniques themselves are fuel for rationalization: I'm employing all the special heuristics and psychological tricks, so I must be doing the right thing. I've been so focused lately, my life breakthrough must be just around the corner.

It's funny that here, the use of reason has become synonymous with "winning" and the successful achievement of plans, when historically, the use of reason was thought to promote detachment from life and a moderation of emotional extremes, especially in the face of failure.

Replies from: Tyrrell_McAllister, Kaj_Sotala, gwern, lukeprog, MinibearRex
comment by Tyrrell_McAllister · 2011-05-07T15:55:36.998Z · LW(p) · GW(p)

The paragraph itemizing the powers of agency especially reads like wishful thinking: Just pay a little more attention to small matters like fixity of purpose and actually acting in your own interest, and you'll get to be famous, rich, and a historical figure! Sorry, that is entirely not ruthless enough. You also need to be willing to lie, cheat, steal, kill, use people, betray them. (Wishes can come true, but they usually exact a price.) It also helps to be chronically unhappy, if it will serve to motivate your extreme and unrelenting efforts. And finally, most forms of achievement do require domain-specific expertise; you don't get to the top just by looking pretty and statusful.

How could you reliably know these things, and how could you make intentional use of that knowledge, if not with agentful rationality?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2011-05-08T08:39:15.344Z · LW(p) · GW(p)

You can't. I won't deny the appeal of Luke's writing; it reminds me of Gurdjieff, telling everyone to wake up. But I believe real success is obtained by Homo machiavelliensis, not Homo economicus.

Replies from: NancyLebovitz, NancyLebovitz, wedrifid
comment by NancyLebovitz · 2011-05-09T00:56:33.262Z · LW(p) · GW(p)

This is reminding me of Steve Pavlina's material about light-workers and dark-workers. He claims that working to make the world a better place for everyone can work, and will eventually lead you to realize that you need to take care of yourself. Working to make your life better, exclusive of concern for others, can also work, and will eventually convince you of the benefits of cooperation. But slopping around without being clear about who you're benefiting won't work as well as either of those.

comment by NancyLebovitz · 2011-05-09T00:52:19.883Z · LW(p) · GW(p)

How can you tell the ratio between Homo machiavelliensis and Homo economicus, considering that HM is strongly motivated to conceal what they're doing, and HM and HE are probably both underestimating the amount of luck required for their success?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2011-05-09T05:51:37.048Z · LW(p) · GW(p)

How can you tell the ratio

fMRI? Also, some HE would be failed HM. The model I'm developing is that in any field of endeavor, there are one or two HMs at the top, and then an order-of-magnitude more HE also-rans. The intuitive distinction: HE plays by the rules, HM doesn't; victorious HM sets the rules to its advantage, HE submits and gets the left-over payoffs it can accrue by working within a system built by and for HMs.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-05-09T07:34:55.658Z · LW(p) · GW(p)

My point was that both the "honesty is the best policy" and the "never give a sucker an even break" crews are guessing because the information isn't out there.

My guess is that different systems reward different amounts of cheating, and aside from luck, one of the factors contributing to success may be a finely tuned sense of when to cheat and when not.

Replies from: cousin_it
comment by cousin_it · 2011-05-09T07:47:14.116Z · LW(p) · GW(p)

Yeah, and the people who have the finest-tuned sense of when to cheat are the people who spent the most effort on tuning it!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-05-09T07:52:26.394Z · LW(p) · GW(p)

I suspect some degree of sarcasm, but that's actually an interesting topic. After all, a successful cheater can't afford to get caught very much in the process of learning how much to cheat.

comment by wedrifid · 2011-05-08T09:30:22.448Z · LW(p) · GW(p)

But I believe real success is obtained by Homo machiavelliensis, not Homo economicus.

Love the expression. :)

comment by Kaj_Sotala · 2011-05-08T10:51:21.767Z · LW(p) · GW(p)

I radically distrust the message of this short piece. It's a positive affirmation for "rationalists" of the contemporary sort who want to use brain science to become super-achievers.

Interesting. Personally I read it as a kind of "get back to Earth" message. "Stop pretending you're basically a rational thinker and only need to correct some biases to truly achieve that. You're this horrible jury-rig of biases and ancient heuristics, and yes while steps towards rationality can make you perform much better, you're still fundamentally and irreparably broken. Deal with it."

But re-reading it, your interpretation is probably closer to the mark.

comment by gwern · 2012-02-08T18:02:21.678Z · LW(p) · GW(p)

Sorry, that is entirely not ruthless enough. You also need to be willing to lie, cheat, steal, kill, use people, betray them.

Agency is still pretty absent there too. As it happens, I have something of an essay on just that topic: http://www.gwern.net/on-really-trying#on-the-absence-of-true-fanatics

comment by lukeprog · 2011-05-07T05:40:37.852Z · LW(p) · GW(p)

You also need to be willing to lie, cheat, steal, kill, use people, betray them.

This is false.

most forms of achievement do require domain-specific expertise; you don't get to the top just by looking pretty and statusful.

Yes. And domain-specific expertise is something that can be learned and practiced, by applying agency to one's life. I'll add it to the list.

Replies from: Mitchell_Porter, wedrifid
comment by Mitchell_Porter · 2011-05-07T06:31:49.527Z · LW(p) · GW(p)

You also need to be willing to lie, cheat, steal, kill, use people, betray them.

This is false.

If we are talking about how to become rich, famous, and a historically significant person, I suspect that neither of us speaks with real authority. And of course, just being evil is not by itself a guaranteed path to the top! But I'm sure it helps to clear the way.

Replies from: lukeprog
comment by lukeprog · 2011-05-07T06:38:08.255Z · LW(p) · GW(p)

I'm sure it helps to clear the way.

Sure. I'm only disagreeing with what you said in your original comment.

comment by wedrifid · 2011-05-07T06:44:08.506Z · LW(p) · GW(p)

You also need to be willing to lie, cheat, steal, kill, use people, betray them.

This is false.

I would say 'overstated'. I assert that most people who became famous, rich and a historical figure used those tactics. More so the 'use people', 'betray them' and 'lie' than the more banal 'evils'. You don't even get to have a solid reputation for being nice and ethical without using dubiously ethical tactics to enforce the desired reputation.

Replies from: katydee
comment by katydee · 2011-05-07T08:19:49.715Z · LW(p) · GW(p)

Personally, I find that being nice and ethical is the best way to get a reputation for being nice and ethical, though your mileage may vary.

Replies from: wedrifid
comment by wedrifid · 2011-05-07T11:02:44.627Z · LW(p) · GW(p)

Personally, I find that being nice and ethical is the best way to get a reputation for being nice and ethical, though your mileage may vary.

I don't have a personal statement to make about my strategy for gaining a reputation for niceness. Partly because that is a reputation I would prefer to avoid.

I do make the general, objective-level claim that actually being nice and ethical is not the most effective way to gain that reputation. It is a good default, and for many, particularly those who are not very good at well-calibrated hypocrisy and deception, it is the best they could do without putting in a lot of effort. But it should be obvious that the task of creating an appearance of a thing is different to that of actually doing a thing.

comment by MinibearRex · 2011-05-07T05:03:54.309Z · LW(p) · GW(p)

It's funny that here, the use of reason has become synonymous with "winning"

I don't think anyone's arguing that "reason" is synonymous with winning. There are a lot of people, however, arguing that "rationality" is systematized winning. I'm not particularly interested in detaching from life and moderating my emotional response to failure. I have important goals that I want to achieve, and failing is not an acceptable option to me. So I study rationality. Honestly, EY said it best:

There is a meme which says that a certain ritual of cognition is the paragon of reasonableness and so defines what the reasonable people do. But alas, the reasonable people often get their butts handed to them by the unreasonable ones, because the universe isn't always reasonable. Reason is just a way of doing things, not necessarily the most formidable; it is how professors talk to each other in debate halls, which sometimes works, and sometimes doesn't. If a horde of barbarians attacks the debate hall, the truly prudent and flexible agent will abandon reasonableness.

No. If the "irrational" agent is outcompeting you on a systematic and predictable basis, then it is time to reconsider what you think is "rational."

comment by DSimon · 2011-05-07T23:22:45.372Z · LW(p) · GW(p)

Even a little bit of agenty-ness will have some lasting historical impact. Think of Benjamin Franklin, Teddy Roosevelt, Bill Clinton, or Tim Ferris.

Can you go into more detail about how you believe these particular people behaved in a more agenty way than normal?

comment by lukeprog · 2011-05-07T02:42:56.914Z · LW(p) · GW(p)

In case anybody asks how I was able to research and write two posts from scratch today:

It's largely because I've had Ray Lynch's 'The Oh of Pleasure' on continuous repeat ever since 7am, without a break.

(I'm so not kidding. Ask Jasen Murray.)

Replies from: bentarm, Larks, Miller, Louie, wedrifid, Cayenne, None, jimrandomh
comment by bentarm · 2011-05-07T11:09:53.784Z · LW(p) · GW(p)

If this actually works reliably, I think it is much more important than anything in either of the posts you used it to write - why bury it in a comment?

comment by Larks · 2011-05-07T18:38:07.770Z · LW(p) · GW(p)

I don't know if it's the song or the placebo effect, but it's just written my thesis proposal for me.

Replies from: lukeprog
comment by lukeprog · 2011-05-07T20:50:35.433Z · LW(p) · GW(p)

Congrats!

comment by Miller · 2011-05-07T03:01:04.104Z · LW(p) · GW(p)

Well, there's a piece of music that's easy to date to circa Blade Runner.

Replies from: lukeprog
comment by lukeprog · 2011-05-07T03:17:34.958Z · LW(p) · GW(p)

Maybe tomorrow I will try the Chariots of Fire theme, see what it does for me. :)

Hmmm. I wonder what else I've spent an entire day listening to over and over again while writing. Maybe Music for 18 Musicians, Tarot Sport, Masses, and Third Ear Band.

Replies from: curiousepic
comment by curiousepic · 2011-05-07T17:15:49.223Z · LW(p) · GW(p)

I just came across Tarot Sport; it's the most insomnia-inducing trance I've ever heard.

comment by Louie · 2011-05-08T15:56:38.655Z · LW(p) · GW(p)

I liked that song but then ended up listening to the #2 most popular song on that site instead. It provided me with decidedly less motivation. ;)

Replies from: lukeprog
comment by lukeprog · 2011-05-08T16:25:35.463Z · LW(p) · GW(p)

I just listened to four seconds of that song and then hit 'back' in my browser to write this comment. 'Ugh' to that song.

comment by wedrifid · 2011-05-07T05:37:20.062Z · LW(p) · GW(p)

It's largely because I've had Ray Lynch's 'The Oh of Pleasure' on continuous repeat ever since 7am, without a break.

Just listened to it. The first minute or so especially had an effect comparable to a strong coffee. A little agitating, but motivating.

comment by Cayenne · 2011-05-07T04:00:16.498Z · LW(p) · GW(p)

How do we know that it's you writing, and not the music?

(Just kidding, really.)

Edit - please disregard this post

comment by [deleted] · 2011-05-18T15:09:49.743Z · LW(p) · GW(p)

That is strange. I like the song though, thanks for passing it along. Like one of the other commenters, I will be testing out its effects.

Do you think if you listened to the song every day, or 3 days a week, or something, the effect on your productivity or peace of mind would dwindle? If not, do you plan to continue listening to it a disproportionate amount relative to other music?

ETA random comment: Something about it reminds me of the movie Legend.

comment by jimrandomh · 2011-05-08T16:48:01.820Z · LW(p) · GW(p)

In case anybody asks how I was able to research and write two posts from scratch today:

It's largely because I've had Ray Lynch's 'The Oh of Pleasure' on continuous repeat ever since 7am, without a break.

I don't believe this is really the cause, but I'm going to listen to it at work tomorrow just in case.

comment by JohnH · 2011-05-07T04:58:33.516Z · LW(p) · GW(p)

real agent with the power to reliably do things it believed would fulfill its desires. It could change its diet, work out each morning, and maximize its health and physical attractiveness. It could learn body language and fashion and salesmanship and seduction and the laws of money and win in every sphere of life without constant defeat by human hangups. It could learn networking and influence and persuasion and have large-scale effects on societies, cultures, and nations.

A lot of body language, fashion, salesmanship, seduction, networking, influence, and persuasion are dependent entirely on heuristics and intuition.

In the real world, those who have less access to these traits (people on the autistic spectrum, for example) tend to have a much harder time learning how to accomplish any of the named tasks. They also, for most of those tasks, have a much harder time seeing why one would wish to accomplish them.

Extrapolating to a being that has absolutely no such intuitions or heuristics, one is left with the question of what it actually wishes to do. Perhaps some of the severely autistic really are like this and never learn language, because it never occurs to them that language could be useful, and so they have no desire to learn it.

With no built-in programming to determine what is and is not to be desired, and no built-in programming about how the world works or does not work, how is one to determine what should be desirable or how to accomplish what is desired? As far as I can determine, an agent without human hardware or software may be left spending its time attempting to figure out how anything works and what, if anything, it wants to do.

It may not even attempt to figure out anything at all, if curiosity is not rational but a built-in heuristic. Perhaps someone has managed to build a rational AI but neglected to give it built-in desires and/or built-in curiosity, and it did nothing, so it was assumed not to have worked.

Isn't even the desire to survive a heuristic?

Replies from: lukeprog
comment by lukeprog · 2011-05-07T06:05:52.544Z · LW(p) · GW(p)

A lot of body language, fashion, salesmanship, seduction, networking, influence, and persuasion are dependent entirely on heuristics and intuition.

Sure. But are you denying these skills can be vastly improved by applying agency?

You mention severe autistics. I'm not sure how much an extra dose of agency could help a severe autistic. Surely, there are people for whom an extra dose of agency won't help much. I wasn't trying to claim that agency would radically improve the capabilities of every single human ever born.

Perhaps you are reacting to the idea that heuristics are universally bad things? But of course I don't believe that. In fact, the next post in my Intuitions and Philosophy sequence is entitled 'When Intuitions are Useful.'

Replies from: JohnH
comment by JohnH · 2011-05-07T06:15:38.308Z · LW(p) · GW(p)

And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.

This is what I am reacting to, especially when combined with what I previously quoted.

Replies from: lukeprog
comment by lukeprog · 2011-05-07T06:30:40.550Z · LW(p) · GW(p)

Oh. So... are you suggesting that a software agent can't learn body language, fashion, seduction, networking, etc.? I'm not sure what you're saying.

Replies from: JohnH
comment by JohnH · 2011-05-07T06:48:30.408Z · LW(p) · GW(p)

I am saying that without heuristics or intuitions, what is the basis for any desires? If an agent is a software agent without built-in heuristics and intuitions, then what are its desires, what are its needs, and why would it desire to survive, to find out more about the world, to do anything? Where do the axioms it uses to think that it can modify the world, or conclude anything, come from?

Our built-in heuristics and intuitions are what allow us to start building models of the world on which to reason in the first place, and removing any of them demonstrably makes it harder to function in normal society or to act normally. Things that appear reasonable to almost everyone are utter nonsense and seem pointless to those who are missing some of the basic heuristics and intuitions.

If all such heuristics (e.g. no limits of human hardware or software) are taken away, then what is left to build on?

Replies from: byrnema, lukeprog
comment by byrnema · 2011-05-09T12:57:50.080Z · LW(p) · GW(p)

I'll jump in this conversation here, because I was going to respond with something very similar. (I thought about my response, and then was reading through the comments to see if it had already been said.)

And, imagine what an agent could do without the limits of human hardware or software.

I sometimes imagine this, and what I imagine is that without the limits (constraints) of our hardware and software, we wouldn't have any goals or desires.

Here on Less Wrong, when I assimilated the idea that there is no objective value, I expected I would spiral into a depression in which I realized nothing mattered, since all my goals and desires were finally arbitrary with no currency behind them. But that's not what happened -- I continued to care about my immediate physical comfort, interacting with people, and the well-being of the people I loved. I consider that my built-in biological hardware and software came to the rescue. There is no reason to value the things I do, but they are built into my organism. Since I believe that it was being an organism that saved me (and by this I mean the product of evolution), I do not believe the organism (and her messy goals) can be separated from me.

I feel like this experiment helped me identify which goals are built in and which are abstract and more fully 'chosen'. For example, I believe I did lose some of my values, I guess the ones that are most cerebral. (I only doubt this because with a spiteful streak and some lingering anger about the nonexistence of objective values, I could be expressing this anger by rejecting values that seem least immediate). I imagine with a heightened ability to edit my own values, I would attenuate them all, especially wherever there were inconsistencies.

These thoughts apply to humans only (that is, me), but I also imagine (entirely baselessly) that any creature without hardware and software constraints would have a tough time valuing anything. For this, I am mainly drawing on an intuition I developed: that if a species were truly immortal, it would be hard-pressed to think of anything to do, or any reason to do it. Maybe some values of artistry or curiosity could be left over from an evolutionary past.

comment by lukeprog · 2011-05-07T06:53:25.428Z · LW(p) · GW(p)

Depends what kind of agent you have in mind. An advanced type of artificial agent has its goals encoded in a utility function. It desires to survive because surviving helps it achieve utility. Read chapter 2 of AIMA for an intro to artificial agents.
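To make that concrete, here is a minimal sketch of a utility-based agent in the style of AIMA chapter 2. The toy environment, the state encoding, and the utility numbers are illustrative assumptions of mine, not AIMA's own code:

```python
# Minimal utility-based agent sketch. The environment, actions, and
# utility numbers below are toy assumptions for illustration only.

def utility(state):
    """Toy utility function: the agent values being alive and being fed."""
    return (10.0 if state["alive"] else 0.0) + (1.0 if state["fed"] else 0.0)

def result(state, action):
    """Deterministic toy transition model: predict the next state."""
    new_state = dict(state)
    if action == "eat":
        new_state["fed"] = True
    elif action == "walk_into_traffic":
        new_state["alive"] = False
    return new_state  # any other action leaves the state unchanged

def utility_based_agent(state, actions):
    """Pick the action whose predicted outcome scores highest."""
    return max(actions, key=lambda a: utility(result(state, a)))

state = {"alive": True, "fed": False}
print(utility_based_agent(state, ["eat", "walk_into_traffic", "do_nothing"]))
# -> 'eat'
```

Note that nothing here is a separate survival drive: the agent avoids 'walk_into_traffic' only because dead states score low under its utility function, which is the sense in which survival is instrumental to utility.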

Replies from: JohnH
comment by JohnH · 2011-05-07T07:09:08.857Z · LW(p) · GW(p)

Precisely: that utility function is a heuristic or intuition. Further, survival can only be desired according to prior knowledge of the environment, so again a heuristic or intuition. It is also dependent on the actions the agent is aware it can perform (intuition or heuristic). One can only be an agent when placed in an environment, given some set of desires (heuristic) (and ways to measure accomplishing those desires), and given a basic understanding of what actions are possible (intuition), as well as whatever basic understanding of the environment is needed to be able to reason about it (intuition).

I assume chapter 2 of the 2nd edition is sufficiently close to chapter 2 of the 3rd edition?

Replies from: lukeprog
comment by lukeprog · 2011-05-07T07:19:13.555Z · LW(p) · GW(p)

I don't understand you. We must be using the terms 'heuristic' and 'intuition' to mean different things.

Replies from: JohnH
comment by JohnH · 2011-05-07T07:22:15.371Z · LW(p) · GW(p)

A pre-programmed set of assumptions or desires that are not chosen rationally by the agent in question.

edit: perhaps you should look up 37 ways that words can be wrong

Also, you appear to be familiar with some philosophy so one could say they are A Priori models and desires in the sense of Plato or Kant.

Replies from: lukeprog, Peterdjones
comment by lukeprog · 2011-05-07T08:11:31.983Z · LW(p) · GW(p)

If this is where you're going, then I don't understand the connection to my original post.

Which sentence(s) of my original post do you disagree with, and why?

Replies from: JohnH
comment by JohnH · 2011-05-07T14:42:17.817Z · LW(p) · GW(p)

I have already gone over this.

And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.

Such an agent may not have the limits of human hardware or software, but it does require a similar set of restrictions and (from the agent's point of view) irrational assumptions and desires, or, in my opinion, the agent will not do anything.

It could learn and practice body language, fashion, salesmanship, seduction, the laws of money, and domain-specific skills and win in every sphere of life without constant defeat by human hangups.

The human hangups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we didn't have such hangups then, from experience, understanding such things is much harder, practicing them is harder, and desiring them requires convincing. It is easier to win if one is endowed with the intuitions and heuristics required to make practicing such things both desirable and natural.

a real agent with the power to reliably do things it believed would fulfill its desires

There is no accounting for preferences (or desires), meaning such things are not usually rationally chosen, and when they are, there is still a base of non-rational assumptions. Homo economicus is just as dependent on intuition and heuristics as anyone else. The only place it differs, at least as classically understood, is in its ability to access near-perfect information and to calculate its preferences and probabilities exactly.

edit Also

You do not have much cognitive access to your motivations.

This is said as a bad thing when it is a necessary thing.

Replies from: lukeprog, None
comment by lukeprog · 2011-05-07T15:42:19.412Z · LW(p) · GW(p)

Such an agent may not have the limits of human hardware or software, but it does require a similar set of restrictions and (from the agent's point of view) irrational assumptions and desires, or, in my opinion, the agent will not do anything.

Desires/goals/utility functions are non-rational, but I don't know what you mean by saying that an artificial agent needs restrictions and assumptions in order to do something. Are you just saying that it will need heuristics rather than (say) AIXI in order to be computationally tractable? If so, I agree. But that doesn't mean it needs to operate under anything like the limits of human hardware and software, which is all I claimed.

The human hangups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we didn't have such hangups then, from experience, understanding such things is much harder, practicing them is harder, and desiring them requires convincing.

Sure, but I think a superintelligence could figure it out, the same way a superintelligence could figure out quantum computing or self-replicating probes.

There is no accounting for preferences (or desires), meaning such things are not usually rationally chosen, and when they are, there is still a base of non-rational assumptions.

Agreed. This is the Humean theory of motivation, which I agree with. I don't see how anything I said disagrees with the Humean theory of motivation.

This is said as a bad thing when it is a necessary thing.

I didn't say it as a bad thing, but as a correction. People think they have more access to their motivations than they really do. Also, it's not a necessary thing that we don't have much cognitive access to our motivations. In fact, as neuroscience progresses, I expect us to gain much more access to our motivations.

JohnH, I kept asking what you meant because the claims I interpreted from your posts were so obviously false that I kept assuming I was interpreting you incorrectly. I'm still mostly assuming that, actually.

Replies from: wedrifid, JohnH
comment by wedrifid · 2011-05-07T16:54:42.952Z · LW(p) · GW(p)

an artificial agent needs restrictions and assumptions in order to do something.

You need to assume inductive priors. Otherwise you're pretty much screwed.

comment by JohnH · 2011-05-07T18:07:52.196Z · LW(p) · GW(p)

wedrifid has explained the restriction part well.

Sure, but I think a superintelligence could figure it out, the same way a superintelligence could figure out quantum computing or self-replicating probes.

Again, the superintelligence would need to have some reasons to desire to figure out any such thing and to think that it can figure out such things.

In fact, as neuroscience progresses, I expect us to gain much more access to our motivations.

Even if this is true any motivation to modify our motivations would itself be based on our motivations.

the claims I interpreted from your posts were so obviously false that I kept assuming I was interpreting you incorrectly.

I do not see how anything I said is obviously false. Please explain this.

Replies from: lukeprog
comment by lukeprog · 2011-05-07T18:14:14.362Z · LW(p) · GW(p)

Again, the superintelligence would need to have some reasons to desire to figure out any such thing and to think that it can figure out such things.

Sure. Like, its utility function. How does anything you're saying contradict what I claimed in my original post?

Sorry, I still haven't gotten any value out of this thread. We seem to be talking past each other. I must turn my attention to more productive tasks now...

Replies from: JohnH
comment by JohnH · 2011-05-07T20:33:32.096Z · LW(p) · GW(p)

Hang on: you are going to claim that my comments are obviously false, then argue over definitions, and when the definitions are agreed upon, walk away without stating what is obviously false?

I seriously feel that I have gotten the runaround from you rather than, at any time, a straight answer. My only possible conclusions are that you are being evasive or that you have inconsistent beliefs about the subject (or both).

Replies from: nshepperd, Barry_Cotter
comment by nshepperd · 2011-05-08T03:11:34.235Z · LW(p) · GW(p)

You seem to have used the words 'heuristic' and 'intuition' to refer to terminal values (e.g. a utility function) and perhaps Occam priors, as opposed to the usually understood meaning "a computationally tractable approximation to the correct decision-making process (full Bayesian updating or whatever)". It looks like you and lukeprog actually agree on everything that is relevant, but without generating any feeling of agreement. As I see it, you said something like "but such an agent won't do anything without an Occam prior and terminal values", to which lukeprog responded "but clearly anything you can do with an approximation you can do with full Bayesian updating and decision theory".

Basically, I suggest you Taboo "intuition" and "heuristic" (and/or read over your own posts with "computationally tractable approximation" substituted for "intuition" and "heuristic", to see what lukeprog thinks is 'obviously false').
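To spell out that distinction, here is a toy sketch (the coin-flip setup, hypothesis grid, and numbers are illustrative assumptions of mine): the "correct" process maintains a full Bayesian posterior over hypotheses, while the heuristic keeps only a cheap summary of the data.

```python
# Toy contrast between full Bayesian updating and a cheap heuristic.
# Hypotheses: five possible biases of a coin, with a uniform prior.
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = [1.0 / len(hypotheses)] * len(hypotheses)

def bayes_update(posterior, heads):
    """Full Bayesian update on one coin flip (the 'correct' process)."""
    likelihoods = [h if heads else (1.0 - h) for h in hypotheses]
    unnormalized = [p * l for p, l in zip(posterior, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

def frequency_heuristic(flips):
    """Heuristic: track only the raw frequency of heads and report the
    nearest hypothesis. Constant memory, but it discards the uncertainty
    information the full posterior carries."""
    rate = sum(flips) / len(flips)
    return min(hypotheses, key=lambda h: abs(h - rate))

flips = [True, True, False, True]  # observed coin flips
posterior = prior
for f in flips:
    posterior = bayes_update(posterior, f)

print(posterior)                   # full distribution over coin biases
print(frequency_heuristic(flips))  # cheap point estimate: 0.7
```

On JohnH's usage, the prior and the terminal values would both count as 'intuitions'; on the usage described above, only the frequency shortcut is a 'heuristic', because it approximates the full posterior at lower cost.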

Replies from: JohnH
comment by JohnH · 2011-05-09T01:04:29.018Z · LW(p) · GW(p)

Thank you for that, I will check over it.

comment by Barry_Cotter · 2011-05-08T21:25:44.699Z · LW(p) · GW(p)

Luke isn't arguing over definitions as far as I could see; he was checking to see if there was a possibility of communication.

A heuristic is a quick and dirty way of getting an approximation to what you want, when getting a more accurate estimate would not be worth the extra effort/energy/whatever it would cost. As I see it, the confusion here arises from the fact that you believe this has something to do with goals and utility functions. It doesn't. These can be arbitrary for all we care. But any intelligence, no matter its goals or utility function, will want to achieve things; after all, that's what it means to have goals. If it has sufficient computational power handy it'll use an accurate estimator; if not, a heuristic.

Heuristics have nothing to do with goals; they are adaptations, not ends.

comment by [deleted] · 2011-05-07T15:06:00.575Z · LW(p) · GW(p)

The human hangups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we didn't have such hangups then, from experience, understanding such things is much harder, practicing them is harder, and desiring them requires convincing. It is easier to win if one is endowed with the intuitions and heuristics required to make practicing such things both desirable and natural.

Yeah, you probably do want to let the elephant be in charge of fighting or mating with other elephants, once the rider has decided it's a good idea to do so.

comment by Peterdjones · 2011-05-07T16:25:58.813Z · LW(p) · GW(p)

Intuitions are usually defined as being inexplicable. A priori claims are usually explicable in terms of axioms, although axioms may be chosen for their intuitive appeal.

Replies from: JohnH
comment by JohnH · 2011-05-07T18:09:17.686Z · LW(p) · GW(p)

although axioms may be chosen for their intuitive appeal.

precisely.

comment by [deleted] · 2011-05-08T01:45:47.488Z · LW(p) · GW(p)

You do not have the robust character you think you have, but instead are blown about by the winds of circumstance.

Am I wrong in taking this to be a one-liner critique of all virtue ethical theories?

comment by Giles · 2011-05-07T22:30:45.717Z · LW(p) · GW(p)

I've been thinking about this with regards to Less Wrong culture. I had pictured your "deliberative thinking" module as more of an "excuse generator" - the rest of your mind would make its decisions, and then the excuse generator comes up with an explanation for them.

The excuse generator is primarily social - it will build excuses which are appropriate to the culture it is in. So in a rationalist culture, it will come up with rationalizing excuses. It can be exposed to a lot of memes, parrot them back and reason using them without actually affecting your behavior in any way at all.

Just sometimes though, the excuse generator will fail and send a signal back to the rest of the mind that it really needs to change something, else it will face social consequences.

The thing is, I don't feel that this stuff is new. But try and point it out to anyone, and they will generate excuses as to why it doesn't matter, or why everyone lacks the power of agency except them, or that it's an interesting question they'll get around to looking at sometime.

So currently I'm a bit stuck.

Replies from: lessdazed, Error
comment by lessdazed · 2011-05-08T06:26:57.253Z · LW(p) · GW(p)

But try and point it out to anyone, and they will...

...act as predicted by the model.

comment by Error · 2013-02-25T20:59:52.596Z · LW(p) · GW(p)

I had pictured your "deliberative thinking" module as more of an "excuse generator" - the rest of your mind would make its decisions, and then the excuse generator comes up with an explanation for them.

I know this is a year or two late, but: I've noticed this and find it incredibly frustrating. Turning introspection (yes, I know) on my own internally-stated motivations more often than not reveals them to be either excuses or just plain bullshit. The most frequent failure mode is finding that I did [X], not because it was good, but because I wanted to be seen as the sort of person who would do it. Try though I might, it seems incredibly difficult to get my brain to not output Frankfurtian Bullshit.

I sort-of-intend to write a post about it one of these days.

comment by adamisom · 2011-11-01T05:50:07.240Z · LW(p) · GW(p)

I loved this, but I'm not here to contribute bland praise. I'm here to point out somebody who does, in fact, behave as an agent as defined by the italicized statement "reliably do things it believed would fulfill its desires," which continues with "It could change its diet, work out each morning, and maximize its health and physical attractiveness." I couldn't help but think of Scott H Young, a blogger I've been following for months. I really look up to that guy. He is effectively a paragon of the model that you can shape your life to live it as you like. (I'm sure he would never say that though.) He actually referenced a Less Wrong article recently, and it's not the first time he's done it, which significantly increased my opinion of him. His current "thing" is trying to master the equivalent of a rigorous CS curriculum (using MIT's requirements) in 12 months. Only those in the Less Wrong community stand a good chance of not thinking that's pretty audacious.

Replies from: arundelo
comment by arundelo · 2011-11-01T06:23:46.680Z · LW(p) · GW(p)

http://lesswrong.com/user/ScottHYoung

Replies from: adamisom
comment by adamisom · 2011-11-02T03:30:21.118Z · LW(p) · GW(p)

Thanks, I should've known

comment by cousin_it · 2017-06-09T11:34:15.784Z · LW(p) · GW(p)

Coming back to this post, I feel like it's selling a dream that promises too much. I've come to think of such dreams as Marlboro Country ads. For every person who gets inspired to change, ten others will be slightly harmed because it's another standard they can't achieve, even if they buy what you're selling. Figuring out more realistic promises would do us all a lot of good.

comment by Gleb_Tsipursky · 2014-11-13T14:25:32.143Z · LW(p) · GW(p)

Excellent clarion call to raise our expectation of what agency is and can do in our lives, as well as to have sensible expectations of our and others' humble default states. Well done.

comment by [deleted] · 2011-05-07T16:06:17.377Z · LW(p) · GW(p)

One way of thinking about this:

There is behavior, which is anything an animal with a nervous system does with its voluntary musculature. Everything you do all day is behavior.

Then there are choices, which are behaviors you take because you think they will bring about an outcome you desire. (Forget about utility functions -- I'm not sure all human desires can be described by one twice-differentiable convex function. Just think about actions taken to fulfill desires or values.) Not all behaviors are choices. In fact, it's easy to go through a day without making any choices at all, mostly by following habits or instinctive reactions.

In classical economics, all behaviors are modeled as choices. That's not true of people in practice, but possibly some people choose a higher percentage of their behaviors than other people do. Maybe it's possible to train yourself to make more of your behaviors into choices. (In fact, just learning Econ 101 made me more inclined to consciously choose my behaviors.)

Replies from: Swimmer963, wedrifid
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-07T20:34:31.810Z · LW(p) · GW(p)

Not all behaviors are choices. In fact it's easy to go through a day without making any choices at all. Mostly by following habits or instinctive reactions.

There is a reason for this. Making choices constantly is exhausting, especially if you consider all of the possible behaviours. For me, the way to go is to choose your habits. For example: I choose not to spend money on eating out. This a) saves me money, and b) saves me from extra calories in fast food. When pictures of food on a store window tempt me, I only have to appeal to my habit of not eating out. It's barely conscious now. If I forget to pack enough food from home and I find myself hungry, and the ads unusually tempting, I make a choice to reinforce my habit by not buying food, although I am hungry and there is a cost to myself. The same goes for exercising: I maintain a habit of swimming for an hour 3 to 5 times a week, so the question "should I swim after work?" is no longer a willpower-draining conscious decision, but an automatic response.

If I were willing to put in the initial energy of choosing to start a new arbitrary habit, I'm pretty sure I could. As my mother has pointed out, in the past I've been able to accomplish pretty much everything I set my mind on (with the exception of becoming the youngest person to swim across Lake Ontario and getting into the military, but both of those plans failed for reasons pretty much outside my control.)

comment by wedrifid · 2011-05-07T16:52:54.440Z · LW(p) · GW(p)

In classical economics, all behaviors are modeled as choices. That's not true of people in practice, but possibly some people choose a higher percentage of their behaviors than other people do. Maybe it's possible to train yourself to make more of your behaviors into choices. (In fact, just learning Econ 101 made me more inclined to consciously choose my behaviors.)

Part of modelling everything as choices is that, for their purposes, economists don't care whether the choice happens to be conscious or not. That is an arbitrary distinction that matters more to us for the purpose of personal development, and so that we can flatter each other's conscious selves by pretending they are especially important.

comment by AshwinV · 2017-06-09T09:30:59.961Z · LW(p) · GW(p)

I want to upvote this again.

Replies from: Elo
comment by Elo · 2017-06-09T09:44:52.450Z · LW(p) · GW(p)

done for you

comment by fiddlemath · 2011-10-26T08:39:03.264Z · LW(p) · GW(p)

It might be simply structural that the LessWrong community tends to be about armchair philosophy, science, and math. If there are people who have read through Less Wrong, absorbed its worldview, and gone out to "just do something", then they probably aren't spending their time bragging about it here. If it looks like no one here is doing any useful work, that could really just be sampling bias.

Even still, I expect that most posters here are more interested to read, learn, and chat than to thoroughly change who they are and what they do. Reading, learning, and chatting is fun! Thorough self-modification is scary.

Thorough and rapid self-modification, on the basis of things you've read on a website rather than things you've seen tested and proven in combination, is downright dangerous. Try things, but try them gradually.

And now, refutation!

So what's the solution?

To, um, what, exactly? I think the question whose solution you're describing is "What ought one do?" Of these, you say:

It won't feel like the right thing to do; your moral intuitions (being designed to operate in a small community of hunter-gatherers) are unlikely to suggest to you anything near the optimal task.

That depends largely on your moral intuitions. I honestly think of all humans as people. I am always taken aback a little when I see evidence that lots of other folks don't. You'd think I'd stop being surprised, but it often catches me when I'm not expecting it. I'd suggest that my intuitions about my morals when I'm planning things are actually pretty good.

That said, the salient intuitions in an emotionally-charged situation certainly are bad at planning and optimization. And so, if you imagine yourself executing your plan, I would honestly expect it to feel oddly amoral. It won't feel wrong, necessarily, but it might not feel relevant to morality at all.

It will be something you can start working on right now, immediately.

This is ... sort of true, depending on what you mean. You might need to learn more, to be able to form a more efficient or more coherent plan. You might need to sleep right now. But, yes, you can prepare to prepare to prepare to change the world right away.

It will disregard arbitrary self-limitations like abstaining from politics or keeping yourself aligned with a community of family and friends.

Staying aligned with a community of family and friends is not an arbitrary limitation. Humans are social beings. I myself am strongly introverted, but I also know that my overall mood is affected strongly by my emotional security in my social status. I can reflect on this fact, and I can mitigate its negative consequences, but it would be madness to just ignore it. In my case - and, I presume, in the case of anyone else who worries about being aligned with their family and friends - it's terrifying to imagine undermining many of those relationships.

You need people that you can trust for deep, personal conversations; and you need people who would support you if your life went suddenly wrong. You may not need these things as insurance, you may not need to use friends and family in this way, but you certainly need them for your own psychological well-being. Being depressed makes one significantly less effective at achieving one's goals, and we monkeys are depressed without close ties to other monkeys.

On the other hand, harmless-seeming deviations probably won't undermine those relationships; they're far less likely to ruin relationships than they seem. Rather, they make you a more interesting person to talk to. Still, it is a terrible idea to carelessly antagonize your closest people.

Speaking about it would undermine your reputation through signaling. A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.

No! If we're defining a "true rationalist" as some mythical entity, then probably so. If we want to make "true rationalists" out of humans, no! If you completely disregard common social graces like the outward appearance of humility, you will have real trouble coordinating world-changing efforts. If you disregard empathy for, say, people you're talking to, you will seem rather more like a monster than a trustworthy leader. And if you ever think you're unaffected by the absurdity heuristic, you're almost certainly wrong.

People are not perfect agents, optimizing their goals. People are made out of meat. We can change what we do, reflect on what we think, and learn better how to use the brains we've got. But the vast majority of what goes on in your head is not, not, not under your control.

Which brings me to the really horrifying undercurrent of your post, which is why I stayed up an extra hour to write this comment. I mean, you can sit down and make plans for what you'll learn, what you'll do, and how you'll save billions of lives, and that's pretty awesome. I heartily approve! You can even figure out what you need to learn to decide the best courses of action, set plans to learn that, and get started immediately. Great!

But if you do all this without considering seemingly unimportant details, like having fun with friends and occasionally relaxing, then you will fail. Not only will you fail, but you will fail spectacularly. You will overstress yourself, burn out, and probably ruin your motivation to change the world. Don't go be a "rationalist" martyr, it won't work very well.

So, if you're going to decompartmentalize your global aspirations and your local life, then keep in mind that only you are likely to look out for your own well-being. That well-being has a strong effect on how effective you can be. So much so that attempting more than about 4 hours per day of real, closely-focused mental effort will probably give you not just diminishing returns, but worse efficiency per day. That said, almost nobody puts in 4 hours a day of intense focus.

So, yes, billions are miserable, people die needlessly, and the world is mad. I am still going out tomorrow night and playing board games with friends, and I do not feel guilty about this.

comment by calcsam · 2011-05-09T07:49:11.640Z · LW(p) · GW(p)

The real question is: how big of an impact can this stuff make, anyway? And how much are people able to actually implement it into their lives?

Are there any good sources of data on that? Beyond PUA, The Game, etc?

Replies from: calcsam
comment by calcsam · 2011-05-09T07:52:10.849Z · LW(p) · GW(p)

Besides, in theory we want to discuss non-Dark Arts topics...

Replies from: wedrifid
comment by wedrifid · 2011-05-09T08:47:02.949Z · LW(p) · GW(p)

There are many topics that are relevant here that some have labelled 'Dark Arts'.

comment by endoself · 2011-05-11T00:36:26.118Z · LW(p) · GW(p)

Think of Benjamin Franklin, Teddy Roosevelt, Bill Clinton, or Tim Ferris.

It's Tim Ferriss.

Replies from: sanddbox
comment by sanddbox · 2013-05-26T22:39:41.288Z · LW(p) · GW(p)

Either way, the guy's a moron. He's basically a much better packaged snake oil salesman.

Replies from: CWG
comment by CWG · 2014-12-29T05:24:29.701Z · LW(p) · GW(p)

He's a very effective snake oil salesman.

comment by [deleted] · 2014-06-13T09:48:59.646Z · LW(p) · GW(p)

People don't change their sense of agency because they read a blog post.

"In alien hand syndrome, the afflicted individual’s limb will produce meaningful behaviors without the intention of the subject. The affected limb effectively demonstrates ‘a will of its own.’ The sense of agency does not emerge in conjunction with the overt appearance of the purposeful act even though the sense of ownership in relationship to the body part is maintained. This phenomenon corresponds with an impairment in the premotor mechanism manifested temporally by the appearance of the readiness potential (see section on the Neuroscience of Free Will above) recordable on the scalp several hundred milliseconds before the overt appearance of a spontaneous willed movement. Using functional magnetic resonance imaging with specialized multivariate analyses to study the temporal dimension in the activation of the cortical network associated with voluntary movement in human subjects, an anterior-to-posterior sequential activation process beginning in the supplementary motor area on the medial surface of the frontal lobe and progressing to the primary motor cortex and then to parietal cortex has been observed.[167] The sense of agency thus appears to normally emerge in conjunction with this orderly sequential network activation incorporating premotor association cortices together with primary motor cortex. In particular, the supplementary motor complex on the medial surface of the frontal lobe appears to activate prior to primary motor cortex presumably in associated with a preparatory pre-movement process. In a recent study using functional magnetic resonance imaging, alien movements were characterized by a relatively isolated activation of the primary motor cortex contralateral to the alien hand, while voluntary movements of the same body part included the concomitant activation of motor association cortex associated with the premotor process.[168] The clinical definition requires “feeling that one limb is foreign or has a will of its own, together with observable involuntary motor activity” (emphasis in original).[169] This syndrome is often a result of damage to the corpus callosum, either when it is severed to treat intractable epilepsy or due to a stroke. The standard neurological explanation is that the felt will reported by the speaking left hemisphere does not correspond with the actions performed by the non-speaking right hemisphere, thus suggesting that the two hemispheres may have independent senses of will.[170][171]

Similarly, one of the most important (“first rank”) diagnostic symptoms of schizophrenia is the delusion of being controlled by an external force.[172] People with schizophrenia will sometimes report that, although they are acting in the world, they did not initiate, or will, the particular actions they performed. This is sometimes likened to being a robot controlled by someone else. Although the neural mechanisms of schizophrenia are not yet clear, one influential hypothesis is that there is a breakdown in brain systems that compare motor commands with the feedback received from the body (known as proprioception), leading to attendant hallucinations and delusions of control.[173]