Comments

Comment by Robin_Hanson2 on Another Call to End Aid to Africa · 2009-04-04T00:30:55.000Z · LW · GW

Rather than trust any one economist, whatever her gender or heritage, I'd rather trust a betting market estimating future African GDP conditional on various aid levels.

Comment by Robin_Hanson2 on Wrong Tomorrow · 2009-04-02T15:39:28.000Z · LW · GW

The site seems to be promising to later evaluate a rather large number of widely ranging predictions. If it manages to actually keep this commitment, it will make an important contribution. The five-year limit on prediction horizons is unfortunate, but of course the site authors have every right to limit their effort commitment. I do suggest that they post the date each prediction was submitted to the site, along with the date it was originally made, to help observers correct for selection effects.

Comment by Robin_Hanson2 on Formative Youth · 2009-02-25T01:36:28.000Z · LW · GW

I'll admit lots of childhood experiences influenced my tastes and values, and that I don't have good reasons to expect those to be especially good tastes and values. So I will let them change to the extent I can.

Comment by Robin_Hanson2 on On Not Having an Advance Abyssal Plan · 2009-02-23T20:48:19.000Z · LW · GW

There is a vast space of possible things that can go wrong, so each plan will have to cover a pretty wide range of scenarios. Even including a scenario among those planned for will signal to viewers that you consider it more likely and/or important.

Comment by Robin_Hanson2 on Against Maturity · 2009-02-23T01:29:47.000Z · LW · GW

Eliezer, in most signaling theories that economists construct, the observers of signals make roughly reasonable inferences from the signals they observe. If someone proposed to us that people take feature F as signaling character C, but in fact there is no relation between F and C, we would want some explanation for this incredible mistake, or at least strong evidence that such a mistake was being consistently made.

Comment by Robin_Hanson2 on Against Maturity · 2009-02-23T00:05:17.000Z · LW · GW

I'm not quite sure what you mean by "mere" signaling. If visible feature F did not correlate with hard-to-observe character C, then F could not signal C. Of course the correlation isn't perfect, but why doesn't it make sense to choose F if you want people to believe you have C? Are you saying you didn't really care what people thought of your maturity?

Comment by Robin_Hanson2 on Pretending to be Wise · 2009-02-22T16:59:47.000Z · LW · GW

It is functional for leaders to be more reluctant than most to "take sides" in common disputes. Our leaders do this, and so one can in fact signal high status by staying "above" common disputes. Our leaders are in fact wiser than the average person, and in addition we want to say they are even wiser, so it makes sense to call people who signal high status in this way "wise." Furthermore, across human disputes with near-equal support on the two sides, the middle position is on average the more correct position. So in this sense taking a middle position does in fact signal wisdom.

Comment by Robin_Hanson2 on Good Idealistic Books are Rare · 2009-02-17T19:22:40.000Z · LW · GW

Sure, if you set the idealistic-enough cutoff high enough, then of course only a small fraction will make the cut. But if we consider the median non-fiction library book, don't you agree it is more idealistic than cynical?

Comment by Robin_Hanson2 on Cynical About Cynicism · 2009-02-17T01:25:54.000Z · LW · GW

Quoting myself:

The cynic's conundrum is that while a cynic might prefer that others believe an idealistic theory of his cynical mood, his own cynical beliefs should lead him to believe a cynical theory of his cynical mood. That is, cynics should think that rude complainers tend to be losers, rather than altruists.

Comment by Robin_Hanson2 on An African Folktale · 2009-02-16T05:48:37.000Z · LW · GW

It bothers me that some folks' complaint about the story seems to be that it is too realistic, that it too clearly shows the actual sorts of betrayal that exist in the world. Yes, perhaps they misunderstood the intent of the story, but I must take my stand with telling the truth, as opposed to "teaching" morals by telling misleading stories in which betrayal is punished more consistently than it is in reality.

Comment by Robin_Hanson2 on An African Folktale · 2009-02-16T03:57:31.000Z · LW · GW

By what process was this story selected? That could help me judge how representative this story is.

Comment by Robin_Hanson2 on An Especially Elegant Evpsych Experiment · 2009-02-13T21:24:23.000Z · LW · GW

Eliezer, our choices aren't limited to the two polar opposites of caring only for the children's "own sake" vs. caring smartly about their reproductive value. Yes, the fact that our grief has not updated for modern fertility patterns rejects one of those poles, but that does not imply the other pole.

Comment by Robin_Hanson2 on An Especially Elegant Evpsych Experiment · 2009-02-13T20:58:58.000Z · LW · GW

"The parental grief is not even subconsciously about reproductive value - otherwise it would update for Canadian reproductive value instead of !Kung reproductive value. ... Parents do not care about children for the sake of their reproductive contribution. Parents care about children for their own sake."

This just doesn't follow. Just because one feature isn't taken into account and optimally updated in grief intensity doesn't imply that nothing else is taken into account but "the children's own sake," whatever that means.

Comment by Robin_Hanson2 on The Evolutionary-Cognitive Boundary · 2009-02-13T13:08:57.000Z · LW · GW

Anna's point is similar to my point that most behaviors we talk about are a mix of computation at all levels; this doesn't seem a good basis for drawing hard lines in dichotomous cynical-vs.-not distinctions.

Comment by Robin_Hanson2 on The Evolutionary-Cognitive Boundary · 2009-02-12T19:03:52.000Z · LW · GW

Eliezer, wishes aren't horses; strongly wanting to be able to tell the difference doesn't by itself give us evidence to distinguish. Note that legal punishments often distinguish between conscious intention and all other internal causation; so apparently that is the distinction the law considers most relevant, and/or easiest to determine. "Optimize" invites too many knee-jerk complaints that we won't exactly optimize anything.

Comment by Robin_Hanson2 on The Evolutionary-Cognitive Boundary · 2009-02-12T18:09:29.000Z · LW · GW

Eliezer, you are right that my sense of moral approval or disapproval doesn't rely as heavily on this distinction as yours, and so I'm less eager to make this distinction. But I agree that one can sensibly distinguish genetically-encoded evolution-computed strategies from consciously brain-computed strategies from unconsciously brain-computed strategies. And I agree it would be nice to have clean terms to distinguish these, and to use those terms when we intend to speak primarily about one of these categories.

Most actions we take, however, probably have substantial contributions from all three sources, and we will often want to talk about human strategies even when we don't know much about these relative contributions. So surely we also want to have generic words that don't make this distinction, and these would probably be the most commonly used words out of these four sets.

Comment by Robin_Hanson2 on Cynicism in Ev-Psych (and Econ?) · 2009-02-12T13:50:16.000Z · LW · GW

My latest post hopefully clarifies my position here.

Comment by Robin_Hanson2 on Cynicism in Ev-Psych (and Econ?) · 2009-02-12T02:11:23.000Z · LW · GW

Eliezer, when I said "humans evolved tendencies ... to consciously believe that such actions were done for some other more noble purposes" I didn't mean that we create complex mental plans to form such mistaken beliefs. Nor am I contradicting your saying "he wants you to understand his logic puzzles"; that may well be his conscious intention.

Comment by Robin_Hanson2 on Cynicism in Ev-Psych (and Econ?) · 2009-02-11T20:19:39.000Z · LW · GW

Eliezer, you have misunderstood me if you think I typically suggest "you told yourself a self-deceiving story about virtuously loving them for their mind" or that I say "no human being was ever attracted to a mate's mind, nor ever wanted to be honest in a business transaction and not just signal honesty." I suspect we tend to talk about different levels of causation; I tend to focus on more distal causes while you focus on more proximate causes. I'm also not sure you understand what I mean by "signaling."

Comment by Robin_Hanson2 on Informers and Persuaders · 2009-02-11T03:11:11.000Z · LW · GW

Eliezer, why so reluctant to analyze an actual equilibrium, rather than first-order strategies that ignore so many important effects? My claims were about real equilibrium behavior, not some hypothetical world of clueless caricatures. And why emphasize the few "writing" experts you've read over the vast numbers of teachers of writing style in law, engineering, accounting, academia, etc.?

Comment by Robin_Hanson2 on (Moral) Truth in Fiction? · 2009-02-10T02:42:04.000Z · LW · GW

Eliezer, as I indicate in my new post, the issue isn't so much whether you the author judge that some fiction would help inform readers about morals, but whether typical readers can reasonably trust your judgment in such things, relative to the average propaganda content of authors writing apparently similar moral-quandary stories.

Comment by Robin_Hanson2 on (Moral) Truth in Fiction? · 2009-02-10T01:33:31.000Z · LW · GW

Yvain, I warned against granting near-thought virtues to fictional detail here. I doubt Uncle Tom's Cabin persuaded many slaveholders against slavery; I expect well-written, well-recommended anti-slavery fiction served more to signal to readers where fashionable opinion was moving.

Comment by Robin_Hanson2 on Epilogue: Atonement (8/8) · 2009-02-07T16:03:36.000Z · LW · GW

Clearly, Eliezer should seriously consider devoting himself more to writing fiction. But it is not clear to me how this helps us overcome biases any more than any fictional moral dilemma. Since people are inconsistent but reluctant to admit that fact, their moral beliefs can be influenced by which moral dilemmas they consider in what order, especially when written by a good writer. I expect Eliezer chose his dilemmas in order to move readers toward his preferred moral beliefs, but why should I expect those to be better moral beliefs than those of all the other authors of fictional moral dilemmas? If I'm going to read a literature that might influence my moral beliefs, I'd rather read professional philosophers and other academics making more explicit arguments. In general, I trust explicit academic argument better than implicit fictional "argument."

Comment by Robin_Hanson2 on Value is Fragile · 2009-01-29T13:14:48.000Z · LW · GW

I have read and considered all of Eliezer's posts, and still disagree with him on this, his grand conclusion. Eliezer, do you think the universe was terribly unlikely, and therefore terribly lucky, to have coughed up human-like values rather than some other values? Or is it only in the stage after ours that such rare good values would be unlikely to exist?

Comment by Robin_Hanson2 on OB Status Update · 2009-01-27T13:21:46.000Z · LW · GW

To be clear, Eliezer is developing a new website and will tentatively use his editor status here to promote some posts from there to here; whether and how long that continues will depend on the quality and relevance of those posts.

Comment by Robin_Hanson2 on Investing for the Long Slump · 2009-01-22T15:11:33.000Z · LW · GW

People who answer survey questions seem to consistently display a pessimism bias about these large-scale trends, and the equity premium puzzle can also be interpreted as people being unreasonably pessimistic about such things. So I find it hard to believe that people tend to be too optimistic about such things. If you really want to bet on the low tail of the global distribution, I guess you should listen to survivalists. And if you think the US will be especially down, why not invest in foreign places you don't think will be so down?

Comment by Robin_Hanson2 on Failed Utopia #4-2 · 2009-01-21T13:39:18.000Z · LW · GW

You forgot to mention: two weeks later he and all other humans were in fact deliriously happy. We can see that at this moment he did not want to later be that happy, if it came at this cost. But what will he think a year or a decade later?

Comment by Robin_Hanson2 on In Praise of Boredom · 2009-01-18T12:57:33.000Z · LW · GW

Are you sure this isn't the Eliezer concept of boring, instead of the human concept? There seem to be quite a few humans who are happy to keep winning using the same approach day after day, year after year. They keep getting paid well, getting social status, money, sex, etc. To the extent they want novelty, it is because such novelty is a sign of social status - a new car every year, a new girl every month, a promotion every two years, etc. It is not because they expect or want to learn something from it.

Comment by Robin_Hanson2 on Getting Nearer · 2009-01-17T15:13:28.000Z · LW · GW

Thank you for the praise! I'll post soon on fiction as near vs. far thinking.

Comment by Robin_Hanson2 on Eutopia is Scary · 2009-01-12T14:04:24.000Z · LW · GW

It seems to me that you should take the surprising seductiveness of your imagined world that violated your abstract sensibilities as evidence that calls those sensibilities into question. I would have encouraged you to write the story, or at least write up a description of the story and what about it seemed seductive. I do think I have tried to describe how my best estimates of the future seem shocking to me, and that I would be out of place there in many ways.

Comment by Robin_Hanson2 on Growing Up is Hard · 2009-01-04T13:40:04.000Z · LW · GW

It seems pretty obvious that time-scaling should work - just speed up the operation of all parts in the same proportion. A good bet is probably size-scaling, adding more parts (e.g. neurons) in the same proportion in each place, and then searching in the space of different relative sizes of each place. Clearly evolution was constrained in the speed of components and in the number of parts, so there is no obvious evolutionary reason to think such changes would not be functional.

Comment by Robin_Hanson2 on Dunbar's Function · 2009-01-01T16:01:25.000Z · LW · GW

Yeah Michael, what Eliezer said.

Comment by Robin_Hanson2 on Dunbar's Function · 2008-12-31T20:26:04.000Z · LW · GW

Even if Earth ends in a century, virtually everyone in today's world is influential in absolute terms. Even if 200 folks do the same sort of work in the same office, they don't do the exact same work, and usually a person wouldn't be there or be paid if no one thought their work made any difference. Even now you can identify your mark, but it is usually tedious to trace it out, and few have the patience for it.

Comment by Robin_Hanson2 on A New Day · 2008-12-31T19:32:57.000Z · LW · GW

I love it!

Comment by Robin_Hanson2 on Dunbar's Function · 2008-12-31T19:32:13.000Z · LW · GW

Virtually everyone in today's world is influential in absolute terms, and should be respected for their unique contribution. The problem is those eager to be substantially influential in percentage terms.

Comment by Robin_Hanson2 on Dunbar's Function · 2008-12-31T13:03:53.000Z · LW · GW

Yes, humans are better at dealing with groups of size 7 and 50, but I don't think that has much to do with your complaint. You are basically noticing that you would probably be the alpha male in a tribe of 50, ruling all you surveyed, and wouldn't that be cool. Or that in a world of 5000 people you'd be one of the top 100, and everyone would know your name, and wouldn't that be cool. Even if we had better ingrained tools for dealing with larger social groups, you'd still have to face the fact that, as a small creature in a vast social world, most such creatures can't expect to be very widely known or influential.

Comment by Robin_Hanson2 on Can't Unbirth a Child · 2008-12-30T01:54:21.000Z · LW · GW

I agree with Phil; all else equal I'd rather have whatever takes over be sentient. The moment to pause is when you make something that takes over, not so much when you wonder if it should be sentient as well.

Comment by Robin_Hanson2 on Amputation of Destiny · 2008-12-29T23:20:04.000Z · LW · GW

I agree with Unknown. It seems that Eliezer's intuitions about desirable futures differ greatly from many of the rest of us here at this blog, and most likely even more from the rest of humanity today. I see little evidence that we should explain this divergence as mainly due to his "having moved further toward reflective equilibrium." Without a reason to think he will have vastly disproportionate influence, I'm having trouble seeing much point in all these posts that simply state Eliezer's intuitions. It might be more interesting if he argued for those intuitions, engaging with existing relevant literatures, such as in moral philosophy. But what is the point of just hearing his wish lists?

Comment by Robin_Hanson2 on Can't Unbirth a Child · 2008-12-28T21:41:54.000Z · LW · GW

Most of our choices have this sort of impact, just on a smaller scale. If you contribute a real child to the continuing genetic evolution process, if you contribute media articles that influence future perceptions, if you contribute techs that change future society, you are in effect adding to and changing the sorts of people there are and what they value, and doing so in ways you largely don't understand.

A lot of futurists seem to come to a similar point, where they see themselves on a runaway freight train, where no one is in control, knows where we are going, or even knows much about how any particular track switch would change where we end up. They then suggest that we please please slow all this change down so we can stop and think. But that doesn't seem a remotely likely scenario to me.

Comment by Robin_Hanson2 on Nonsentient Optimizers · 2008-12-27T14:10:35.000Z · LW · GW

You've already said the friendly AI problem is terribly hard, and there's a large chance we'll fail to solve it in time. Why then do you keep adding these extra minor conditions on what it means to be "friendly", making your design task all that harder? A friendly AI that was conscious and created conscious simulations to figure things out would still be pretty friendly overall.

Comment by Robin_Hanson2 on Nonperson Predicates · 2008-12-27T02:18:02.000Z · LW · GW

I'm having trouble distinguishing problems you think the friendly AI will have to answer from problems you think you will have to answer to build a friendly AI. Surely you don't want to have to figure out answers for every hard moral question just to build it, or why bother to build it? So why is this problem a problem you will have to figure out, vs. a problem it would figure out?

Comment by Robin_Hanson2 on Devil's Offers · 2008-12-26T13:50:21.000Z · LW · GW

Eliezer, this post seems to me to reinforce, not weaken, a "God to rule us all" image. Oh, and among the various clues that might indicate to me that someone would make a good choice with power, the ability to recreate that power from scratch does not seem a particularly strong clue.

Comment by Robin_Hanson2 on Devil's Offers · 2008-12-25T16:25:12.000Z · LW · GW

What is the point of trying to figure out what your friendly AI will choose in each standard difficult moral choice situation, if in each case the answer will be "how dare you disagree with it since it is so much smarter and more moral than you?" If the point is that your design of this AI will depend on how well various proposed designs agree with your moral intuitions in specific cases, well then the rest of us have great cause to be concerned about how much we trust your specific intuitions.

James is right; you only need one moment of "weakness" to approve a protection against all future moments of weakness, so it is not clear there is an asymmetric problem here.

Comment by Robin_Hanson2 on Harmful Options · 2008-12-25T03:28:13.000Z · LW · GW

The hard question is: who do you trust to remove your choices, and are they justified in doing so anyway even if you don't trust them to do so?

Comment by Robin_Hanson2 on Imaginary Positions · 2008-12-23T17:50:46.000Z · LW · GW

Honestly, almost everything the ordinary person thinks economists think is wrong. Which is what makes teaching intro to econ such a challenge. The main message here is to realize you don't know nearly as much as you might think about what other groups out there think, especially marginalized and colorful groups. Doubt everything you think you know about the beliefs of satanists, theologians, pedophiles, free-lovers, marxists, mobsters, futurists, UFO folk, vegans, and yes economists.

Comment by Robin_Hanson2 on Living By Your Own Strength · 2008-12-22T01:22:55.000Z · LW · GW

But how much has your intuitive revulsion at your dependence on others, your inability to do everything by yourself, biased your beliefs about what options you are likely to have? If wishes were horses, you know. It is not clear what problems you can really blame on each of us not knowing everything we all know; to answer that you'd have to be clearer on what counterfactuals you are considering.

Comment by Robin_Hanson2 on Visualizing Eutopia · 2008-12-16T23:59:35.000Z · LW · GW

Marcello, I won't say any particular possible scenario isn't worth thinking about; the issue is just its relative importance.

Carl, yes of course singletons are not very unlikely. I don't think I said the other claim you attribute to me.

Comment by Robin_Hanson2 on Visualizing Eutopia · 2008-12-16T19:15:21.000Z · LW · GW

Why shouldn't we focus on working out our preferences in more detail for the scenarios we think most likely? If I think it rather unlikely that I'll have a genie who can grant three wishes, why should I work hard to figure out what those wishes would be? If we disagree about what scenarios are how likely, we will of course disagree about where preferences should be elaborated in the most detail.

Comment by Robin_Hanson2 on Not Taking Over the World · 2008-12-16T16:32:51.000Z · LW · GW

Wei, yes I meant "unlikely." Bo, you and I have very different ideas of what "logical" means. V.G., I hope you will comment more.

Comment by Robin_Hanson2 on Not Taking Over the World · 2008-12-16T01:02:39.000Z · LW · GW

Eliezer, I'd advise no sudden moves; think very carefully before doing anything. I don't know what I'd think after thinking carefully, as otherwise I wouldn't need to do it. Are you sure there isn't some way to delay thinking on your problem until after it appears? Having to have an answer now, when it seems an unlikely problem, is very expensive.