Comments

Comment by yvain2 on The Pascal's Wager Fallacy Fallacy · 2009-03-19T20:46:11.000Z · score: 2 (2 votes)

One more thing: Eliezer, I'm surprised to be on the opposite side from you here, because it's your writings that convinced me that a catastrophic singularity, even one from the small subset of catastrophic singularities that keep people alive, is much more likely than a good one. If you tell me I'm misinterpreting you, and you assign high probability to the singularity going well, I'll update my opinion. (Also, would the high probability be due solely to SIAI, or do you think there's a decent chance of things going well even if your own project fails?)

Comment by yvain2 on The Pascal's Wager Fallacy Fallacy · 2009-03-19T20:30:45.000Z · score: 4 (4 votes)

"I'm curious to know how you know that in advance? Isn't it like a kid making a binding decision on its future self? As Aubrey says, (I'm paraphrasing): "If I'm healthy today and enjoying my life, I'll want to wake up tomorrow. And so on." You live a very long time one day at a time."

Good point. I usually trust myself to make predictions of this sort. For example, I predict that I would not want to eat pizza every day for a year, even though I currently like pizza, and this kind of prediction has worked in the past. But I should probably think harder before becoming certain I can make the same prediction about something as complicated as my life. Many of the very elderly people I know claim they're tired of life and just want to die already, and I predict I have no special immunity to that phenomenon that will let me hold out forever. But I don't know how much of it is caused by literally being bored with what life has to offer, and how much by decrepitude and the inability to do interesting things.

"Evil is far harder than good or mu, you have to get the future almost right for it to care about people at all, but somehow introduce a sustainable evil twist to it."

Across all of human society-space, not just the societies that have existed but every possible combination of social structures that could exist, I interpret only a vanishingly small number (the ones that contain large amounts of freedom, for example) as non-evil. Looking over all of human history, the number of societies I would have enjoyed living in is pretty small. And I'm not just talking about Dante's Hell here: even modern-day Burma or Saudi Arabia, or Orwell's Oceania, would be awful enough to make me regret not dying when I had the chance.

I don't think it's so hard to get a Singularity that leaves people alive but is still awful. If the problem is a programmer who tried to give the AI a sense of morality but ended up using a fake utility function, or just plain screwed up, we might well end up with a With Folded Hands scenario or Parfit's Mere Addition Paradox (I remember Eliezer once saying: imagine an AI that understands everything perfectly except freedom). And that's just the complicated failure; the simple one is that the government of Communist China develops the Singularity AI and programs it to do whatever they say.

"Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else."

I think that's false. In most cases I imagine, torturing people is not the terminal value of the dystopia, just something it does to people who happen to be around. In a pre-singularity dystopia, torture will be a means of control, and the regime won't have the resources to 'create' people anyway (except the old-fashioned way). In a post-singularity dystopia, resources won't much matter, and the AI is more likely to be stuck under injunctions to protect existing people than to be trying to create new ones (unless the problem is the Mere Addition Paradox). Though I admit it would be a very specific subset of rogue AIs that view frozen heads as "existing people".

"Though I hesitate to point this out, the same logic against cryonic suspension also implies that egoists, but not altruists, should immediately commit suicide in case someone is finishing their AI project in a basement, right now. A good number of arguments against cryonics also imply suicide in the present."

I'm glad you hesitated to point it out. Luckily, I'm not as rationalist as I like to pretend :) More seriously, I currently have a lot of things preventing me from suicide: a family, a debt to society to pay off, and the ability to funnel enough money to various good causes to shape the future myself instead of passively experiencing it. And, less rationally but still powerfully, I have a pretty strong self-preservation urge that would probably kick in if I tried anything. Someday, when the Singularity seems very near, I really will have to think about this more closely. If I thought a dictator were about to succeed at an AI project, or if I'd heard the specifics of a project's code and the moral system seemed likely to collapse, I do think I'd be sitting there with a gun to my head and my finger on the trigger.

Comment by yvain2 on The Pascal's Wager Fallacy Fallacy · 2009-03-18T01:43:43.000Z · score: 7 (7 votes)

"There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities."

That doesn't seem at all obvious to me. First, our current society doesn't allow people to die, although today law enforcement is spotty enough that they can't really prevent it. I assume far future societies will have excellent law enforcement, including mind reading and total surveillance (unless libertarians seriously get their act together in the next hundred years). I don't see any reason why the taboo on suicide must disappear. And any society advanced enough to revive me has by definition conquered death, so I can't just wait it out and die of old age. I place about 50% odds on not being able to die again after I get out.

I'm also less confident the future wouldn't be a dystopia. Even in the best-case scenario the future is going to be scary through sheer cultural drift (see: legalized rape in Three Worlds Collide). I don't have to tell you that it's easier to get a Singularity that goes horribly wrong than one that goes just right, and even if we restrict the possibilities to those where I get revived instead of turned into paperclips, they could still be pretty grim. (What about some well-intentioned person hard-coding "promote and protect human life" into an otherwise poorly designed AI, and ending up with something that resurrects the cryopreserved... and then locks them in little boxes for all eternity so they don't consume unnecessary resources?) And then there are the standard fears of some dictator or fundamentalist theocracy, only this time armed with mind control and total surveillance, so there's no chance of overthrowing them.

The deal-breaker is that I really, really don't want to live forever. I might enjoy living a thousand years, but not forever. You could change my mind if you had a utopian post-singularity society that completely mastered Fun Theory. But when I compare the horrible possibility of being forced to live forever, either in a dystopia or in a world no better or worse than our own, to the good possibility of living between a thousand years and forever in a Fun Theory utopia that can keep me occupied... well, the former seems both more probable and more extreme.

Comment by yvain2 on Pretending to be Wise · 2009-02-22T23:24:46.000Z · score: 6 (6 votes)

facepalm And I even read the Sundering series before I wrote that :(

Coming up with narratives that turn the Bad Guys into Good Guys could make good practice for rationalists, along the lines of Nick Bostrom's Apostasy post. Obviously I'm not very good at it.

GeorgeNYC, very good points.

Comment by yvain2 on Fairness vs. Goodness · 2009-02-22T22:50:00.000Z · score: 5 (7 votes)

Wealth redistribution in this game wouldn't have to be communist. Depending on how you set up the analogy, it could also be capitalist.

Call JW the capitalist and AA the worker. JW is the one producing wealth, but he needs AA's help to do it. Call the under-the-table wealth redistribution deals AA's "salary".

The worker can always cooperate, in which case he makes some money but the capitalist makes more.

Or he can threaten to defect unless the capitalist raises his salary - he's quitting his job or going on strike for higher pay.

(To perfect the analogy with capitalism, make two changes. First, the capitalist makes zero without the worker's cooperation. Second, the worker makes zero in all categories, and can only make money by entering into deals with the capitalist. But now it's not a Prisoner's Dilemma at all - it's the Ultimatum Game.)

IANAGT, but I bet the general rule for this class of game is that the worker's salary should depend a little on how much the capitalist can make without workers, how much the worker can make without capitalists, and what the marginal utility structure looks like - but mostly on their respective stubbornness and how much extra payoff having the worker's cooperation gives the capitalist.

In the posted example, AA's "labor" brings JW from a total of 50 to a total of 100. Perhaps, if we ignore marginal utilities and they're both equally stubborn (and they both know they're both equally stubborn, and so on), JW will be best off paying AA 25 for his cooperation, leading to an equal 75-75 distribution of wealth?
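One way to formalize the "equally stubborn" intuition is a symmetric bargaining split: each side keeps its walk-away payoff, and the surplus created by cooperating gets divided evenly. Here's a toy Python sketch of that rule; the assumption that AA also makes 50 on his own (so that cooperation creates a combined 150) is my reading of the example, not something stated in the post:

```python
def equal_surplus_split(total, d_capitalist, d_worker):
    """Symmetric bargaining: each player keeps their disagreement
    (walk-away) payoff, and the surplus from cooperating is split
    evenly between them."""
    surplus = total - d_capitalist - d_worker
    return d_capitalist + surplus / 2, d_worker + surplus / 2

# JW makes 100 with AA's cooperation and 50 without; assume AA
# makes 50 either way, so cooperation creates a surplus of 50.
print(equal_surplus_split(150, 50, 50))  # → (75.0, 75.0)
```

Under those (assumed) numbers, the rule reproduces the 75-75 outcome: JW pays AA exactly 25 for his cooperation.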

[nazgul, a warning. I think I might disagree with you about some politics. Political discussions in blogs are themselves prisoner's dilemmas. When we all cooperate and don't post about politics, we are all happy. When one person defects and talks about politics, he becomes happier because his views get aired, but those of us who disagree with him get angry. The next time you post a political comment, I may have to defect as well and start arguing with you, and then we're going to get stuck in the (D,D) doldrums.]

Comment by yvain2 on Pretending to be Wise · 2009-02-21T02:28:19.000Z · score: 6 (6 votes)

Darnit TGGP, you're right. Right. From now on I use Lord of the Rings for all "sometimes things really are black and white" examples. Unless anyone has some clever reason why elves are worse than Sauron.

Comment by yvain2 on Pretending to be Wise · 2009-02-20T00:32:24.000Z · score: 15 (16 votes)

[sorry if this is a repost; my original attempt to post this was blocked as comment spam because it had too many links to other OB posts]

I've always hated that Dante quote. The hottest place in Hell is reserved for brutal dictators, mass murderers, torturers, and people who use flamethrowers on puppies - not for the Swiss.

I came to the exact opposite conclusion when pondering the Israeli-Palestinian conflict. Most of the essays I've seen in newspapers and on bulletin boards are impassioned pleas to designate one side as the Evildoers and the other as the Brave Heroic Resistance, by citing who stole whose land first, whose atrocities were slightly less provoked, which violations of which cease-fire were dastardly betrayals and which were necessary pre-emptive actions, et cetera.

Not only is this issue so open to bias that we have little hope of getting to the truth, but I doubt there's much truth to be attained at all. Since "policy debates should not appear one-sided" and "our enemies are not innately evil", it seems pretty likely that they're two groups of people who both are doing what they honestly think is right and who both have some good points.

This isn't an attempt to run away from the problem; it's the first step toward solving the real problem. The real problem isn't "who's the hero and who's the terrorist scumbag?" but "search solution-space for the solution that leads to the least suffering and the most peace and prosperity in the Middle East." There is a degree to which finding out who the evildoer is would be useful, so we can punish them as a deterrent, but it's a pretty small degree, and the amount of energy people spend trying to determine it is completely out of proportion to the minimal gains it might produce.

And "how do we minimize suffering in the Middle East?" may be an easier question than "who's to blame?" It's about distributing land and resources to avoid people being starved or killed or oppressed - more a matter for economists and political scientists than for heated Internet debate. I've met conservatives who loathe the Palestinians and liberals who hate all Israelis who, when asked, supported exactly the same version of the two-state solution, but who'd never realized they agreed because they'd never gotten as far as "solution" before.

My defense of neutrality, then, would be something like this: human beings have the unfortunate tendency not to think of an issue as "finding the best solution in solution-space" but as "let's make two opposing sides at the two extremes, who both loathe each other with the burning intensity of a thousand suns". The issue then becomes "Which of these two sides is the Good and True and Beautiful, and which is Evil and Hates Our Freedom?" Thus the Democrats versus the Republicans or the Communists versus the Objectivists. I'd be terrified if any of them got one hundred percent control over policy-making. Thus, the Wise try to stay outside of these two opposing sides in order to seek the best policy solution in solution-space without being biased or distracted by the heroic us vs. them drama - and to ensure that both sides will take their proposed solution seriously without denouncing them as an other-side stooge.

A "neutral" of this sort may not care who started it, may not call one side "right" or "wrong", may claim to be above the fray, may even come up with a solution that looks like a "compromise" to both sides, but isn't abdicating judgment or responsibility.

Not that taking a side is never worth it. The Axis may have had one or two good points about the WWI reparations being unfair and such, but on the whole the balance of righteousness in WWII was so clearly on the Allies' side that the most practical way to save the world was to give the Allies all the support you could. It's always a trade-off between how ideal a solution is and how likely it is to be implemented.

Comment by yvain2 on Against Maturity · 2009-02-19T01:01:03.000Z · score: 15 (15 votes)

"To be concerned about being grown up, to admire the grown up because it is grown up, to blush at the suspicion of being childish; these things are the marks of childhood and adolescence. And in childhood and adolescence they are, in moderation, healthy symptoms. Young things ought to want to grow. But to carry on into middle life or even into early manhood this concern about being adult is a mark of really arrested development. When I was ten, I read fairy tales in secret and would have been ashamed if I had been found doing so. Now that I am fifty I read them openly. When I became a man I put away childish things, including the fear of childishness and the desire to be very grown up." - C.S. Lewis

Comment by yvain2 on An Especially Elegant Evpsych Experiment · 2009-02-16T22:59:26.000Z · score: 7 (7 votes)

Bruce and Waldheri, you're being unfair.

You're interpreting this as "some scientists got together one day and asked Canadians about their grief just to see what would happen, then looked for things to correlate it with, and after a bunch of tries came across some numbers involving !Kung tribesmen reproductive potential that fit pretty closely, and then came up with a shaky story about why they might be linked and published it."

I interpret it as "some evolutionary psychologists were looking for a way to confirm evolutionary psychology, predicted that grief at losing children would be linked to reproductive potential in hunter-gatherer tribes, and ran an experiment to see if this was true. They discovered that it was true, and considered their theory confirmed."

I can't prove my interpretation is right because the paper is gated, but in my support, I know of many studies very similar to this one that were done specifically to confirm evo psych's predictions (for example, The Adapted Mind is full of them). And most scientists don't have enough free time to go around doing studies of people's grief for no reason and then comparing it to random data sets until they get a match, nor would journals publish it if they did. And this really is exactly the sort of elegant, testable experiment a smart person would think up if ze was looking for ways to test evolutionary theory.

It's true that correlation isn't causation and so on et cetera, but if their theory really did predict the results beforehand when other theories couldn't, we owe them a higher probability for their theory upon learning of their results.

Comment by yvain2 on (Moral) Truth in Fiction? · 2009-02-11T00:16:37.000Z · score: 2 (2 votes)

@Robin: Thank you. Somehow I missed that post, and it was exactly what I was looking for.

@Vladimir Nesov: I agree with everything you said except for your statement that fiction is a valid argument, and your supporting analogy to mathematical proof.

Maybe the problem is the two different meanings of "valid argument". First, the formal meaning where a valid argument is one in which premises are arranged correctly to prove a conclusion eg mathematical proofs and Aristotelian syllogisms. Well-crafted policy arguments, cost-benefit analyses, and statistical arguments linked to empirical studies probably also unpack into this category.

And then the colloquial meaning in which "valid argument" just means the same as "good point", eg "Senator Brown was implicated in a scandal" is a "valid argument" against voting for Senator Brown. You can't make a decision based on that fact alone, but you can include it in a broader decision-making process.

The problem with the second definition is that it makes "Slavery increases cotton production" a valid argument for slavery, which invites confusion. I'd rather say that the statement about cotton production is a "good point" (even better: "truthful point") and then call the cost-benefit analysis where you eventually decide "increased cotton production isn't worth the suffering, and therefore slavery is wrong" a "valid argument".

I can't really tell from the original post in which way Eliezer is using "valid argument". I assumed the first way, because he uses the phrase "valid form of argument" a few times. But re-reading the post, maybe I was premature. But here's my opinion:

Fiction isn't the first type of valid argument because there are no stated premises, no stated conclusion, and no formal structure. Or, to put it another way, on what grounds could you claim that a work of fiction was an invalid argument?

Fiction can convincingly express the second type of valid argument (good point), and this is how I think of Uncle Tom's Cabin. "Slavery is bad because slaves suffer" is a good point against slavery, and Uncle Tom's Cabin is just a very emotionally intense way of making this point that is more useful than simple assertion would be for all the reasons previously mentioned.

My complaint in my original post is that fiction tends to focus the mind on a single good point with such emotional intensity that it can completely skew the rest of the cost-benefit analysis. For example, the hypothetical sweatshop book completely focuses the mind on the good point that people can suffer terribly while working in a sweatshop. Anyone who reads the sweatshop book is in danger of having this one point become so salient that it makes a "valid argument" of the first, more formal type much more difficult.

Comment by yvain2 on (Moral) Truth in Fiction? · 2009-02-09T23:54:34.000Z · score: 10 (10 votes)

Uncle Tom's Cabin is not a valid argument that slavery is wrong. "My mirror neurons make me sympathize with a person whose suffering is caused by Policy X" to "Policy X is immoral and must be stopped" is not a valid pattern of inference.

Consider a book about the life of a young girl who works in a sweatshop. She's plucked out of a carefree childhood, tyrannized and abused by greedy bosses, and eventually dies of work-related injuries incurred because it wasn't cost-effective to prevent them. I'm sure this book exists, though I haven't personally come across it. And I'm sure this book would provide just as emotionally compelling an argument for banning sweatshops as Uncle Tom's Cabin did for banning slavery.

But the sweatshop issue is a whole lot more complex than that, right? And the arguments in favor of sweatshops are more difficult to put into novel form, or less popular among the people who write novels, or simply not mentioned in that particular book, or all three.

The problem with fiction as evidence is that it's like the guy who says, "It was negative thirty degrees last night, worst snowstorm in fifty years, so how come them liberals are still talking about 'global warming'?" It cuts off a tiny slice of the universe and invites you to use it to judge the entire system.

But I agree that fiction is not solely a tool of the dark side. Eliezer's comment about it activating the Near mode thinking struck me as the most specifically useful sentence in the entire post, and I would like to see more on that. I would also add one other benefit: fiction drags you into the author's mindset for a while against your will. You cannot read the book about the poor girl in the sweatshops without - at least a little - cheering on the labor unions and hating the greedy bosses, and this is true no matter how good a capitalist you may be in real life. It confuses whatever part of you is usually building a protective shell of biases around your opinion, and gets you comfortable with living on the opposite side of the argument. If the other side of the argument is a more stable attractor, you might even stay there.

...that wasn't a very formal explanation, but it's the best way I can put it right now.

Comment by yvain2 on Three Worlds Decide (5/8) · 2009-02-03T12:35:22.000Z · score: 19 (14 votes)

Assuming the Lord Pilot was correct in saying that, without the nova star, the Happy Fun People would never be able to reach the human starline network...
...and assuming it's literally impossible to travel FTL without a starline...
...and assuming the only starline to the nova star was the one they took...
...and assuming Huygens, described as a "colony world", is sparsely populated, and either can be evacuated or is considered "expendable" compared to the alternatives...

...then blow up Huygens' star. Without the Huygens-Nova starline, the Happy People won't be able to cross into human space, but the Happy-Nova-Babyeater starline will be unaffected. The Happy People can take care of the Babyeaters, and humankind will be safe. For a while.

Still not sure I'd actually take that solution. It depends on how populated Huygens is and how confident I am the Super Happy People can't come up with alternate transportation, and I'm also not entirely opposed to the Happy People's proposal. But:

If I had a comm link to the Happy People, I'd also want to hear their answer to the following line of reasoning: one ordinary nova in a single galaxy just attracted three separate civilizations. That means intelligent life is likely to be pretty common across the universe, and our three somewhat-united species are likely to encounter far more of it in the years to come. If the Happy People keep adjusting their (and our) utility functions each time we meet a new intelligent species, then by the millionth species there's not going to be a whole lot remaining of the original Super Happy way of thinking - or the human way of thinking, for that matter. If they're so smart, what's their plan for when that happens?

If they answer "We're fully prepared to compromise our and your utility functions limitlessly many times for the sake of achieving harmonious moralities among all forms of life in the Universe, and we predict each time will involve a change approximately as drastic as making you eat babies," then it will be a bad day to be a colonist on Huygens.

Comment by yvain2 on Building Weirdtopia · 2009-01-13T21:01:52.000Z · score: 42 (44 votes)

Political Weirdtopia: Citizens decide it is unfair for a democracy to count only the raw number of people who support a position without considering the intensity with which they believe it. Of course, one can't simply ask people to self-report the intensity with which they believe a position on their ballot, so stronger measures are required. Voting machines are redesigned to force voters to pull down a lever for each issue/candidate. The lever delivers a small electric shock, increasing in intensity each second the voter holds it down. The number of votes a person gets for a particular issue or candidate is a function of how long they keep holding down the lever.

In (choose one: more/less) enlightened sects of this society, the electric shock is capped at a certain level to avoid potential fatalities among overzealous voters. But in the (choose one: more/less) enlightened sects, voters can keep pulling down on the lever as long as they can stand the pain and their heart keeps working. Citizens consider this a convenient and entirely voluntary way to purge fanaticism from the gene pool.

The society lasts for several centuries before being taken over by a tiny cabal of people with Congenital Insensitivity to Pain Disorder.

Comment by yvain2 on Dunbar's Function · 2008-12-31T07:10:32.000Z · score: 1 (1 votes)

"Though it's a side issue, what's even more... interesting... is the way that our brains simply haven't updated to their diminished power in a super-Dunbarian world. We just go on debating politics, feverishly applying our valuable brain time to finding better ways to run the world, with just the same fervent intensity that would be appropriate if we were in a small tribe where we could persuade people to change things."

Thank you. That's one of those insights that makes this blog worth reading.

Comment by yvain2 on Thanksgiving Prayer · 2008-11-28T14:08:47.000Z · score: 12 (12 votes)

"O changeless and aeternal physical constants, we give thanks to thee for existing at values such that the Universe, upon being set in motion and allowed to run for thirteen billion years, give or take an eon, naturally tends toward a state in which we are seated here tonight with turkey, mashed potatoes, and cranberry sauce in front of us."

Or "O natural selection, thou hast adapted turkeys to a mostly predation-free environment, making them slow, weak, and full of meat. In contrast, thou hast adapted us humans to an environment full of dangers and a need for complex decisions, giving us cognitive abilities that we could eventually use to discover things like iron working. Therefore we thank thee, o natural selection, that we may slaughter and consume arbitrary numbers of turkeys at our pleasure without fear of harm or retribution. Furthermore, we thank thee for giving us an instinctual sense of morality strong enough that we feel compelled to ceremonially express our gratitude to all those who have helped us over the past year, yet not so strong that we dwell too much on what's happening to the turkey when we do so. Amen."

Comment by yvain2 on Whither OB? · 2008-11-18T21:09:38.000Z · score: 0 (0 votes)

I don't know what's up with people who say they still haven't read the archives. When I discovered OB, I spent all my free time for two weeks reading the archives straight through :)

I support Roland's idea. A few Eliezer posts per week, plus an (official, well-publicized, Eliezer-and-Robin-supported) forum where the rest of us could discuss those posts and bring up issues of our own. Certain community leaders (hopefully Eliezer and Robin if they have time) picking out particularly interesting topics and comments on the board and telling the posters to write them up in more depth as blog posts. Even if people rejected community-based blog posting, just having a forum to keep the Overcoming Bias community together would be worthwhile.

I'm more comfortable with BBSs than complicated upvote systems like Digg or Reddit. The ones I've seen tend toward groupthink, fifty topics on the same issue, and inane "Upvote if you don't like President Bush" threads.

There are some interesting ideas floating around on preventing bulletin boards from degenerating. Require everyone use their real name, or some kind of initial investment of time or money to register an account, or have a karma system.

Kind of off-topic, but in case this is one of my last chances, I want to thank Robin and Eliezer and all the other writers. I usually only comment when I disagree with something, so it's probably not obvious, but I am in awe of the intelligence and clear thinking you display. You have changed my outlook on life, logic, and the world.

Comment by yvain2 on Selling Nonapples · 2008-11-15T02:48:20.000Z · score: 13 (13 votes)

I don't know anything about the specific AI architectures in this post, but I'll defend non-apples. If one area of design-space ranks very high in the search ordering but very low in the preference ordering (ie it looks very attractive but is in fact useless), then telling people to avoid it is helpful beyond the seemingly low level of optimization power it provides.

A metaphor: religious beliefs constitute a very small and specific area of belief-space, but that area originally looks very attractive. You could spend your whole life searching within it and never get anywhere. Saying "be atheist!" provides a trivial amount of optimization power, but that doesn't mean it's of trivial importance in the search for correct beliefs. Another metaphor: if you're stuck in a ditch, the majority of the effort it takes to journey a mile will be the ten vertical meters it takes to climb to the top.

Saying "not X" doesn't make people go for all non-X equally. It makes them apply their intelligence to the problem again, ignoring the trap at X that they would otherwise fall into. If the problem is pretty easy once you stop trying to sell apples, then "sell non-apples" might provide most of the effective optimization power you need.

Comment by yvain2 on Bay Area Meetup: 11/17 8PM Menlo Park · 2008-11-13T16:24:38.000Z · score: 2 (2 votes)

Robin Gane-McCalla is an Overcoming Bias reader? I knew him back in college, but haven't talked to him in years. It really is a small world.

Comment by yvain2 on Hanging Out My Speaker's Shingle · 2008-11-06T15:46:00.000Z · score: 2 (1 votes)

"Why do people, including you apparently, always hide the price for this kind of thing? Market segmentation? Trying to get people to mentally commit before they find out how expensive it is? Maintaining a veneer of upper-class distaste for the crassness of money (or similarly, a "if you have to ask how much it is, you can't afford it" type thing)?"

I agree with that, and I have a policy of never buying from anyone who does this.

Often I don't know how much something would cost even to an order of magnitude; for example, I have no clue whether Eliezer charged Jane Street closer to $1,000 or $10,000 for his talk. This is probably because I'm not a finance company talk arranger, but I have the same problem with things that are targeted at normal people like me (vacation packages especially). I find (though I can't explain this) that I very rarely bother asking someone who provides no price information for a quote.

Even a "my base fee is $2,000, but varies based on this and this" or a "My fee is in the low four figures" would be better than "my fee is low".

Comment by yvain2 on BHTV: Jaron Lanier and Yudkowsky · 2008-11-04T00:08:00.000Z · score: 2 (2 votes) · LW · GW

Disappointing. I kept on waiting for Eliezer to say some sort of amazingly witty thing that would cause everything Jaron was saying to collapse like a house of cards, but either he was too polite to interrupt or the format wasn't his style.

At first I thought Jaron was talking nonsense, but after thinking it over for a while, I'm prepared to give him the benefit of the doubt. He said that whether a computer can be intelligent makes no difference and isn't worth talking about. That's obviously wrong if he's using a normal definition of intelligent, but if by intelligent he means "conscious", it makes a lot of sense and he's probably even right - there's not a lot of practical value in worrying about whether an intelligent computer would be conscious (as opposed to a zombie) at this point. He wouldn't be the first person to use those two words in weird ways.

I am also at least a little sympathetic to his "consciousness can't be reduced" argument. It made more sense once he said that consciousness wasn't a phenomenon. Still not perfect sense, but, trying to raise something stronger from its corpse, I would argue something sort of Kantian, like the following:

Goldbach's conjecture, in its weak form, says that every odd number greater than five is the sum of three primes. It hasn't been proven but there's a lot of inductive evidence for it. If I give you a difficult large number, like 20145, you may not be capable of figuring out the three primes, but you should still guess they exist. Even if you work quite hard to find them and can't, it's still more likely that it's a failure on your part than that the primes don't exist.
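As an aside, the claim about 20145 is easy to check mechanically. Here is a brute-force sketch (illustrative only; which particular triple the search returns first is arbitrary):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes up to and including n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def three_prime_sum(n):
    """Return some triple of primes summing to n, or None if none exists."""
    primes = primes_up_to(n)
    prime_set = set(primes)
    for a in primes:
        for b in primes:
            if a + b >= n:
                break
            if (n - a - b) in prime_set:
                return (a, b, n - a - b)
    return None

print(three_prime_sum(20145))  # some triple of primes summing to 20145
```

Finding the triple for a five-digit number takes a fraction of a second; the point of the next paragraph is that the same procedure applied to North Dakota doesn't fail, it simply doesn't parse.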

However, North Dakota is clearly not the sum of three primes. Even someone with no mathematical knowledge can figure this out. This statement is immune to all of the inductive evidence that Goldbach's conjecture is true, immune to the criticism that you simply aren't smart enough to find the primes, and doesn't require extensive knowledge of the history and geography of North Dakota to make. It's just a simple category error.

Likewise, we have good inductive evidence that all objects follow simple scientific/reductionist laws. A difficult-to-explain object, like ball lightning, probably still follows scientific/reductionist laws, even if we haven't figured out what they are yet. But consciousness is not an object; it's the subject, that by which objects are perceived. Trying to apply rules about objects to it is a category error, and his refusal to do so is immune to the normal scientific/reductionist criticisms you would level against someone who tried that on ball lightning or homeopathy or something.

I'm not sure if I agree with this argument, but I think it's coherent and doesn't violate any laws of rationality.

I agree with everyone who found his constant appeal to "I make algorithms, so you have to believe me!" and his weird nervous laughter irritating.

Comment by yvain2 on Which Parts Are "Me"? · 2008-10-23T11:26:00.000Z · score: 7 (7 votes) · LW · GW

This is a beautiful comment thread. Too rarely do I get to hear anything at all about people's inner lives, so too much of my theory of mind is generalizations from one example.

For example, I would never have guessed any of this about reflectivity. Before reading this post, I didn't think there was such a thing as people who hadn't "crossed the Rubicon", except young children. I guess I was completely wrong.

Either I feel reflective but there's a higher level of reflectivity I haven't reached and can't even imagine (which I consider unlikely but am including for purposes of fake humility), I'm misunderstanding what is meant by this post, or I've just always been reflective as far back as I can remember (6? 7?).

The only explanation I can give for that is that I've always had pretty bad obsessive-compulsive disorder which takes the form of completely irrational and inexplicable compulsions to do random things. It was really, really easy to identify those as "external" portions of my brain pestering me, so I could've just gotten in the habit of believing that about other things.

As for the original article, it would be easier to parse if I'd ever heard a good reduction of "I". Godel Escher Bach was brilliant, funny, and fascinating, but for me at least didn't dissolve this question.

Comment by yvain2 on Prices or Bindings? · 2008-10-21T20:45:42.000Z · score: 20 (18 votes) · LW · GW

I am glad Stanislav Petrov, contemplating his military oath to always obey his superiors and the appropriate guidelines, never read this post.

Comment by yvain2 on Ethical Inhibitions · 2008-10-19T23:59:40.000Z · score: 8 (8 votes) · LW · GW

"Historically speaking, it seems likely that, of those who set out to rob banks or murder opponents "in a good cause", those who managed to hurt themselves, mostly wouldn't make the history books. (Unless they got a second chance, like Hitler after the failed Beer Hall Putsch.) Of those cases we do read about in the history books, many people have done very well for themselves out of their plans to lie and rob and murder "for the greater good". But how many people cheated their way to actual huge altruistic benefits - cheated and actually realized the justifying greater good? Surely there must be at least one or two cases known to history - at least one king somewhere who took power by lies and assassination, and then ruled wisely and well - but I can't actually name a case off the top of my head. By and large, it seems to me a pretty fair generalization that people who achieve great good ends manage not to find excuses for all that much evil along the way."

History seems to me to be full of examples of people or groups successfully breaking moral rules for the greater good.

The American Revolution, for example. The Founding Fathers committed treason against the crown, started a war that killed thousands of people, and confiscated a lot of Tory property along the way. Once they were in power, they did arguably better than anyone else of their era at trying to create a just society. The Irish Revolution also started in terrorism and violence and ended in a peaceful democratic state (at least in the south); the war of Israeli independence involved a lot of terrorism on the Israeli side and ended with a democratic state that, regardless of what you think of it now, didn't show any particularly violent tendencies before acquiring Palestine in the 1967 war.

Among people who seized power violently, Augustus and Cyrus stand out as excellent in the ancient world (and I'm glad Caligula was assassinated and replaced with Claudius). Ho Chi Minh and Fidel Castro, while I disagree with their politics, were both better than their predecessors and better than many rulers who came to power by more conventional means in their parts of the world.

There are all sorts of biases that would make us less likely to believe people who "break the rules" can ever turn out well. One is the halo effect. Another is availability bias - it's much easier to remember people like Mao than it is to remember the people who were quiet and responsible once their revolution was over, and no one notices the genocides that didn't happen because of some coup or assassination. "Violence leads only to more violence" is a form of cached deep wisdom. And there's probably a false comparison effect: a post-coup government may be much better than the people they replaced while still not up to first-world standards.

And of course, "history is written by the victors". When the winners do something bad, it's never interpreted as bad after the fact. Firebombing a city to end a war more quickly, taxing a populace to give health care to the less fortunate, intervening in a foreign country's affairs to stop a genocide: they're all likely to be interpreted as evidence for "the ends don't justify the means" when they fail, but glossed over or treated as common sense interventions when they work. Consider the amount of furor raised over our supposedly good motives in going into Iraq and failing vs. the complete lack of discussion about going into Yugoslavia and succeeding.

Comment by yvain2 on The Magnitude of His Own Folly · 2008-09-30T20:16:02.000Z · score: 9 (9 votes) · LW · GW

"I need to beat my competitors" could be used as a bad excuse for taking unnecessary risks. But it is pretty important. Given that an AI you coded right now with your current incomplete knowledge of Friendliness theory is already more likely to be Friendly than that of some competitor who's never really considered the matter, you only have an incentive to keep researching Friendliness until the last possible moment when you're confident that you could still beat your competitors.

The question then becomes: what is the minimum necessary amount of Friendliness research at which point going full speed ahead has a better expected result than continuing your research? Since you've been researching for several years and sound like you don't have any plans to stop until you're absolutely satisfied, you must have a lot of contempt for all your competitors who are going full-speed ahead and could therefore be expected to beat you if any were your intellectual equals. I don't know your competitors and I wouldn't know enough AI to be able to judge them if I did, but I hope you're right.

Comment by yvain2 on 9/26 is Petrov Day · 2008-09-26T20:59:53.000Z · score: 4 (4 votes) · LW · GW

Given that full-scale nuclear war would either destroy the world or vastly reduce the number of living people, Petrov, Arkhipov, and all the other "heroic officer makes unlikely decision to avert nuclear war" stories Recovering Irrationalist describes above make a more convincing test case for the anthropic principle than an LHC breakdown or two.

Comment by yvain2 on How Many LHC Failures Is Too Many? · 2008-09-21T01:31:35.000Z · score: 1 (1 votes) · LW · GW

Just realized that several sentences in my previous post make no sense because they assume Everett branches were separate before they actually split, but think the general point still holds.

Comment by yvain2 on How Many LHC Failures Is Too Many? · 2008-09-21T00:49:53.000Z · score: 14 (10 votes) · LW · GW

Originally I was going to say yes to the last question, but after thinking over why a failure of the LHC now (before it would destroy Earth) doesn't let me conclude anything by the anthropic principle, I'm going to say no.

Imagine a world in which CERN promises to fire the Large Hadron Collider one week after a major terrorist attack. Consider ten representative Everett branches. All those branches will be terrorist-free for the next few years except number 10, which is destined to suffer a major terrorist attack on January 1, 2009.

On December 31, 2008, Yvains 1 through 10 are perfectly happy, because they live in a world without terrorist attacks.

On January 2, 2009, Yvains 1 through 9 are perfectly happy, because they still live in worlds without terrorist attacks. Yvain 10 is terrified and distraught, both because he just barely escaped a terrorist attack the day before, and because he's going to die in a few days when they fire the LHC.

On January 8, 2009, CERN fires the LHC, killing everyone in Everett branch 10.

Yvains 1 through 9 aren't any better off than they would've been otherwise. Their universe was never destined to have a terrorist attack, and it still hasn't had a terrorist attack. Nothing has changed.

Yvain 10 is worse off than he would have been otherwise. If not for the LHC, he would be recovering from a terrorist attack, which is bad but not apocalyptically so. Now he's dead. There's no sense in which his spirit has been averaged out over Yvains 1 through 9. He's just plain dead. That can hardly be considered an improvement.

Since it doesn't help anyone and it does kill a large number of people, I'd advise CERN against using LHC-powered anthropic tricks to "prevent" terrorism.
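The bookkeeping above can be made explicit with a toy model (purely illustrative: ten branches, with branch 10 the one destined for an attack):

```python
def outcomes(use_lhc_trick):
    """Ten toy Everett branches; branch 10 suffers the terrorist attack.
    The 'LHC trick' destroys any branch that has an attack; it never
    turns an attacked branch into an unattacked one."""
    branches = [{"id": i, "attack": (i == 10), "destroyed": False}
                for i in range(1, 11)]
    for b in branches:
        if use_lhc_trick and b["attack"]:
            b["destroyed"] = True  # CERN fires the LHC a week later
    return branches

without = outcomes(use_lhc_trick=False)
with_trick = outcomes(use_lhc_trick=True)

# Branches 1-9 are identical under either policy:
assert ([b for b in without if not b["attack"]]
        == [b for b in with_trick if not b["attack"]])

# The only difference is that branch 10 goes from "attacked" to "dead":
print(sum(b["destroyed"] for b in without))     # 0
print(sum(b["destroyed"] for b in with_trick))  # 1
```

No averaging over branches occurs anywhere in the model, which is the point: the policy only ever subtracts.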

Comment by yvain2 on Magical Categories · 2008-08-25T19:52:44.000Z · score: 9 (8 votes) · LW · GW

IMHO, the idea that wealth can't usefully be measured is one which is not sufficiently worthwhile to merit further discussion.

The "wealth" idea sounds vulnerable to hidden complexity of wishes. Measure it in dollars and you get hyperinflation. Measure it in resources, and the AI cuts down all the trees and converts them to lumber, then kills all the animals and converts them to oil, even if technology had advanced beyond the point of needing either. Find some clever way to specify the value of all resources, convert them to products, and allocate them to humans at the level humans want, and one of the products will be highly carcinogenic because the AI didn't know humans don't like that. The only way to get wealth in the way that's meaningful to humans without humans losing other things they want more than wealth is for the AI to know exactly what we want as well as or better than we do. And if it knows that, we can ignore wealth and just ask it to do what it knows we want.

"The counterargument is, in part, that some classifiers are better than others, even when all of them satisfy the training data completely. The most obvious criterion to use is the complexity of the classifier."

I don't think "better" is meaningful outside the context of a utility function. Complexity isn't a utility function and it's inadequate for this purpose. Which is better, tank vs. non-tank or cloudy vs. sunny? I can't immediately see which is more complex than the other. And even if I could, I'd want my criteria to change depending on whether I'm in an anti-tank infantry or a solar power installation company, and just judging criteria by complexity doesn't let me make that change, unless I'm misunderstanding what you mean by complexity here.

Meanwhile, reading the link to Bill Hibbard on the SL4 list:

"Your scenario of a system that is adequate for intelligence in its ability to rule the world, but absurdly inadequate for intelligence in its inability to distinguish a smiley face from a human, is inconsistent."

I think the best possible summary of Overcoming Bias thus far would be "Abandon all thought processes even remotely related to the ones that generated this statement."

Comment by yvain2 on No License To Be Human · 2008-08-21T12:22:45.000Z · score: 21 (16 votes) · LW · GW

I was one of the people who suggested the term h-right before. I'm not great with mathematical logic, and I followed the proof only with difficulty, but I think I understand it and I think my objections remain. I think Eliezer has a brilliant theory of morality and that it accords with all my personal beliefs, but I still don't understand where it stops being relativist.

I agree that some human assumptions like induction and Occam's Razor have to be used partly as their own justification. But an ultimate justification of a belief has to include a reason for choosing it out of a belief-space.

For example, after recursive justification hits bottom, I keep Occam and induction because I suspect they reflect the way the universe really works. I can't prove it without using them. But we already know there are some things that are true but can't be proven. I think one of those things is that reality really does work on inductive and Occamian principles. So I can choose these two beliefs out of belief-space by saying they correspond to reality.

Some other starting assumptions ground out differently. Clarence Darrow once said something like "I hate spinach, and I'm glad I hate it, because if I liked it I'd eat it, and I don't want to eat it because I hate it." He was making a mistake somewhere! If his belief is "spinach is bad", it probably grounds out in some evolutionary reason like insufficient energy for the EEA. But that doesn't justify his current statement "spinach is bad". His real reason for saying "spinach is bad" is that he dislikes it. You can only choose "spinach is bad" out of belief-space based on Clarence Darrow's opinions.

One possible definition of "absolute" vs. "relative": a belief is absolutely true if people pick it out of belief-space based on correspondence to reality; if people pick it out of belief-space based on other considerations, it is true relative to those considerations.

"2+2=4" is absolutely true, because it's true in the system PA, and I pick PA out of belief-space because it does better than, say, self-PA would in corresponding to arithmetic in the real world. "Carrots taste bad" is relatively true, because it's true in the system "Yvain's Opinions" and I pick "Yvain's Opinions" out of belief-space only because I'm Yvain.

When Eliezer says X is "right", he means X satisfies a certain complex calculation. That complex calculation is chosen out of all the possible complex calculations in complex-calculation space because it's the one that matches what humans believe.

This does, technically, create a theory of morality that doesn't explicitly reference humans. Just like intelligent design theory doesn't explicitly reference God or Christianity. But most people believe that intelligent design should be judged as a Christian theory, because being a Christian is the only reason anyone would ever select it out of belief-space. Likewise, Eliezer's system of morality should be judged as a human morality, because being a human is the only reason anyone would ever select it out of belief-space.

That's why I think Eliezer's system is relative. I admit it's not directly relative, in that Eliezer isn't directly picking "Don't murder" out of belief-space every time he wonders about murder, based only on human opinion. But if I understand correctly, he's referring the question to another layer, and then basing that layer on human opinion.

An umpire whose procedure for making tough calls is "Do whatever benefits the Yankees" isn't very fair. A second umpire whose procedure is "Always follow the rules in Rulebook X" and writes in Rulebook X "Do whatever benefits the Yankees" may be following a rulebook, but he is still just as far from objectivity as the last guy was.

I think the second umpire's call is "correct" relative to Rulebook X, but I don't think the call is absolutely correct.

Comment by yvain2 on Probability is Subjectively Objective · 2008-08-21T12:20:40.000Z · score: 0 (0 votes) · LW · GW

...yeah, this was supposed to go in the new article, and I was just checking something in this one and accidentally posted it here. Please ignore. *embarrassed*

Comment by yvain2 on Probability is Subjectively Objective · 2008-08-21T12:13:24.000Z · score: 4 (3 votes) · LW · GW

I was one of the people who suggested the term h-right before. I'm not great with mathematical logic, and I followed the proof only with difficulty, but I think I understand it and I think my objections remain. I think Eliezer has a brilliant theory of morality and that it accords with all my personal beliefs, but I still don't understand where it stops being relativist.

I agree that some human assumptions like induction and Occam's Razor have to be used partly as their own justification. But an ultimate justification of a belief has to include a reason for choosing it out of a belief-space.

For example, after recursive justification hits bottom, I keep Occam and induction because I suspect they reflect the way the universe really works. I can't prove it without using them. But we already know there are some things that are true but can't be proven. I think one of those things is that reality really does work on inductive and Occamian principles. So I can choose these two beliefs out of belief-space by saying they correspond to reality.

Some other starting assumptions ground out differently. Clarence Darrow once said something like "I hate spinach, and I'm glad I hate it, because if I liked it I'd eat it, and I don't want to eat it because I hate it." He was making a mistake somewhere! If his belief is "spinach is bad", it probably grounds out in some evolutionary reason like insufficient energy for the EEA. But that doesn't justify his current statement "spinach is bad". His real reason for saying "spinach is bad" is that he dislikes it. You can only choose "spinach is bad" out of belief-space based on Clarence Darrow's opinions.

One possible definition of "absolute" vs. "relative": a belief is absolutely true if people pick it out of belief-space based on correspondence to reality; if people pick it out of belief-space based on other considerations, it is true relative to those considerations.

"2+2=4" is absolutely true, because it's true in the system PA, and I pick PA out of belief-space because it does better than, say, self-PA would in corresponding to arithmetic in the real world. "Carrots taste bad" is relatively true, because it's true in the system "Yvain's Opinions" and I pick "Yvain's Opinions" out of belief-space only because I'm Yvain.

When Eliezer says X is "right", he means X satisfies a certain complex calculation. That complex calculation is chosen out of all the possible complex calculations in complex-calculation space because it's the one that matches what humans believe.

This does, technically, create a theory of morality that doesn't explicitly reference humans. Just like intelligent design theory doesn't explicitly reference God or Christianity. But most people believe that intelligent design should be judged as a Christian theory, because being a Christian is the only reason anyone would ever select it out of belief-space. Likewise, Eliezer's system of morality should be judged as a human morality, because being a human is the only reason anyone would ever select it out of belief-space.

That's why I think Eliezer's system is relative. I admit it's not directly relative, in that Eliezer isn't directly picking "Don't murder" out of belief-space every time he wonders about murder, based only on human opinion. But if I understand correctly, he's referring the question to another layer, and then basing that layer on human opinion.

An umpire whose procedure for making tough calls is "Do whatever benefits the Yankees" isn't very fair. A second umpire whose procedure is "Always follow the rules in Rulebook X" and writes in Rulebook X "Do whatever benefits the Yankees" may be following a rulebook, but he is still just as far from objectivity as the last guy was.

I think the second umpire's call is "correct" relative to Rulebook X, but I don't think the call is absolutely correct.

Comment by yvain2 on The Bedrock of Morality: Arbitrary? · 2008-08-16T20:55:00.000Z · score: 12 (12 votes) · LW · GW

To say that Eliezer is a moral relativist because he realizes that a primality sorter might care about primality rather than morality, is equivalent to calling him a primality relativist because he realizes that a human might care about morality rather than primality.

But by Eliezer's standards, it's impossible for anyone to be a relativist about anything.

Consider what Einstein means when he says time and space are relative. He doesn't mean you can just say whatever you want about them, he means that they're relative to a certain reference frame. An observer on Earth may think it's five years since a spaceship launched, and an observer on the spaceship may think it's only been one, and each of them is correct relative to their reference frame.

We could define "time" to mean "time as it passes on Earth, where the majority of humans live." Then an observer on Earth is objectively correct to believe that five years have passed since the launch. An observer on the spaceship who said "One year has passed" would be wrong; he'd really mean "One s-year has passed." Then we could say time and space weren't really relative at all, and people on the ground and on the spaceship were just comparing time to s-time. The real answer to "How much time has passed" would be "Five years."

Does that mean time isn't really relative? Or does it just mean there's a way to describe it that doesn't use the word "relative"?

Or to give a more clearly wrong-headed example: English is objectively the easiest language in the world, if we accept that because the word "easy" is an English word it should refer to ease as English-speakers see it. When Kyousuke says Japanese is easier for him, he really means it's mo wakariyasui translated as "j-easy", which is completely different. By this way of talking, the standard belief that different languages are easier, relative to which one you grew up speaking, is false. English is just plain the easiest language.

Again, it's just avoiding the word "relative" by talking in a confusing and unnatural way. And I don't see the difference between talking about "easy" vs. "j-easy" and talking about "right" vs. "p-right".

Comment by yvain2 on The Bedrock of Morality: Arbitrary? · 2008-08-16T08:54:00.000Z · score: 4 (4 votes) · LW · GW

Why "ought" vs. "p-ought" instead of "h-ought" vs. "p-ought"?

Sure, it might just be terminology. But change

"So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is right."

to

"So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is h-right."

and the difference between "because it is the human one" and "because it is h-right" sounds a lot less convincing.

Comment by yvain2 on Sorting Pebbles Into Correct Heaps · 2008-08-11T00:39:58.000Z · score: 12 (12 votes) · LW · GW

"But that's clearly not true, except in the sense that it's "arbitrary" to prefer life over death. It's a pretty safe generalization that actions which are considered to be immoral are those which are considered to be likely to cause harm to others."

From a reproductive fitness point of view, or a what-humans-prefer point of view, there's nothing at all arbitrary about morality. Yes, it does mostly contain things that avoid harm. But from an objective point of view, "avoid harm" or "increase reproductive fitness" is as arbitrary as "make paperclips" or "pile pebbles in prime numbered heaps".

Not that there's anything wrong with that. I still would prefer living in a utopia of freedom and prosperity to being converted to paperclips, as does probably everyone else in the human race. It's just not written into the fabric of the universe that I SHOULD prefer that, or provable by an AI that doesn't already know that.

Comment by yvain2 on Sorting Pebbles Into Correct Heaps · 2008-08-10T20:39:19.000Z · score: 33 (30 votes) · LW · GW

Things I get from this:

  • Things decided by our moral system are not relative, arbitrary or meaningless, any more than it's relative, arbitrary or meaningless to say "X is a prime number"

  • Which moral system the human race uses is relative, arbitrary, and meaningless, just as there's no reason for the pebble sorters to like prime numbers instead of composite numbers, perfect numbers, or even numbers.

  • A smart AI could follow our moral system as well or better than we ourselves can, just as the Pebble-Sorters' AI can hopefully discover that they're using prime numbers and thus settle the 1957 question once and for all.

  • But it would have to "want" to first. If the Pebble-Sorters just build an AI and say "Do whatever seems right to you", it won't start making prime-numbered heaps, unless an AI made by us humans and set to "Do whatever seems right to you" would also start making prime-numbered pebble-heaps. More likely, a Pebble-Sorter AI set do "Do whatever seems right to you" would sit there inertly, or fail spectacularly.

  • So the Pebble-Sorters would be best off using something like CEV.
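For what it's worth, once a Pebble-Sorter AI knows the rule is primality, the 1957 question really is settled by a few lines of trial division (a sketch):

```python
def smallest_factor(n):
    """Trial division: smallest prime factor of n (returns n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

heap = 1957
f = smallest_factor(heap)
if f == heap:
    print(f"{heap} is prime: a correct heap")
else:
    print(f"{heap} = {f} x {heap // f}: not a correct heap")  # 1957 = 19 x 103
```

The hard part, as the bullet points say, was never the arithmetic; it was getting the AI to care about primality in the first place.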

Comment by yvain2 on Contaminated by Optimism · 2008-08-06T02:52:57.000Z · score: 9 (6 votes) · LW · GW

This is something that's bothered me a lot about the free market. Many people, often including myself, believe that a bunch of companies which are profit-maximizers (plus some simple laws against use of force) will cause "nice" results. These people believe the effect is so strong that no possible policy directly aimed at niceness will succeed as well as the profit-maximization strategy does. There seems to be a lot of evidence for this. But it also seems too easy, as if you could take ten paper-clip maximizers competing to convert things into differently colored paperclips, and end up with utopia. It must have something to do with capitalism including a term for the human utility function in the form of demand, but it still seems miraculous.

Comment by yvain2 on No Logical Positivist I · 2008-08-04T11:05:19.000Z · score: 3 (3 votes) · LW · GW

No, I still think there's a difference, although the omnipotence suggestion might have been an overly hasty way of explaining it. One side has moving parts, the other is just a big lump of magic.

When a statement is meaningful, we can think of an experiment that confirms it such that the experiment is also built out of meaningful statements. For example, my experiment to confirm the cake-in-the-sun is for a person on August 1 to go to the center of the sun, and see if it tastes delicious. So, IF Y is in the center of the sun, AND IF Y is there on August 1, AND IF Y perceives a sensation of deliciousness, THEN the cake-in-the-sun theory is true.

Most reasonable people will agree that "Today is August 1st" is meaningful, "This is the center of the sun" is meaningful, and "That's delicious!" is meaningful, so from those values we can calculate a meaningful value for "There's a cake in the center of the sun August 1st". If someone didn't believe that "Today is August 1st" is meaningful, we could verify it by saying "IF the calendar says 'August 1', THEN it is August 1st" in which we specify a way of testing that. If someone doesn't even agree that "The calendar says 'August 1'" is meaningful, we reduce it to "IF your sensory experience includes an image of a calendar with the page set to August 1st, THEN the calendar says 'August 1'." In this way, the cake-in-the-sun theory gets reduced to direct sensory experience.

To determine the truth value of the uncle statement, I need to see if the Absolute has an uncle. Mmmkay. So. I'll just go and....hmmmm.

If you admit that direct sensory experience is meaningful, and that statements composed of operations on meaningful statements are also meaningful, then the cake-in-the-sun theory is meaningful and the uncle theory isn't.

(I do believe that questions about the existence of an afterlife are meaningful. If I wake up an hour after dying and find myself in a lake of fire surrounded by red-skinned guys with pointy pitchforks, that's going to concentrate my probability mass on the afterlife question pretty densely to one side.)

Comment by yvain2 on No Logical Positivist I · 2008-08-04T07:12:58.000Z · score: 3 (3 votes) · LW · GW

There are different shades of positivism, and I think at least some positivists are willing to say any statement for which there is a decision procedure even possible in principle for an omnipotent being is meaningful.

Under this interpretation, as Doug S. says, the omnipotent being can travel back in time, withstand the heat of the sun, and check the status of the cake. The omnipotent being could also teleport to the spaceship past the cosmological horizon and see if it's still there or not.

However, an omnipotent being still wouldn't have a decision procedure with which to evaluate whether Shakespeare's works show signs of post-colonial alienation (although closely related questions like whether Shakespeare meant for his plays to reflect alienation could be solved by going back in time and asking him).

This sort of positivism, I think, gets the word "meaningful" exactly right.

Comment by yvain2 on The Gift We Give To Tomorrow · 2008-07-17T13:40:16.000Z · score: 6 (6 votes) · LW · GW

Wow. And this is the sort of thing you write when you're busy...

I've enjoyed these past few posts, but the parts I've found most interesting are the attempts at evolutionary psychology-based explanations for things like teenage rebellion and now flowers. Are these your own ideas, or have you taken them from some other source where they're backed up by further research? If the latter, can you tell me what the source is? I would love to read more of them (I've already read "Moral Animal", but most of these are still new to me).

Comment by yvain2 on Whither Moral Progress? · 2008-07-16T11:45:40.000Z · score: 0 (0 votes) · LW · GW

If one defines morality in a utilitarian way, in which a moral person is one who tries for the greatest possible utility of everyone in the world, that sidesteps McCarthy's complaint. In that case, the apex of moral progress is also, by definition, the world in which people are happiest on average.

It's easy to view moral progress up to this point as progress towards that ideal. Ending slavery increases ex-slaves' utility, hopefully more than it hurts ex-slaveowners. Ending cat-burning increases cats' utility, hopefully more than it hurts that of cat-burning fans.

I guess you could argue this has a hidden bias - that 19th-century people claimed that keeping slavery was helping slaveowners more than it was hurting slaves, and that we really are in a random walk that we justify by fudging terms in the utility function in order to look good. But you could equally well argue that real moral progress means computing the utilities more accurately.

Since utility is by definition a Good Thing, it's less vulnerable to the Open Question argument than some other things, though I wouldn't know how to put that formally.

Comment by yvain2 on Lawrence Watt-Evans's Fiction · 2008-07-15T11:46:07.000Z · score: 1 (2 votes) · LW · GW

I second Vladimir's "Prince of Nothing" recommendation. It's a great read just as pure fantasy fiction, but it also helped me to understand some of the concepts on this blog. Reading the "chimpanzee - village idiot - Einstein" line of posts, I found myself interpreting them by sticking Anasurimbor Kelhus at the right end of the spectrum and going from there.

Comment by yvain2 on Is Morality Preference? · 2008-07-05T18:41:28.000Z · score: 4 (4 votes) · LW · GW

Subhan's explanation is coherent and believable, but he has to bite a pretty big bullet. I happen to like helping people, Hitler happens to like hurting people, and we can both condemn each other if we want but both of our likes are equally valid.

I think most people who think about morality have long realized Subhan's position is a very plausible one, but don't want to bite that bullet. Subhan's arguments confirm that the position is plausible, but they don't make the consequences any more tolerable. I realize that appeal to consequences is a fallacy and that reality doesn't necessarily have to be tolerable, but I don't feel the question has come anywhere near being "dissolved".

Comment by yvain2 on What Would You Do Without Morality? · 2008-06-30T18:58:00.000Z · score: 0 (0 votes) · LW · GW

It depends.

My morality is my urge to care for other people, plus a systematization of exactly how to do that. You could easily disprove the systematization by showing, for example, that giving charity to the poor increases their dependence on handouts and only leaves them worse off. I'd happily accept that correction.

I don't think you could disprove the urge to care for other people, because urges don't have truth-values.

The best you could do would be, as someone mentioned above, to prove that everyone else was an NPC without qualia. Prove that, and I'd probably just behave selfishly, except when it was too psychologically troubling to do so.

Comment by yvain2 on What Would You Do Without Morality? · 2008-06-30T08:34:00.000Z · score: 0 (0 votes) · LW · GW

It depends on how you disproved my morality.

As far as I can tell, my morality consists of an urge to care about others channeled through a systematization of how to help people most effectively. Someone could easily disprove specifics of the systematization by showing, for example, that giving charity to the poor only encourages their dependence and increases poverty. If you disproved it that way, I would accept your correction and channel my urge to care differently.

But I don't think you could disprove the urge to care itself, since it's an urge and doesn't have a truth-value.

The only thing you could do would be what someone else here suggested - prove that all other humans are NPCs without real qualia. In that case, I'd probably act selfishly when I felt like it, unless it caused too much psychological trouble to be worth it.

Comment by yvain2 on Grasping Slippery Things · 2008-06-17T14:16:15.000Z · score: 4 (2 votes) · LW · GW

I took a different route on the "homework".

My thought was that "can" is a way of stating your strength in a given field, relative to some standard. "I can speak Chinese like a native" is saying "My strength in Chinese is equal to the standard of a native level Chinese speaker." "Congress can declare war" means "Congress' strength in the system of American government is equal to the strength needed to declare war."

Algorithmically, it would involve calculating your own strength in a field, and then calculating the minimum standard needed to do something. So an AI might examine all the Chinese dictionaries and grammars that had been programmed into it, estimate its Chinese skills, estimate the level of Chinese skills of a native speaker, and then compare them to see whether it could say "I can speak Chinese like a native."
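That comparison could be sketched roughly like this. Everything here - the scoring scheme, the numbers, and the function names - is a hypothetical illustration of the idea, not a serious proposal:

```python
def estimate_skill(evidence):
    """Estimate one's own strength in a field from scored evidence
    (e.g. results on a battery of tests, each scored 0.0 to 1.0)."""
    return sum(evidence) / len(evidence) if evidence else 0.0

def can(own_evidence, required_standard):
    """'I can X' == my estimated strength meets the standard X requires."""
    return estimate_skill(own_evidence) >= required_standard

# Hypothetical: an AI scores itself on Chinese vocabulary and grammar
# tests, then compares against an assumed native-speaker standard.
native_standard = 0.9
self_scores = [0.85, 0.92, 0.88]
print(can(self_scores, native_standard))  # False: mean 0.883 < 0.9
```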

This is different enough from Eliezer's solution and from what everyone else is talking about that I'd appreciate it if someone could critique it and tell me whether I made something into a primitive inappropriately, or, if I've missed a point, exactly which one it was and where I missed it.

Comment by yvain2 on Fake Reductionism · 2008-03-18T12:40:34.000Z · score: 14 (15 votes) · LW · GW

"But I'd agree that if a scientific understanding destroyed Keats's sense of wonder, then that was a bug in Keats"

If Keats could turn his wonder on and off like a light switch, then clearly he was being silly in withholding his wonder from science. Since science is clearly true, in order to maximize his wonder Keats should have pressed the "off" button for wonder based on ideas like rainbows being Bifrost the magic bridge to Heaven, and the "on" button for wonder based on science.

But Keats, and the rest of us, can't turn wonder on and off like that. Certain things like bridges to Heaven, or gnomes, naturally induce wonder in most people, without any special choice to take wonder in them. Certain other things like optics don't. It's not just a coincidence that there are more Lord of the Rings fanboys than Snell's Law fanboys out there. I don't know enough to say whether that's cultural or genetic, but I'm pretty sure it's not under my immediate conscious control.

Maybe with proper study of optics, some people will find it just as wonderful as they found the magic bridge Bifrost. But "With enough study, optics will become at least as wonderful as divine bridges are, and this is true for every single person on Earth regardless of variations in their personal sense of wonder" is a statement that needs proving, not a premise.

And if that statement's false, and if there are some people who really would prefer the possible world containing Bifrost to the possible world containing optics, then those people are perfectly justified in feeling sorrow that they live in the world with optics and no Bifrost. To be a good rationalist, such a person certainly has to willingly accept the scientific evidence that there is no Bifrost, but doesn't gain any extra rationality points by prancing about singing "Oh, joy, the refraction of light through water droplets in accordance with mathematical formulae is ever so much more wonderful than a magical bridge to Heaven could ever be."

Comment by yvain2 on Words as Mental Paintbrush Handles · 2008-03-02T04:11:30.000Z · score: 11 (12 votes) · LW · GW

I had a professor, David Berman, who believed some people could image well and other people couldn't. He cited studies by Galton and James in which some people completely denied they had imaginative ability, and other people were near-perfect "eidetic" imagers. Then he suggested psychological theories denying imagination were mostly developed by those who could not themselves imagine. The only online work of his I can find on the subject is http://books.google.co.jp/books?id=fZXoM80K9qgC&pg=PA13&lpg=PA13&ots=Zs03EkNZ-B&sig=2eVzzMmK7WBQnblNx2KMVpUWBnk&hl=en#PPA4,M1 pages 4-14.

My favorite thought experiment of his: Imagine a tiger. Imagine it clearly and distinctly. Got it? Now, how many black stripes does it have? (Some people thought the question was ridiculous. One person responded "Seven. Now what?")

He never formally tested his theory because he was in philosophy instead of the sciences, which is a shame. Does anyone know of any modern psychology experiment that tests variations in imaging ability?