Rationality Quotes September 2012

post by Jayson_Virissimo · 2012-09-03T05:18:17.003Z · LW · GW · Legacy · 1107 comments

Here's the new thread for posting quotes, with the usual rules:

comment by Ezekiel · 2012-09-01T11:27:29.987Z · LW(p) · GW(p)

"Wait, Professor... If Sisyphus had to roll the boulder up the hill over and over forever, why didn't he just program robots to roll it for him, and then spend all his time wallowing in hedonism?"
"It's a metaphor for the human struggle."
"I don't see how that changes my point."

Replies from: Eugine_Nier, taelor, MrCheeze
comment by Eugine_Nier · 2012-09-03T02:53:57.692Z · LW(p) · GW(p)

Well, his point only makes sense when applied to the metaphor, since a better answer to the question

"Wait, Professor... If Sisyphus had to roll the boulder up the hill over and over forever, why didn't he just program robots to roll it for him, and then spend all his time wallowing in hedonism?"

is:

"where would Sisyphus get a robot in the middle of Hades?"

Edit: come to think of it, this also works with the metaphor for human struggle.

Replies from: Swimmy, Alejandro1, DanielLC
comment by Swimmy · 2012-09-04T00:44:33.250Z · LW(p) · GW(p)

I thought the correct answer would be, "No time for programming, too busy pushing a boulder."

Though, since the whole thing was a punishment, I have no idea what the punishment for not doing his punishment would be. Can't find it specified anywhere.

Replies from: RobinZ, CronoDAS, MixedNuts
comment by RobinZ · 2012-09-04T01:01:39.747Z · LW(p) · GW(p)

I don't think he's punished for disobeying, I think he's compelled to act. He can think about doing something else, he can want to do something else, he can decide to do something else ... but what he does is push the boulder.

comment by CronoDAS · 2012-09-06T10:51:59.929Z · LW(p) · GW(p)

The version I like the best is that Sisyphus keeps pushing the boulder voluntarily, because he's too proud to admit that, despite all his cleverness, there's something he can't do. (Specifically, get the boulder to stay at the top of the mountain).

Replies from: Xachariah
comment by Xachariah · 2012-09-06T23:43:06.124Z · LW(p) · GW(p)

My favorite version is similar. Each day he tries to push the boulder a little higher, and as the boulder starts to slide back, he mentally notes his improvement before racing the boulder down to the bottom with a smile on his face.

Because he gets a little stronger and a little more skilled every day, and he knows that one day he'll succeed.

Replies from: gwern
comment by gwern · 2012-09-07T03:06:49.409Z · LW(p) · GW(p)

In the M. Night version: his improvements are an asymptote - and Sisyphus didn't pay enough attention in calculus class to realize that the limit is just below the peak.

Replies from: DanielLC
comment by DanielLC · 2012-09-11T02:39:09.114Z · LW(p) · GW(p)

Or maybe the limit is the peak. He still won't reach it.

comment by MixedNuts · 2012-09-04T19:31:35.715Z · LW(p) · GW(p)

In some versions he's harassed by harpies until he gets back to boulder-pushing. But RobinZ's version is better.

comment by Alejandro1 · 2012-09-03T03:01:13.926Z · LW(p) · GW(p)

Borrowing one of Hephaestus', perhaps?

Replies from: Ezekiel
comment by Ezekiel · 2012-09-03T05:54:26.056Z · LW(p) · GW(p)

Now someone just has to write a book entitled "The Rationality of Sisyphus", give it a really pretentious-sounding philosophical blurb, and then fill it with Grand Theft Robot.

comment by DanielLC · 2012-09-06T07:53:57.641Z · LW(p) · GW(p)

He can build it. It would be pretty hard to do while pushing a boulder up a hill, but he has all the time in the world!

Replies from: CCC
comment by CCC · 2012-09-06T07:59:43.852Z · LW(p) · GW(p)

Does he have any suitable raw materials?

Replies from: DanielLC
comment by DanielLC · 2012-09-06T19:02:10.695Z · LW(p) · GW(p)

It's a hill, not a mountain. It presumably has plants, which could be burned for fuel. It's possible that there's no metal in the hill, but he could make the robot out of stone. Once he gets the prototype running, he can search for better materials as it slowly pushes the rock up the hill. He just has to get back before the fuel runs out.

Replies from: MugaSofer
comment by MugaSofer · 2012-10-01T11:44:02.502Z · LW(p) · GW(p)

he could make the robot out of stone.

Of course he could.

comment by taelor · 2012-09-03T04:42:35.145Z · LW(p) · GW(p)

Answer: Because the Greek gods are vindictive as fuck, and will fuck you over twice as hard when they find out that you wriggled out of it the first time.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-09-03T11:04:20.458Z · LW(p) · GW(p)

Who was the guy who tried to bargain the gods into giving him immortality, only to get screwed because he hadn't thought to ask for youth and health as well? He ended up as a shriveled, crab-like thing in a jar.

My high school English teacher thought this fable showed that you should be careful what you wish for. I thought it showed that trying to compel those with great power through contract was a great way to get yourself fucked good and hard. Don't think you can fuck with people a lot more powerful than you are and get away with it.

EDIT: The myth was of Tithonus. The goddess Eos was keeping him as a lover, and tried to bargain with Zeus for his immortality, without asking for eternal youth too. Oops.

Replies from: Ezekiel, army1987
comment by Ezekiel · 2012-09-03T15:14:29.483Z · LW(p) · GW(p)

Don't think you can fuck with people a lot more powerful than you are and get away with it.

I'm no expert, but that seems to be the moral of a lot of Greek myths.

comment by A1987dM (army1987) · 2012-09-03T22:19:19.022Z · LW(p) · GW(p)

My high school English teacher thought this fable showed that you should be careful what you wish for.

King Midas, too.

comment by MrCheeze · 2013-01-26T03:10:32.520Z · LW(p) · GW(p)

I'd say this captures the spirit of Less Wrong perfectly.

comment by Scott Alexander (Yvain) · 2012-09-01T14:20:44.838Z · LW(p) · GW(p)

Do unto others 20% better than you expect them to do unto you, to correct for subjective error.

-- Linus Pauling

Replies from: gwern, Caspian, DanielLC, Will_Newsome
comment by gwern · 2012-09-01T19:14:46.217Z · LW(p) · GW(p)

Citation for this was hard; the closest I got was Etzioni's 1962 The Hard Way to Peace, pg 110. There's also a version in the 1998 Linus Pauling on peace: a scientist speaks out on humanism and world survival : writings and talks by Linus Pauling; this version goes

I have made a modern formulation of the Golden Rule: "Do unto others 20 percent better than you would be done by - the 20 percent is to correct for subjective error."

comment by Caspian · 2012-09-03T07:15:29.960Z · LW(p) · GW(p)

Did you take "expect" to mean as in prediction, or as in what you would have them do, like the Jesus version?

comment by DanielLC · 2012-09-02T19:13:46.592Z · LW(p) · GW(p)

How about doing unto others what maximizes total happiness, regardless of what they'd do unto you?

Replies from: prase, RomanDavis, Nisan, CronoDAS, Kindly, wedrifid
comment by prase · 2012-09-02T21:46:29.960Z · LW(p) · GW(p)

The former is computationally far more feasible.

comment by RomanDavis · 2012-09-02T19:45:27.337Z · LW(p) · GW(p)

By acting in a way that discourages them from hurting you, and encouraging them to help you, you are playing your part in maximizing total happiness.

Replies from: DanielLC, CCC
comment by DanielLC · 2012-09-02T22:50:18.765Z · LW(p) · GW(p)

Yeah, but it's not necessarily the ideal way to act. Perhaps you should act generally better than that, or perhaps you should try to amplify it more. Do what you can to find out the optimal way to act. At least pay attention if you find new information. Don't just make a guess and assume you're correct.

Replies from: RomanDavis
comment by RomanDavis · 2012-09-03T05:21:38.110Z · LW(p) · GW(p)

You don't think you should discourage others from hurting you? I think that seems sort of obvious. Now, if you could somehow give a person a strong incentive to help you/not hurt you, while simultaneously granting them a shitload of happiness, that seems ideal. This doesn't really exclude that, it's just on the positive side of doing/being done unto.

Replies from: DanielLC
comment by DanielLC · 2012-09-03T05:56:32.045Z · LW(p) · GW(p)

You should probably discourage others from hurting you. It's just not clear how much.

Replies from: RomanDavis
comment by RomanDavis · 2012-09-03T06:12:04.602Z · LW(p) · GW(p)

As much as possible for the least amount of harm possible and the least amount of wasted time and resources, obviously. Which varies on a case by case basis.

I mean if it was practical, you'd give your friends 2 billion units of happiness, and then after turning the cheek to your enemies, grant them 1.9 billion units of happiness, but living on planet earth, giving you 80% of the crap you gave me seems about right.

Replies from: CCC
comment by CCC · 2012-09-04T07:57:32.365Z · LW(p) · GW(p)

...living on planet earth, giving you 80% of the crap you gave me seems about right.

Consider the consequences if everyone follows your rule. Assume someone gives you one unit of crap, possibly accidentally. You respond with 0.8 units. (It's hard to measure this precisely, but for the sake of argument let's assume that both of you manage to get it exactly right). He, in turn, responds with a further 0.64 units of crap. You respond to this with 0.512 units.

This is, of course, an infinite geometric series. The end result (over an infinite time period) is that you receive 2 and 7/9 units of crap, while the other person receives 2 and 2/9 units of crap. He receives exactly 80% of the amount that you received, but you received over twice as much as you started out receiving.

If you return x% of the crap you get (for 0<x<100), and everyone else follows the same rule, then the total crap you receive for every starting unit of crap is:

1 / (1 - (x/100)^2)

This is clearly minimized at x=0.
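
A quick numerical check of the series (a minimal Python sketch; the function name and the 200-round cutoff are illustrative assumptions):

```python
def total_crap_received(fraction, rounds=200):
    """Total crap you receive, starting from 1 incoming unit, when both
    sides return `fraction` of whatever they were just given."""
    incoming, total = 1.0, 0.0
    for _ in range(rounds):
        total += incoming                            # you receive this much
        incoming = fraction * (fraction * incoming)  # your reply, then theirs
    return total

print(total_crap_received(0.8))  # 2.777... = 2 and 7/9, matching the figures above
print(1 / (1 - 0.8 ** 2))        # the closed form 1 / (1 - (x/100)^2)
print(total_crap_received(0.0))  # 1.0 -- the minimum, at x = 0
```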

Replies from: RobinZ, RomanDavis
comment by RobinZ · 2012-09-04T17:13:38.822Z · LW(p) · GW(p)

Alternatively: he could notice that he gave you 1 unit of crap and assume the 0.8 units of crap you gave him is an equal penalty.

If someone yells at you, you're likely to respond - but if someone yells at you because you just pushed them, you're less likely to respond.

comment by RomanDavis · 2012-09-04T08:01:29.098Z · LW(p) · GW(p)

Or he could know I was going to give him the 0.512 units, from prior experience, and not give 0.64, which is the whole point.

Replies from: CCC
comment by CCC · 2012-09-04T08:06:29.581Z · LW(p) · GW(p)

That assumes that he is following a different rule from the rule that you are following. Does knowing that he will give you the 0.64 units prevent you from giving him the 0.8 units?

Replies from: RomanDavis
comment by RomanDavis · 2012-09-04T08:26:43.774Z · LW(p) · GW(p)

Yes. Depending on the circumstance, I might give him much less or much more and/or choose a different course of action entirely.

comment by CCC · 2012-09-03T08:43:11.290Z · LW(p) · GW(p)

Not necessarily. If I horribly torture Jim because Jim stepped on my toes, then I am not maximizing total happiness; the unhappiness given to Jim by the torture outweighs the unhappiness in me that is prevented by having no-one step on my toes.

Replies from: RomanDavis
comment by RomanDavis · 2012-09-03T09:21:10.988Z · LW(p) · GW(p)

That's a lot of effort and pain to prevent someone stepping on your toes.

Also, I'm not sure that'd be a terribly effective way to prevent harm to yourself. I mean, to the extent possible, once everyone knows you tortured Jim, people will be scared shitless to step on your toes, but Jim and Jim's family are very likely to murder you, or at least sue you for all your money and put you in jail for a long time.

Replies from: CCC
comment by CCC · 2012-09-04T08:04:58.109Z · LW(p) · GW(p)

You are correct; it is not terribly effective. However, any disproportionate response to a minor, or even an imagined, slight will reduce total happiness while discouraging others from hurting me.

Replies from: RomanDavis
comment by RomanDavis · 2012-09-04T08:29:22.006Z · LW(p) · GW(p)

No. I just told you. Sometimes a disproportionate response encourages other people to hurt you. That's actually part of the rule.

comment by Nisan · 2012-09-13T12:34:17.570Z · LW(p) · GW(p)

Doing unto others that which causes maximum total happiness leaves you vulnerable to Newcomb problems. You want to do unto others that which logically entails maximum total happiness. Under certain conditions, this is the same as Pauling's recommendation.

Replies from: DanielLC
comment by DanielLC · 2012-09-13T17:53:22.109Z · LW(p) · GW(p)

I never mentioned causation. If you find a way to maximize it acausally, do that.

comment by CronoDAS · 2012-09-02T23:56:56.138Z · LW(p) · GW(p)

It has a tendency to go horribly wrong.

Replies from: DanielLC
comment by DanielLC · 2012-09-03T00:13:53.778Z · LW(p) · GW(p)

It's impossible to find a strategy that produces happiness better than trying to produce happiness, since if you knew of one, you'd try to produce happiness by following that strategy. If this method is what works best, then in doing what works best, you'd follow this method.

Also, linking to TVTropes tends to fall under generalizing from fictional evidence.

Replies from: CronoDAS, Luke_A_Somers
comment by CronoDAS · 2012-09-03T02:40:49.909Z · LW(p) · GW(p)

Art imitates life. ;)

And it's not hard to think of real life examples of atrocities "justified" on utilitarian grounds that the rest of the world thinks are anything but justifiable. The Reign of Terror during the French Revolution, for example, is generally regarded as having gone too far.

comment by Luke_A_Somers · 2012-09-05T04:23:13.644Z · LW(p) · GW(p)

Would it help if the link were aimed at the real life section?

Replies from: zerker2000
comment by zerker2000 · 2012-09-05T20:33:53.850Z · LW(p) · GW(p)

It has been deleted to prevent an edit war.

comment by Kindly · 2012-09-02T21:06:42.102Z · LW(p) · GW(p)

It's a nice sentiment, but the optimization problem you suggest is usually intractable.

Replies from: DanielLC
comment by DanielLC · 2012-09-02T22:51:15.786Z · LW(p) · GW(p)

It's better to at least attempt it than just find an easier problem and do that. You might have to rely on intuition and such to get any answer, but you're not going to do well if you just find something easier to optimize.

Replies from: Kindly
comment by Kindly · 2012-09-02T23:04:22.472Z · LW(p) · GW(p)

Yes, but there's no way a pithy quote is going to solve the problem for you. It might, however, contain a useful heuristic.

comment by wedrifid · 2012-09-03T07:53:58.061Z · LW(p) · GW(p)

How about doing unto others what maximizes total happiness, regardless of what they'd do unto you?

You may do that if you must; I recommend against it.

Replies from: DanielLC
comment by DanielLC · 2012-09-03T17:38:39.674Z · LW(p) · GW(p)

Why do you recommend against it? Do you have a more complicated utility function?

Replies from: mfb
comment by mfb · 2012-09-04T14:15:56.751Z · LW(p) · GW(p)

Most human utility functions give their own happiness more weight than others'. If you take into account that humans increase the happiness of others because doing so makes them happy, you could even say that human utility functions only care about the happiness of their corresponding humans - but that is close to a tautology ("the utility function cares about the utility of the agent only").

comment by Will_Newsome · 2012-09-01T14:22:33.706Z · LW(p) · GW(p)

That quote is really annoying because Jesus says the same thing way better, repeatedly, in the Sermon on the Mount.

Replies from: Yvain, Kindly, wedrifid
comment by Scott Alexander (Yvain) · 2012-09-01T15:21:09.400Z · LW(p) · GW(p)

Jesus used a clever quip to point out the importance of self-monitoring for illusory superiority?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-09-01T15:56:37.162Z · LW(p) · GW(p)

Just read the Sermon on the Mount.

comment by Kindly · 2012-09-01T15:49:00.268Z · LW(p) · GW(p)

What's the Jesus quote? (Or, I guess, one instance of it.)

Replies from: MixedNuts
comment by MixedNuts · 2012-09-01T19:42:14.073Z · LW(p) · GW(p)

On being nicer than you think you should be:

You’ve heard that it was said, "An eye in place of an eye, and a tooth in place of a tooth." But I tell you, don’t oppose someone who is evil. Whoever slaps you on the right cheek, turn the other cheek to him as well; and to whoever wants to get a judgment against you and takes your shirt, give him your coat too; and if someone forces you to go one mile, go with him two.

(Matthew 5:38-41)

On self-superiority bias:

Why do you look at the speck that’s in your brother’s eye, and don’t notice the plank that’s in your own eye?

(Matthew 7:3)

I disagree with Will's interpretation that the former follows from the latter.

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2012-09-01T20:16:48.628Z · LW(p) · GW(p)

Sorry, yeah, I was being a little too trollish. What I meant was that it was a single step implication from combining a few parts of the Sermon on the Mount; the examples you gave are likely indeed the two most representative ones for reaching that conclusion. Out of context I agree the mote and beam exhortation isn't enough.

comment by Will_Newsome · 2012-09-02T01:32:32.757Z · LW(p) · GW(p)

(Also, I disagree with your choice of translation, but that of course doesn't matter in the scheme of things. Just felt that there was a 20% chance you'd care what I thought about that matter.)

comment by wedrifid · 2012-09-01T16:38:41.214Z · LW(p) · GW(p)

That quote is really annoying because Jesus says the same thing way better, repeatedly, in the Sermon on the Mount.

No he didn't. You are wrong about either the religious teaching you advocate or the thing that is being advocated in the grandparent.

Replies from: fubarobfusco, Will_Newsome
comment by fubarobfusco · 2012-09-01T22:24:40.350Z · LW(p) · GW(p)

Eh. What irritates me is his implicit claim that the ideas there are original or exclusive to Jesus.

With the amount of censorship, deliberate credit-stealing, and other failures of memetic replication in the ancient world, the chance is pretty slim that the earliest instance of a moral or religious idea you've heard of is actually its earliest invention. We should expect that the earliest extant sources for an idea do not correctly attribute its origin: there are so many more ways to be wrong than right, and the correct attribution of an idea (or an entertaining story, for that matter) is not under selection pressure to stay correct as the idea itself is.

(Consider that until the 1853 discovery of the Epic of Gilgamesh, a European scholar might well have believed that the story of Noah's Flood originated with the Bible. We similarly know that much of the mythos of Jesus echoes earlier salvific gods and demigods — Mithras, Dionysus, Osiris, etc. — whose cults were later suppressed as pagan.)

So thinking of the moral teachings of Jesus as originally Christian seems problematic. For instance, given the extensive contact between the Near East and India since the time of Alexander, it's reasonable to consider some contact with ideas from Buddhism, Jainism, etc. — as well as the Greek (or Greco-Egyptian) philosophy more readily recognized by Christian sources.

My point here isn't to say that Jesus was a Buddhist, of course — but rather that if we happen to observe what look like moral truths (or just moral good ideas) in one particular tradition, we shouldn't take that tradition seriously when it claims to have discovered them or to possess unique access to them.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-09-02T20:17:34.568Z · LW(p) · GW(p)

What irritates me is his implicit claim that the ideas there are original or exclusive to Jesus.

I don't really care about credit for originality, just beautifulness and deepness of message. Linus' take is ugly, whereas the Sermon is beautifully constructed. Just seems a shame not to go for the latter whenever possible.

Replies from: CronoDAS
comment by CronoDAS · 2012-09-02T23:54:16.972Z · LW(p) · GW(p)

Linus's take fits my aesthetic better, and "beautiful" language is often unclear.

comment by Will_Newsome · 2012-09-01T16:54:14.146Z · LW(p) · GW(p)

mote beam single step implications

Replies from: RomanDavis
comment by RomanDavis · 2012-09-01T18:12:10.748Z · LW(p) · GW(p)

What?

(Upvoted so someone can explain it without Karma cost.)

Replies from: wedrifid, MixedNuts
comment by wedrifid · 2012-09-01T20:03:49.881Z · LW(p) · GW(p)

(Upvoted so someone can explain it without Karma cost.)

Downvoted because feeding Will when he is speaking this kind of pretentious drivel is precisely the kind of thing that the cost is intended to penalize. It is an example of the system working as it should!

(Note that my own earlier reply would be penalized if I made it now and that too would be a desirable outcome. If I was confident that Will's claim about the Sermon on the Mount would be dismissed and downvoted as it has been then I would not have made a response.)

Replies from: Desrtopa
comment by Desrtopa · 2012-09-01T20:54:43.172Z · LW(p) · GW(p)

It is an example of the system working as it should!

Really, it's an example of the system backfiring, causing someone to upvote a comment that deserved the downvoting it would probably otherwise have received.

Replies from: RomanDavis, Viliam_Bur
comment by RomanDavis · 2012-09-01T22:17:36.126Z · LW(p) · GW(p)

That was my point.

comment by Viliam_Bur · 2012-09-03T08:33:42.276Z · LW(p) · GW(p)

What probability do you assign to someone with total karma less than 5 coming along and translating this specific comment of Will_Newsome's into intelligible speech? My estimate is: epsilon.

Breaking a rule, and explaining that it has to be done to provide an opportunity for something with epsilon probability and a very low value even if it happened... that's just an example of a person deliberately breaking a rule, and signalling dissatisfaction with the rule.

Replies from: Alicorn
comment by Alicorn · 2012-09-03T08:34:57.946Z · LW(p) · GW(p)

People respond to incentives. Especially loss-related incentives. I do not give homeless people nickels even though I can afford to give a nearly arbitrary number of homeless people nickels. The set of people with karma less than five will be outright unable to reply - the set of people with karma greater than five will just be disincentivized, and that's still something.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-09-03T09:32:08.669Z · LW(p) · GW(p)

The prior probability of someone being able to explain Will_Newsome's negative-value comments in a way that provides value for LW readers is already epsilon. Even without the disincentives.

I think that people responding less to intentionally meaningless comments is a good thing. Therefore, trivial disincentives for doing so are a good thing. Therefore, removing them in this specific situation is a bad thing.

comment by MixedNuts · 2012-09-01T19:33:58.591Z · LW(p) · GW(p)

Will is referring to Matthew 7:1-5.

Don’t judge others, and you won’t be judged. For whatever standard you use to judge others will be used to judge you, and whatever measurement you use to measure others will be used to measure you. Why do you look at the speck that’s in your brother’s eye, and don’t notice the plank that’s in your own eye? How are you going to say to your brother, "Let me take out the speck from your eye" when you have a plank in your own eye? You hypocrite, first get rid of the plank in your own eye, and then you’ll see clearly to take out the speck from your brother’s eye.

This claims that people underestimate their flaws relative to others'. Will claims that the obvious implication is that one must judge others more leniently to compensate, rather than refraining from judgement entirely as said two sentences earlier.

Replies from: fubarobfusco, Will_Newsome
comment by fubarobfusco · 2012-09-02T04:51:41.085Z · LW(p) · GW(p)

There's also a suggestion of projection there. Having discovered that I have some flaw (say, anger; or baseless faith), I may go about finding the same fault in others — but if I correct the flaw in myself first, the world may look different. The one with road rage drives on a highway populated by assholes and maniacs; the creationist accuses Darwinism of "being a religion".

Replies from: Will_Newsome
comment by Will_Newsome · 2012-09-02T20:19:34.387Z · LW(p) · GW(p)

the creationist accuses Darwinism of "being a religion".

It is, the way they're trying to use that word. Also is intelligent design a type of creationism? 'Cuz I think I like ID, at least more than the standard model. I'd like to think of myself as a human in the reference class "creationist who accuses Darwinism of being a religion".

Replies from: fubarobfusco
comment by fubarobfusco · 2012-09-03T23:23:51.309Z · LW(p) · GW(p)

Someone who claims that faith is a good thing should not also use it as an accusation of impropriety.

The creationist does not claim — before cowans, gentiles, and the unwashed — that Darwinism is the wrong religion; rather, he claims that it is "a religion" as if to say that this is condemnation enough. To fellow creationists he may well say that Darwinism is Satanism, or a rival tribe to be vanquished by force or deception. But he does not expect that argument to fly with outsiders. With them he merely asserts that the (straw-)Darwinist is a hypocrite, a know-it-all elitist nerd who commits the grave faux-pas of mistaking his religion for science.

Meanwhile the sociologist of religion wonders where the temples of Darwin are. The strong-programme sociologist of science (who uses the methodological assumption that science doesn't work, even as he posts on the Internet!) can mistake a laboratory for a center of ritual, but one who has studied comparative religion does not see worship happening in the microscope, the genomics software, or the fMRI.

Replies from: CCC, Will_Newsome
comment by CCC · 2012-09-04T07:37:49.429Z · LW(p) · GW(p)

Someone who claims that faith is a good thing should not also use it as an accusation of impropriety.

I get the impression that that argument is used more to undermine claims that Darwinism is a science than anything else.

Physics is a clear science; you can use the right equations and predict the motion of the Earth about the Sun, or the time a barometer will take to fall from a given height. This gives it a certain degree of credibility. The theory of evolution (and how the creationists love to remind everyone of that word, 'theory'!) is also science; but they would deny it, on the basis that accepting it suggests that it is as credible as physics or mathematics. If they insist that Darwinism is a religion, then both alternatives start from the same basis of credibility; the creationists can then point out, quite accurately, that their version is older and has been around for longer, and therefore at least claim seniority.

There's a short story by Asimov that gives a very nice view of the whole argument.

Replies from: RobinZ
comment by RobinZ · 2012-09-04T17:15:02.442Z · LW(p) · GW(p)

That is a quintessentially Asimovian story. +1.

comment by Will_Newsome · 2012-09-04T06:52:55.931Z · LW(p) · GW(p)

Meanwhile the sociologist of religion wonders where the temples of Darwin are.

Remember that Darwinism is a lot more than biology. Sure, a computer isn't exactly an altar. That doesn't change that most of what universities are famous for in the wider world is their ideology.

comment by Will_Newsome · 2012-09-01T20:20:33.542Z · LW(p) · GW(p)

Eh that's sort of a less charitable reading of me than you could have given. But I suppose you've already walked with me 1.2 miles, and it'd be a stretch for me to ask for .8 more. ;)

Replies from: fubarobfusco
comment by fubarobfusco · 2012-09-01T22:27:57.142Z · LW(p) · GW(p)

One way we say it here is to be cautious of other-optimizing ...

Replies from: Will_Newsome
comment by Will_Newsome · 2012-09-01T22:37:21.273Z · LW(p) · GW(p)

Though sadly sometimes the only alternative is no optimization at all.

Replies from: faul_sname
comment by faul_sname · 2012-09-02T23:17:07.629Z · LW(p) · GW(p)

Yes, but more frequently than that, the only alternative appears to be no optimization at all. Hence the heuristic.

comment by Delta · 2012-09-05T13:09:15.759Z · LW(p) · GW(p)

“A writer who says that there are no truths, or that all truth is ‘merely relative,’ is asking you not to believe him. So don’t.” ― Roger Scruton, Modern Philosophy: An Introduction and Survey

Replies from: simplicio
comment by simplicio · 2012-09-08T00:17:59.775Z · LW(p) · GW(p)

I am sympathetic to this line, but Scruton's dismissal seems a little facile. If somebody says the truth is relative, then they can bite the bullet if they wish and say that THAT truth is also relative, thus avoiding the trap of self-contradiction. It might still be unwise to close your ears to them.

Consider a case where we DO agree that a given subject matter is relative; e.g., taste in ice-cream. Suppose Rosie the relativist tells you: "This ice-cream vendor's vanilla is absolutely horrible, but that's just my opinion and obviously it's relative to my own tastes." You would probably agree that Rosie's opinion is indeed "just relative"... and still give the vanilla a miss this time.

Replies from: Nisan
comment by Nisan · 2012-09-13T12:23:19.862Z · LW(p) · GW(p)

If "this vanilla ice cream is horrible" is relatively true, then "Rosie's opinion is that this vanilla ice cream is horrible" is absolutely true.

comment by Kaj_Sotala · 2012-09-01T18:08:27.416Z · LW(p) · GW(p)

The person who says, as almost everyone does say, that human life is of infinite value, not to be measured in mere material terms, is talking palpable, if popular, nonsense. If he believed that of his own life, he would never cross the street, save to visit his doctor or to earn money for things necessary to physical survival. He would eat the cheapest, most nutritious food he could find and live in one small room, saving his income for frequent visits to the best possible doctors. He would take no risks, consume no luxuries, and live a long life. If you call it living. If a man really believed that other people's lives were infinitely valuable, he would live like an ascetic, earn as much money as possible, and spend everything not absolutely necessary for survival on CARE packets, research into presently incurable diseases, and similar charities.

In fact, people who talk about the infinite value of human life do not live in either of these ways. They consume far more than they need to support life. They may well have cigarettes in their drawer and a sports car in the garage. They recognize in their actions, if not in their words, that physical survival is only one value, albeit a very important one, among many.

-- David D. Friedman, The Machinery of Freedom

Replies from: olalonde, DanielLC
comment by olalonde · 2012-09-06T10:26:29.559Z · LW(p) · GW(p)

Related:

The really important thing is not to live, but to live well. - Socrates

comment by DanielLC · 2012-09-02T20:03:58.829Z · LW(p) · GW(p)

He's just showing that those people don't give infinite value, not that it's nonsense. It's nonsense because, even if you consider life infinitely more intrinsically valuable than a green piece of paper, you'd still trade a life for green pieces of paper, so long as you could trade them back for more lives.

Replies from: RobinZ
comment by RobinZ · 2012-09-02T22:23:33.551Z · LW(p) · GW(p)

If life were of infinite value, trading a life for two new lives would be a meaningless operation - infinity times two is equal to infinity. Not unless by "life has infinite value" you actually mean "everything else is worthless".

Replies from: fiddlemath
comment by fiddlemath · 2012-09-02T22:44:58.104Z · LW(p) · GW(p)

Not quite so! We could presume that value isn't restricted to the reals + infinity, but say that something's value is a value among the ordinals. Then, you could totally say that life has infinite value, but two lives have twice that value.

But this gives non-commutativity of value. Saving a life and then getting $100 is better than getting $100 and saving a life, which I admit seems really screwy. This also violates the Von Neumann-Morgenstern axioms.

In fact, if we claim that a slice of bread is of finite value, and, say, a human life is of infinite value in any definition, then we violate the continuity axiom... which is probably a stronger counterargument, and tightly related to the point DanielLC makes above.
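
For concreteness, the non-commutativity mentioned above is a standard fact of ordinal arithmetic. Writing ω for the infinite value of a life and counting $100 as 100 finite units (an illustrative encoding, not anything from the thread):

$$100 + \omega = \omega < \omega + 100$$

Adding 100 before ω is absorbed (the supremum of 100 + n over finite n is still ω), while adding 100 after ω gives a strictly larger ordinal, so "getting $100 and then saving a life" sums to strictly less value than "saving a life and then getting $100".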

Replies from: The_Duck, DanielLC, benelliott
comment by The_Duck · 2012-09-06T22:17:50.701Z · LW(p) · GW(p)

If we want to assign infinite value to lives compared to slices of bread, we don't need exotic ideas like transfinite ordinals. We can just define value as an ordered pair (# of lives, # of slices of bread). When comparing values we first compare # of lives, and only use # of slices of bread as a tiebreaker. This conforms to the intuition of "life has infinite value" and still lets you care about bread without any weird order-dependence.

This still violates the continuity axiom, but that, of itself, is not an argument against a set of preferences. As I read it, claiming "life has infinite value" is an explicit rejection of the continuity axiom.

Of course, Kaj Sotala's point in the original comment was that in practice people demonstrate by their actions that they do accept the continuity axiom; that is, they are willing to trade a small risk of death in exchange for mundane benefits.
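
A minimal sketch of this ordered-pair scheme (Python; the class and its names are mine, purely illustrative):

```python
from functools import total_ordering

@total_ordering
class Value:
    """Ordered-pair value: lives dominate, bread only breaks ties."""
    def __init__(self, lives, bread):
        self.lives, self.bread = lives, bread

    def __eq__(self, other):
        return (self.lives, self.bread) == (other.lives, other.bread)

    def __lt__(self, other):
        # Python tuples compare lexicographically: lives first, then bread.
        return (self.lives, self.bread) < (other.lives, other.bread)

    def __add__(self, other):
        return Value(self.lives + other.lives, self.bread + other.bread)

# One life outranks any finite pile of bread:
assert Value(1, 0) > Value(0, 10**9)
# Unlike ordinal-valued utility, addition is commutative, so no order-dependence:
assert Value(1, 0) + Value(0, 100) == Value(0, 100) + Value(1, 0)
```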

comment by DanielLC · 2012-09-02T22:55:37.101Z · LW(p) · GW(p)

You could use hyperreal numbers. They behave pretty similarly to reals, and have reals as a subset. Also, if you multiply any hyperreal number besides zero by a real number, you get something isomorphic to the reals, so you can multiply by infinity and it still will work the same.

I'm not a big fan of the continuity axiom. Also, if you allow for hyperreal probabilities, you can still get it to work.

Replies from: Decius, Eugine_Nier
comment by Decius · 2012-09-03T03:32:35.848Z · LW(p) · GW(p)

Also, if you multiply any hyperreal number besides zero by a real number, you get something isomorphic to the reals,

True

so you can multiply by infinity and it still will work the same.

Only if you have a way to describe infinity in terms of a real number.

Replies from: DanielLC
comment by DanielLC · 2012-09-03T04:35:58.001Z · LW(p) · GW(p)

Only if you have a way to describe infinity in terms of a real number.

You just pick some infinite hyperreal number and multiply all the real numbers by that. What's the problem?

Replies from: Decius
comment by Decius · 2012-09-03T18:19:21.504Z · LW(p) · GW(p)

Oh, you're saying assign a hyperreal infinite numbers to the value of individual lives. That works, but be very careful how you value life. Contradictions and absurdities are trivial to develop when one aspect is permitted to override every other one.

comment by Eugine_Nier · 2012-09-03T03:03:49.205Z · LW(p) · GW(p)

You could use hyperreal numbers.

At which point why not just re-normalize everything so that you're only dealing with reals?

Replies from: DanielLC
comment by DanielLC · 2012-09-03T04:33:34.426Z · LW(p) · GW(p)

You could have something have infinite value and something else have finite value. Since this has an infinitesimal chance of actually mattering, it's a silly thing to do. I was just pointing out that you could assign something infinite utility and have it make sense.

comment by benelliott · 2012-09-06T21:00:47.407Z · LW(p) · GW(p)

But this gives non-associativity of value.

Nitpick, I think you mean non-commutativity, the ordinals are associative. The rest of your post agrees with this interpretation.

Replies from: fiddlemath
comment by fiddlemath · 2012-09-12T01:39:50.771Z · LW(p) · GW(p)

Oops, yes. Edited in original; thanks!

comment by alex_zag_al · 2012-09-06T19:56:42.399Z · LW(p) · GW(p)

There is something about practical things that knocks us off our philosophical high horses. Perhaps Heraclitus really thought he couldn't step in the same river twice. Perhaps he even received tenure for that contribution to philosophy. But suppose some other ancient had claimed to have as much right as Heraclitus did to an ox Heraclitus had bought, on the grounds that since the animal had changed, it wasn't the same one he had bought and so was up for grabs. Heraclitus would have quickly come up with some ersatz, watered-down version of identity of practical value for dealing with property rights, oxen, lyres, vineyards, and the like. And then he might have wondered if that watered-down vulgar sense of identity might be a considerably more valuable concept than a pure and philosophical sort of identity that nothing has.

John Perry, introduction to Identity, Personal Identity, and the Self

Replies from: DanielLC, mrglwrf
comment by DanielLC · 2012-09-10T05:33:12.677Z · LW(p) · GW(p)

He bought the present ox along with the future ox. He could have just bought the present ox, or at least a shorter interval of one. This is known as "renting".

Replies from: ciphergoth, CCC
comment by Paul Crowley (ciphergoth) · 2013-01-26T11:30:58.051Z · LW(p) · GW(p)

Which future ox did he buy?

Replies from: DanielLC
comment by DanielLC · 2013-01-27T01:31:57.623Z · LW(p) · GW(p)

Sorry. The future oxen.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-01-27T20:05:09.490Z · LW(p) · GW(p)

Of the many oxen at a given point in time in the future, which one did he buy?

Replies from: DanielLC
comment by DanielLC · 2013-01-27T20:37:54.417Z · LW(p) · GW(p)

Oh. I see what you mean.

He bought the ones on the future side of that worldline.

It's convenient that way, and humans are good at keeping track. He could have bought any combination of future oxen the guy owns. This has the advantage of later oxen being in the same area as earlier oxen, simplifying transportation.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-01-27T20:46:35.610Z · LW(p) · GW(p)

I'm not making myself clear. It's clear from what you say that if Heraclitus bought an ox yesterday, he owns an ox today. But in order to say that he owns this particular ox, he needs a better system of identity than "you never step into the same river twice".

Replies from: DanielLC
comment by DanielLC · 2013-01-28T01:09:34.332Z · LW(p) · GW(p)

It's a sensible system for deciding how to buy and sell oxen because it minimizes shipping costs. It's a less sensible way to, for example, judge the value of a person. Should I choose Alice or Bob just because Bob is at the beginning of a world-line and Alice is not?

This does kind of come back to arguing definitions. The common idea of identity is really useful. If a philosopher thinks otherwise, he's overthinking it. "Identity" refers to something. I just don't think it's anything beyond that. You in principle could base your ethics on it, but I see no reason to. It's not as if it's something anybody can experience. If you base your anthropics on it, you'll only end up confusing yourself.

comment by CCC · 2012-09-10T06:22:40.779Z · LW(p) · GW(p)

Alternatively, he purchased the present ox using the ox-an-hour-ago as payment.

Replies from: DanielLC
comment by DanielLC · 2012-09-10T07:03:13.652Z · LW(p) · GW(p)

No. There is nobody to make that transaction with, and his past self still used the past ox, so he can't sell it.

comment by mrglwrf · 2012-09-10T21:10:05.827Z · LW(p) · GW(p)

What those eggheads need is a good ass-kicking!

A praiseworthy sentiment.

comment by katydee · 2012-09-07T02:15:02.396Z · LW(p) · GW(p)

If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being, and who is willing to destroy a piece of his own heart?

Solzhenitsyn

Replies from: shminux, Nisan, gjm, DanArmak
comment by shminux · 2012-09-07T18:19:58.727Z · LW(p) · GW(p)

But the line dividing good and evil cuts through the heart of every human being, and who is willing to destroy a piece of his own heart?

If only it were a line. Or even a vague boundary between clearly defined good and clearly defined evil. Or if good and evil were objectively verifiable notions.

Replies from: simplicio, beberly37
comment by simplicio · 2012-09-08T00:05:22.758Z · LW(p) · GW(p)

Or even a vague boundary between clearly defined good and clearly defined evil.

You don't think even a vague boundary can be found? To me it seems pretty self-evident by looking at extremes; e.g., torturing puppies all day is obviously worse than playing with puppies all day.

By no means am I secure in my metaethics (i.e., I may not be able to tell you in exquisite detail WHY the former is wrong). But even if you reduced my metaethics down to "whatever simplicio likes or doesn't like," I'd still be happy to persecute the puppy-torturers and happy to call them evil.

Replies from: FeepingCreature, disinter
comment by FeepingCreature · 2012-09-12T11:20:34.206Z · LW(p) · GW(p)

You don't think even a vague boundary can be found? To me it seems pretty self-evident by looking at extremes; e.g., torturing puppies all day is obviously worse than playing with puppies all day.

Animal testing.

And even enjoying torturing puppies all day is merely considered "more evil" because it's a predictor of psychopathy.

Replies from: simplicio
comment by simplicio · 2012-09-13T00:16:09.173Z · LW(p) · GW(p)

So I think maybe I leapt into this exchange carelessly, without being clear about what I was defending. I am defending the meaningfulness & utility of a distinction between good & evil actions (not states of affairs). Note that a distinction does not require a sharp dividing line (yellow is not the same as red, but the transition is not sudden).

I also foresee a potential disagreement about meta-ethics, but that is just me "extrapolating the trend of the conversation."

Anyway, getting back to good vs evil: I am not especially strict about my use of the word "evil" but I generally use it to describe actions that (a) do a lot of harm without any comparably large benefit, AND (b) proceed from a desire to harm sentient creatures.

Seen in this light it is obvious why torturing puppies is evil, playing with them is good, and testing products on them is ethically debatable (but not evil, because of the lack of desire to harm). None of this is particularly earth-shattering as philosophical doctrine.

And even enjoying torturing puppies all day is merely considered "more evil" because it's a predictor of psychopathy.

Not if you think animals' interests count morally, which I do explicitly, and virtually everybody does implicitly.

Replies from: FeepingCreature
comment by FeepingCreature · 2012-09-13T06:52:47.554Z · LW(p) · GW(p)

I think your philosophy is probably fairly normal; it's just that any attempt to simplify such things looks like an open challenge to point out corner cases. Don't take it too seriously.

Also I'm not fully convinced on whether animals' interests count morally, even though they do practically by virtue of triggering my empathy. Aside from spiders. Those can just burn. (Which is an indicator that animals only count to me because they trigger my empathy, not because I care)

Replies from: faul_sname
comment by faul_sname · 2012-09-17T17:33:35.270Z · LW(p) · GW(p)

Aside from spiders. Those can just burn.

But.. but.. they just want to give you a hug.

comment by disinter · 2012-09-10T03:02:19.325Z · LW(p) · GW(p)

You point out that there are acts easily agreed to be evil and acts easily agreed to be good, but that doesn't imply a definable boundary between good and evil. First postulate a boundary between good and evil. Now, what is necessary to refute that boundary? A clearly defined boundary would require actions that fall near the boundary to always fall to one side or the other without fail. Easily, that is not the case. Stealing food is clearly evil if you have no need but the victim has need for the food. If the needs are opposite, then it is not clearly evil. So there is no clear boundary, but what would a vague boundary require? I think a vague boundary requires that actions can be ranked in a vague progression from "certainly good" through "overall good, slightly evil", descending through progressively less good zones as they approach from one side, then crossing an "evil=~good" area into a progressively more evil side. I do not see that this is necessarily the case.

Replies from: simplicio
comment by simplicio · 2012-09-13T00:54:23.805Z · LW(p) · GW(p)

Stealing food is clearly evil if you have no need but the victim has need for the food. If the needs are opposite, then it is not clearly evil. So there is no clear boundary, but what would a vague boundary require?

You are pointing to different actions labeled stealing and saying "one is good and the other is evil." Yeah, obviously, but that is no contradiction - they are different actions! One is the action of stealing in dire need, the other is the action of stealing without need.

This is a very common confusion. Good and evil (and ethics) are situation-dependent, even according to the sternest, most thundering of moralists. That does not tell us anything one way or the other about objectivity. The same action in the same situation with the same motives is ethically the same.

Replies from: disinter
comment by disinter · 2012-09-13T23:21:40.945Z · LW(p) · GW(p)

Thank you for pointing out my confusion. I've lost confidence that I have any idea what I'm talking about on this issue.

comment by beberly37 · 2012-09-10T16:08:06.838Z · LW(p) · GW(p)

I think the intermediate value theorem covers this. Meaning if a function has positive and negative values (good and evil) and it is continuous (I would assume a "vague boundary" or "grey area" or "goodness spectrum" to be continuous) then there must be at least one zero value. That zero value is the boundary.

Replies from: shminux
comment by shminux · 2012-09-10T16:54:51.710Z · LW(p) · GW(p)

It would indeed cover this if the goodness spectrum were a regular function, not a set-valued map. Unfortunately, the same thoughts and actions can correspond to different shades of good and evil, even in the mind of the same person, let alone of different people. Often at the same time, too.

Replies from: simplicio
comment by simplicio · 2012-09-13T01:01:24.516Z · LW(p) · GW(p)

Unfortunately, the same thoughts and actions can correspond to different shades of good and evil, even in the mind of the same person [emphasis mine]

This shows that there is disagreement & confusion about what is good & what is evil. That no more proves good & evil are meaningless, than disagreement about physics shows that physics is meaningless.

Actually, disagreement tends to support the opposite conclusion. If I say fox-hunting is good and you say it's evil, although we disagree on fox-hunting, we seem to agree that only one of us can possibly be right. At the very least, we agree that only one of us can win.

comment by Nisan · 2012-09-13T12:19:09.473Z · LW(p) · GW(p)

But the line dividing Kansas and Nebraska cuts through the heart of every human being. And who is willing to grow corn on his own heart?

— Steven Kaas

comment by gjm · 2012-09-16T22:00:35.062Z · LW(p) · GW(p)

Duplicate.

comment by DanArmak · 2012-09-07T18:13:42.894Z · LW(p) · GW(p)

Clearly, the good piece of each heart is willing to destroy what it thinks of as the evil piece. It's just a question of what piece we choose to identify with as representing the "real person".

comment by lukeprog · 2012-09-09T01:36:00.187Z · LW(p) · GW(p)

A problem well stated is a problem half solved.

Charles Kettering

Replies from: thomblake, SilasBarta
comment by thomblake · 2012-09-10T20:01:58.098Z · LW(p) · GW(p)

A problem sufficiently well-stated is a problem fully solved.

comment by SilasBarta · 2012-09-09T01:41:22.555Z · LW(p) · GW(p)

Wow, I didn't even know that's a quote from someone! I had inferred that (mini)lesson from a lecture I heard, but it wasn't stated in those terms, and I never checked if someone was already known for that.

comment by David Althaus (wallowinmaya) · 2012-09-02T22:30:35.543Z · LW(p) · GW(p)

Nobody is smart enough to be wrong all the time.

Ken Wilber

Replies from: Alejandro1, MileyCyrus, Daniel_Burfoot, CronoDAS, Plubbingworth
comment by Alejandro1 · 2012-09-03T03:35:59.729Z · LW(p) · GW(p)

"But I tell you he couldn't have written such a note!" cried Flambeau. "The note is utterly wrong about the facts. And innocent or guilty, Dr Hirsch knew all about the facts."

"The man who wrote that note knew all about the facts," said his clerical companion soberly. "He could never have got 'em so wrong without knowing about 'em. You have to know an awful lot to be wrong on every subject—like the devil."

"Do you mean—?"

"I mean a man telling lies on chance would have told some of the truth," said his friend firmly. "Suppose someone sent you to find a house with a green door and a blue blind, with a front garden but no back garden, with a dog but no cat, and where they drank coffee but not tea. You would say if you found no such house that it was all made up. But I say no. I say if you found a house where the door was blue and the blind green, where there was a back garden and no front garden, where cats were common and dogs instantly shot, where tea was drunk in quarts and coffee forbidden—then you would know you had found the house. The man must have known that particular house to be so accurately inaccurate."

--G.K. Chesterton, "The Duel of Dr. Hirsch"

Replies from: RomanDavis
comment by RomanDavis · 2012-09-03T05:23:47.127Z · LW(p) · GW(p)

Reversed malevolence is intelligence?

Replies from: Viliam_Bur, fubarobfusco
comment by Viliam_Bur · 2012-09-03T08:18:07.989Z · LW(p) · GW(p)

Inverted information is not random noise.

Replies from: RomanDavis
comment by RomanDavis · 2012-09-03T21:15:45.941Z · LW(p) · GW(p)

...unless you're reversing noise which is why Reverse Stupidity is not Intelligence.

comment by fubarobfusco · 2012-09-04T17:22:16.682Z · LW(p) · GW(p)

If someone tells you the opposite of the truth in order to deceive you, and you believe the opposite of what they say because you know they are deceitful, then you believe the truth. (A knave is as good as a knight to a blind bat.) The problem is, a clever liar doesn't lie all the time, but only when it matters.

Replies from: DanielLC, TheOtherDave
comment by DanielLC · 2012-09-06T08:00:03.204Z · LW(p) · GW(p)

It's more likely that they're a stupid liar than that they got it all wrong by chance.

comment by TheOtherDave · 2012-09-04T17:35:00.966Z · LW(p) · GW(p)

Another problem is that for many interesting assertions X, opposite(opposite(X)) does not necessarily equal X. Indeed, opposite(opposite(X)) frequently implies NOT X.

Replies from: Alejandro1
comment by Alejandro1 · 2012-09-05T13:42:28.534Z · LW(p) · GW(p)

Could you give an example? I would have thought this happens with Not(opposite(X)); for example, "I don't hate you" is different than "I love you", and in fact implies that I don't. But I would have thought "opposite" was symmetric, so opposite(opposite(X)) = X.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-05T15:18:34.300Z · LW(p) · GW(p)

Well, OK. So suppose (to stick with your example) I love you, and I want to deceive you about it by expressing the opposite of what I feel. So what do I say?

You seem to take for granted that opposite("I love you") = "I hate you." And not, for example, "I am indifferent to you." Or "You disgust me." Or various other assertions. And, sure, if "I love you" has a single, unambiguous opposite, and the opposite also has a single, unambiguous opposite, then my statement is false. But it's not clear to me that this is true.

If I end up saying "I'm indifferent to you" and you decide to believe the opposite of that... well, what do you believe?

Of course, simply negating the truth ("I don't love you") is unambiguously arrived at, and can be thought of as an opposite... though in practice, that's often not what I actually do when I want to deceive someone, unless I've been specifically accused of the truth. ("We're not giant purple tubes from outer space!")

comment by MileyCyrus · 2012-09-03T03:11:36.470Z · LW(p) · GW(p)

Lol, my professor would give a 100% to anyone who answered every exam question wrong. There were a couple of people who pulled it off, but most scored between 0 and 10.

Replies from: Decius
comment by Decius · 2012-09-03T03:26:56.920Z · LW(p) · GW(p)

I'm assuming a multiple-choice exam, and invalid answers don't count as 'wrong' for that purpose?

Otherwise I can easily miss the entire exam with "Tau is exactly six." or "The battle of Thermopylae" repeated for every answer. Even if the valid answers are [A;B;C;D].

Replies from: MugaSofer
comment by MugaSofer · 2012-10-01T11:51:13.815Z · LW(p) · GW(p)

"The battle of Thermopylae" repeated for every answer.

Unless it really was the battle of Thermopylae. Not having studied, you won't know.

Replies from: Decius, shokwave
comment by Decius · 2012-10-01T21:34:22.116Z · LW(p) · GW(p)

"The Battle of Thermopylae" is intended as the alternate for questions which might have "Tau is exactly six" as the answer.

For example: "What would be one consequence of a new state law which defines the ratio of a circle's circumference to diameter as exactly three?"

I bet that you can't write a question for which "Tau is exactly six." and "The battle of Thermopylae" are both answers which gain any credit...

Replies from: Alicorn
comment by Alicorn · 2012-10-01T21:38:50.328Z · LW(p) · GW(p)

I bet that you can't write a question for which "Tau is exactly six." and "The battle of Thermopylae" are both answers which gain any credit...

"Write a four word phrase or sentence."

Replies from: Decius, shminux, Jay_Schweikert
comment by Decius · 2012-10-02T01:16:42.629Z · LW(p) · GW(p)

You win.

comment by shminux · 2012-10-01T23:21:27.664Z · LW(p) · GW(p)

Judging by this and your previous evil genie comments, you'd make a lovely UFAI.

comment by Jay_Schweikert · 2012-10-01T23:14:38.133Z · LW(p) · GW(p)

I hate to break up the fun, and I'm sure we could keep going on about this, but Decius's original point was just that giving a wrong answer to an open-ended question is trivially easy. We can play word games and come up with elaborate counter-factuals, but the substance of that point is clearly correct, so maybe we should just move on.

Replies from: Decius
comment by Decius · 2012-10-02T01:43:41.117Z · LW(p) · GW(p)

That was exactly the challenge I issued. Granted, it's trivial to write an answer which is wrong for that question, but it shows that I can't find a wrong answer for an arbitrary question as easily as I thought I could.

comment by shokwave · 2012-10-01T14:29:49.435Z · LW(p) · GW(p)

"The duel between the current King of France and the former Emperor of Britain". There, the answer won't ever be that phrase.

Replies from: army1987, MugaSofer
comment by A1987dM (army1987) · 2012-10-01T15:40:59.348Z · LW(p) · GW(p)

What if you are in a literature class and you're taking a test about a fiction book you didn't read?

comment by MugaSofer · 2012-10-10T08:03:29.862Z · LW(p) · GW(p)

Care to bet on that?

comment by Daniel_Burfoot · 2012-09-04T12:08:03.785Z · LW(p) · GW(p)

An interesting corollary of the efficient market hypothesis is that, neglecting overhead due to things like brokerage fees and assuming trades are not large enough to move the market, it should be just as difficult to lose money trading securities as it is to make money.
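
A toy illustration of that corollary (a minimal Python sketch; the zero-drift price model, the trade probability, and all numbers are made-up assumptions):

```python
import random

def random_trader_pnl(steps=1000):
    """Mark-to-market profit of a trader who flips between long and flat
    at random, on a zero-drift (martingale) price, with no fees."""
    price, position, cash = 100.0, 0, 0.0
    for _ in range(steps):
        price += random.gauss(0, 1)    # fair price: expected change is zero
        if random.random() < 0.1:      # occasionally flip the position
            if position == 0:
                position, cash = 1, cash - price   # buy one share
            else:
                position, cash = 0, cash + price   # sell it back
    return cash + position * price     # value any open position at market

random.seed(1)
pnls = [random_trader_pnl() for _ in range(2000)]
print(sum(pnls) / len(pnls))  # hovers near 0: losing is as hard as winning
```

With zero drift and no fees, any trading rule that can't see the future has expected profit zero, which is the sense in which losing money is as hard as making it.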

Replies from: Salutator
comment by Salutator · 2012-09-04T12:28:10.095Z · LW(p) · GW(p)

No, not really. In an efficient market, risks uncorrelated with those of other securities shouldn't be compensated, so you should easily be able to screw yourself over by not diversifying.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2012-09-05T00:06:08.751Z · LW(p) · GW(p)

But isn't the risk of not diversifying compensated by a corresponding possibility of large reward if the sector outperforms? I wouldn't consider a strategy that produces modest losses with high probability but large gains with low probability sufficient to disprove my claim.

Replies from: Salutator
comment by Salutator · 2012-09-05T08:20:05.499Z · LW(p) · GW(p)

Let's go one step back on this, because I think our point of disagreement is earlier than I thought in that last comment.

The efficient market hypothesis does not claim that the profit on all securities has the same expectation value. EMH-believers don't deny, for example, the empirically obvious fact that this expectation value is higher for insurers than for more predictable businesses. Also, you can always increase your risk and expected profit by leverage, i.e. by investing borrowed money.

This is because markets are risk-averse, so that on the same expectation value you get paid extra to accept a higher standard deviation. Out- or underperforming the market is really easy: just accept more or less risk than it does on average. The claim is not that the expectation value will be the same for every security, only that the price of every security will be consistent with the same prices for risk and expected profit.

So if the EMH is true, you cannot get a better deal on expected profit without also accepting higher risk, and you cannot get a higher risk premium than other people. But you can still get lots of different trade-offs between expected profit and risk.

Now can you do worse? Yes, because you can separate two types of risk.

Some risks are highly specific to individual companies. For example, a company may be in trouble if a key employee gets hit by a beer truck. That's uncorrelated risk. Other risks affect the whole economy, like revolutions, asteroids or the boom-bust-cycle. That's correlated risk.

Diversification can insure you against uncorrelated risk because, by definition, it's independent of the risk of other parts of your portfolio, so it's extremely unlikely for many of your diverse investments to be affected at the same time. So if everyone is properly diversified, no one actually needs to bear uncorrelated risk. In an efficient market that means it doesn't earn any compensation.

Correlated risk is not eliminated by diversification, because it is by definition the risk that affects all your diversified investments simultaneously.

So if you don't diversify you are taking on uncorrelated risk without getting paid for it. If you do that, you could get a strictly better deal by taking on a correlated risk of the same magnitude, which you would get paid for. And since that is what the market is doing on average, you can get a worse deal than it does.
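
A minimal Python sketch of this argument, with made-up illustrative numbers: every asset shares one correlated market shock plus its own uncorrelated shock, so diversifying leaves the expected return unchanged while the standard deviation shrinks toward the correlated floor.

```python
import random
from statistics import mean, pstdev

random.seed(0)

def portfolio_returns(n_assets, n_trials=10000):
    """Equal-weight portfolio: one shared market shock plus per-asset shocks."""
    results = []
    for _ in range(n_trials):
        market = random.gauss(0.05, 0.15)  # correlated risk: hits every asset
        assets = [market + random.gauss(0.0, 0.30) for _ in range(n_assets)]
        results.append(mean(assets))
    return results

for n in [1, 10, 100]:
    rs = portfolio_returns(n)
    print(n, round(mean(rs), 3), round(pstdev(rs), 3))
```

The expected return stays near 0.05 for every n, but the risk falls from about 0.34 toward the 0.15 market floor: the undiversified holder bears extra uncorrelated risk with no extra expected return, exactly the "worse deal" described above.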

comment by CronoDAS · 2012-09-03T00:28:48.109Z · LW(p) · GW(p)

Unless you're a fictional character. Or possibly Mike "Bad Player" Flores:

There is an episode of Seinfeld where George—a lifelong screw up—decides to do the opposite of his natural instincts and impulses at every turn. He has a great day, lands his job at the Yankees, etc.

I was a superb Onslaught drafter, but there was probably a reason my buddy Scott had a dim confidence in my game play. So I decided to draft normally but pull a George and do the opposite of everything I was inclined to in game.

The result was a Day 2 with a terrible Sealed deck and 3-0 / 6-0 in my first draft. I needed 2-1 for Top 8.

The "Even Steven" part is that at that point I was so full of myself I forgot to do the opposite of what I wanted to do and made about three important mistakes... Exactly enough to land myself one point out of Top 8 at Grand Prix Boston (Kibler won).

Replies from: Document
comment by Document · 2012-09-09T03:11:51.005Z · LW(p) · GW(p)

I thought your first link would be Bloody Stupid Johnson.

The late (or at least severely delayed) Bergholt Stuttley Johnson was generally recognized as the worst inventor in the world, yet in a very specialized sense. Merely bad inventors made things that failed to operate. He wasn’t among these small fry. Any fool could make something that did absolutely nothing when you pressed the button. He scorned such fumble-fingered amateurs. Everything he built worked. It just didn’t do what it said on the box. If you wanted a small ground-to-air missile, you asked Johnson to design an ornamental fountain.

comment by Plubbingworth · 2012-10-01T21:41:26.673Z · LW(p) · GW(p)

This reminds me of an episode of QI, in which Johnny Vegas, who usually throws out random answers for the humor, actually managed to get a question (essentially) right.

comment by katydee · 2012-09-02T21:01:21.481Z · LW(p) · GW(p)

Lady Average may not be as good-looking as Lady Luck, but she sure as hell comes around more often.

Anonymous

Replies from: Omegaile
comment by Omegaile · 2012-09-03T20:35:56.007Z · LW(p) · GW(p)

Not always, since:

The average human has one breast and one testicle

Des McHale

In other words, the average of a distribution is not necessarily the most probable value.
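
A tiny sketch of the McHale point in Python, with made-up 50/50 demographics: the mean of a distribution can be a value that no actual individual has, while the mode tracks the values people actually have.

```python
from statistics import mean, multimode

# Assume, for illustration, a population split evenly between
# people with two breasts and people with none.
breasts = [2] * 50 + [0] * 50

print(mean(breasts))       # 1.0 -- the "average human"
print(multimode(breasts))  # [2, 0] -- the most common actual values
```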

Replies from: RobinZ, fubarobfusco, simplyeric, shminux
comment by RobinZ · 2012-09-03T21:03:23.909Z · LW(p) · GW(p)

In other words: expect Lady Mode, not Lady Mean.

Replies from: sketerpot, simplyeric
comment by sketerpot · 2012-09-05T21:47:03.791Z · LW(p) · GW(p)

Don't expect her, either. In Russian Roulette, the mode is that you don't die, and indeed that's the outcome for most people who play it. You should, however, expect that there's a very large chance of instadeath, and if you were to play a bunch of games in a row, that (relatively uncommon) outcome would almost certainly kill you.

(A similar principle applies to things like stock market index funds: the mode doesn't matter when all you care about is the sum of the stocks.)

The real lesson is this: always expect Lady PDF.
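
A minimal sketch of the Russian Roulette point, assuming one bullet in a six-chamber revolver: the mode of a single game is survival, but over repeated games the uncommon outcome dominates.

```python
p_survive_one = 5 / 6  # the modal (most likely) outcome of a single game

for games in [1, 10, 50]:
    p_survive_all = p_survive_one ** games
    print(f"{games:>2} games: P(survive them all) = {p_survive_all:.4f}")

#  1 game:  0.8333 -- survival is the mode
# 10 games: 0.1615
# 50 games: 0.0001 -- play long enough and the "uncommon" outcome wins
```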

comment by simplyeric · 2012-09-05T17:31:56.576Z · LW(p) · GW(p)

Not to be a bore but it does say "Lady Average" not "Sir or Madam Average".

comment by fubarobfusco · 2012-09-03T23:11:29.497Z · LW(p) · GW(p)

In my high school health class, for weeks the teacher touted the upcoming event: "Breast and Testicle Day!"

When the anticipated day came, it was of course the day when all the boys go off to one room to learn about testicular self-examination, and all the girls go off to another to learn about breast self-examination. So, in fact, no student actually experienced Breast and Testicle Day.

Replies from: Plubbingworth
comment by Plubbingworth · 2012-09-04T02:30:14.277Z · LW(p) · GW(p)

Much to their chagrin, I'm assuming.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-09-05T04:25:52.365Z · LW(p) · GW(p)

Rather: chagrin and relief.

comment by shminux · 2012-09-04T19:59:53.825Z · LW(p) · GW(p)

Lady Main Mode? Does not sound that good. Lady Median?

Replies from: RobinZ
comment by RobinZ · 2012-09-04T22:11:48.097Z · LW(p) · GW(p)

If you're asking who comes around most often, Lady Mode it is - we can't help how it sounds.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-09-05T15:47:50.067Z · LW(p) · GW(p)

Lady Mode is the most fashionable.

comment by Jayson_Virissimo · 2012-09-03T08:59:50.388Z · LW(p) · GW(p)

...beliefs are like clothes. In a harsh environment, we choose our clothes mainly to be functional, i.e., to keep us safe and comfortable. But when the weather is mild, we choose our clothes mainly for their appearance, i.e., to show our figure, our creativity, and our allegiances. Similarly, when the stakes are high we may mainly want accurate beliefs to help us make good decisions. But when a belief has few direct personal consequences, we in effect mainly care about the image it helps to project.

-Robin Hanson, Human Enhancement

Replies from: zslastman, buybuydandavis
comment by zslastman · 2012-09-03T18:32:55.139Z · LW(p) · GW(p)

I feel like Hanson's admittedly insightful "signaling" hammer has him treating everything as a nail.

Replies from: Matt_Caulfield, Nominull, army1987
comment by Matt_Caulfield · 2012-09-03T22:27:36.762Z · LW(p) · GW(p)

Your contrarian stance against a high-status member of this community makes you seem formidable and savvy. Would you like to be allies with me? If yes, then the next time I go foraging I will bring you back extra fruit.

comment by Nominull · 2012-09-03T22:53:57.807Z · LW(p) · GW(p)

I agree in principle but I think this particular topic is fairly nailoid in nature.

Replies from: zslastman
comment by zslastman · 2012-09-04T10:26:50.552Z · LW(p) · GW(p)

I'd say it's such a broad subject that there have to be some screws in there as well. I think Hanson has too much faith in the ability of evolved systems to function in a radically changed environment. Even if signaling dominates the evolutionary origins of our brain, it's not advisable to just label everything we do now as directed towards signaling, any more than sex is always directed towards reproduction. You have to get into the nitty gritty of how our minds carry out the signaling. Conspiracy theorists don't signal effectively, though you can probably relate their behavior back to mechanisms originally directed towards, or at least compatible with, signaling.

Also, an ability to switch between clear "near" thinking and fluffy "far" thinking presupposes a rational decision maker to implement the switch. I'm not sure Hanson pays enough attention to how, when, and for what reasons we do this.

comment by A1987dM (army1987) · 2012-09-03T22:40:35.739Z · LW(p) · GW(p)

Same here.

comment by buybuydandavis · 2012-09-03T10:45:47.850Z · LW(p) · GW(p)

I think he's mischaracterizing the issue.

Beliefs serve multiple functions. One is modeling accuracy, another is signaling. It's not whether the environment is harsh or easy, it's which function you need. There are many harsh environments where what you need is the signaling function, and not the modeling function.

Replies from: Delta, army1987
comment by Delta · 2012-09-05T12:44:36.614Z · LW(p) · GW(p)

I think the quote reflects reality (humans aren't naturally rational so their beliefs are conditioned by circumstance), but is better seen as an observation than a recommendation. The best approach should always be to hold maximally accurate beliefs yourself, even if you choose to signal different ones as the situation demands. That way you can gain the social benefits of professing a false belief without letting it warp or distort your predictions.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-09-05T19:03:38.731Z · LW(p) · GW(p)

The best approach should always be to hold maximally accurate beliefs yourself, even if you choose to signal different ones as the situation demands.

No, that wouldn't necessarily be the case. We should expect a cost in effort and effectiveness to try to switch on the fly between the two types of truths. Lots of far truths have little direct predictive value, but lots of signaling value. Why bear the cost for a useless bit of predictive truth, particularly if it is worse than useless and hampers signaling?

That's part of the magic of magisteria - segregation of modes of truth by topic reduces that cost.

Replies from: Delta
comment by Delta · 2012-09-06T12:58:37.228Z · LW(p) · GW(p)

Hmm, maybe I shouldn't have said "always", given that acting ability is required to signal a belief you don't hold, but I do think what I suggest is the ideal. I think someone who trained themselves to do what I suggest, by studying people skills and so forth, would do better, as they'd get the social benefits of conformity without the disadvantages of false beliefs clouding predictions (though admittedly the time investment of learning these skills would have to be considered).

Short version: I think this is possible with training and would make you "win" more often, and thus it's what a rationalist would do (unless the cost of training proved prohibitive, of which I'm doubtful since these skills are very transferable).

I'm not sure what you meant by the magisteria remark, but I get the impression that advocating spiritual/long-term beliefs to less stringent standards than short term ones isn't generally seen as a good thing (see Eliezer's "Outside the Laboratory" post among others).

comment by A1987dM (army1987) · 2012-09-03T22:41:17.086Z · LW(p) · GW(p)

Beliefs serve multiple functions. One is modeling accuracy, another is signaling.

Clothes serve multiple functions. One is keeping warm, another is signalling.

comment by Jayson_Virissimo · 2012-09-01T08:25:27.976Z · LW(p) · GW(p)

Infallible, adj. Incapable of admitting error.

-L. A. Rollins, Lucifer's Lexicon: An Updated Abridgment

comment by Peter Wildeford (peter_hurford) · 2012-09-01T18:19:37.327Z · LW(p) · GW(p)

"He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his candle at mine, receives light without darkening me. No one possesses the less of an idea, because every other possesses the whole of it." - Jefferson

Replies from: Matt_Caulfield
comment by Matt_Caulfield · 2012-09-03T17:40:36.598Z · LW(p) · GW(p)

But many people do benefit greatly from hoarding or controlling the distribution of scarce information. If you make your living off slavery instead, then of course you can be generous with knowledge.

Replies from: Vaniver, CCC
comment by Vaniver · 2012-09-03T18:16:57.754Z · LW(p) · GW(p)

Or if, say, you run a university.

comment by CCC · 2012-09-05T06:35:45.594Z · LW(p) · GW(p)

If you do not hoard your ideas, and neither do I, then we can both benefit from the ideas of the other. If I can access the ideas of a hundred other people at the cost of sharing my own ideas, then I profit; no matter how smart I am, a hundred other people working the same problem are going to be able to produce at least some ideas that I did not think of. (This is a benefit of free/open source software; it has been shown experimentally to work pretty well in the right circumstances).

comment by CronoDAS · 2012-09-06T11:05:03.081Z · LW(p) · GW(p)

“The goal of the future is full unemployment, so we can play. That’s why we have to destroy the present politico-economic system.” This may sound like the pronouncement of some bong-smoking anarchist, but it was actually Arthur C. Clarke, who found time between scuba diving and pinball games to write “Childhood’s End” and think up communications satellites. My old colleague Ted Rall recently wrote a column proposing that we divorce income from work and give each citizen a guaranteed paycheck, which sounds like the kind of lunatic notion that’ll be considered a basic human right in about a century, like abolition, universal suffrage and eight-hour workdays. The Puritans turned work into a virtue, evidently forgetting that God invented it as a punishment.

-- Tim Kreider

The interesting part is the phrase "which sounds like the kind of lunatic notion that’ll be considered a basic human right in about a century, like abolition, universal suffrage and eight-hour workdays." If we can anticipate what the morality of the future would be, should we try to live by it now?

Replies from: RolfAndreassen, TheOtherDave, Richard_Kennaway, Viliam_Bur, Eugine_Nier, thomblake, DanielLC, taelor, Eugine_Nier, Thomas
comment by RolfAndreassen · 2012-09-06T16:24:08.271Z · LW(p) · GW(p)

If we can anticipate what the morality of the future would be, should we try to live by it now?

Not if it's actually the same morality, but depends on technology. For example, strong prohibitions on promiscuity are very sensible in a world without cheap and effective contraceptives. Anyone who tried to live by 2012 sexual standards in 1912 would soon find they couldn't feed their large horde of kids. Likewise, if robots are doing all the work, fine; but right now if you just redistribute all money, no work gets done.

Replies from: army1987, shminux, Alicorn
comment by A1987dM (army1987) · 2012-09-07T00:56:57.542Z · LW(p) · GW(p)

Lack of technology was not the reason condoms weren't as widely available in 1912.

comment by shminux · 2012-09-06T18:08:34.580Z · LW(p) · GW(p)

Right idea, not a great example. People used to have lots more kids than now, most dying in childhood. The majority of women of childbearing age (gay or straight) were married and having children as often as their bodies allowed, so promiscuity would not have changed much. Maybe a minor correction for male infertility and sexual boredom in a standard marriage.

Replies from: RolfAndreassen, Desrtopa
comment by RolfAndreassen · 2012-09-06T19:50:38.112Z · LW(p) · GW(p)

You seem to have rather a different idea of what I meant by "2012 standards". Even now we do not really approve of married people sleeping around. We do, however, approve of people not getting married until age 25 or 30 or so, but sleeping with whoever they like before that. Try that pattern without contraception.

Replies from: CCC
comment by CCC · 2012-09-07T07:49:29.459Z · LW(p) · GW(p)

We do, however, approve of people not getting married until age 25 or 30 or so, but sleeping with whoever they like before that.

You might. I don't. This is most probably a cultural difference. There are people in the world today who see nothing wrong with having multiple wives, given the ability to support them (example: Jacob Zuma).

comment by Desrtopa · 2012-09-06T18:41:11.103Z · LW(p) · GW(p)

Strong norms against promiscuity out of wedlock still made sense though, since having lots of children without a committed partner to help care for them would usually have been impractical.

comment by Alicorn · 2012-09-06T17:42:53.851Z · LW(p) · GW(p)

Anyone who tried to live by 2012 sexual standards in 1912 would soon find they couldn't feed their large horde of kids.

Not if they were gay.

Replies from: None
comment by [deleted] · 2012-09-06T17:53:10.495Z · LW(p) · GW(p)

Then they'd just be dead, or imprisoned.

Replies from: Alicorn
comment by Alicorn · 2012-09-06T17:58:35.873Z · LW(p) · GW(p)

We're talking about morality that is based around technology. There is no technological advance that allows us to not criminalize homosexuality now where we couldn't have in the past.

Replies from: None
comment by [deleted] · 2012-09-06T18:12:21.743Z · LW(p) · GW(p)

Naming three:

  1. Condoms.
  2. Widespread circumcision.
  3. Antibiotics.
Replies from: army1987, CronoDAS, Alicorn, shminux
comment by A1987dM (army1987) · 2012-09-07T01:00:14.746Z · LW(p) · GW(p)

Widespread circumcision.

What?

Replies from: CCC
comment by CCC · 2012-09-07T07:43:14.705Z · LW(p) · GW(p)

Didn't the Jews have that back in the years BC? It's sort of cultural, but it's been around for a while in some cultures...

comment by Alicorn · 2012-09-06T18:40:13.882Z · LW(p) · GW(p)

I didn't specify promiscuous homosexuality. Monogamously inclined gay people are as protected from STDs as anyone else at a comparable tech level - maybe more so among lesbians.

Replies from: None
comment by [deleted] · 2012-09-06T19:09:02.372Z · LW(p) · GW(p)

Neither did I, but I would rather refrain from explaining in detail why I didn't assume promiscuity.

It's really annoying that you jumped to that conclusion, though. Further, I'm confused why the existence of some minority of a minority of the population that doesn't satisfy the ancestor's hypothetical matters.

comment by shminux · 2012-09-06T18:26:57.849Z · LW(p) · GW(p)

Homosexuality was common/accepted/expected in many societies without leading to any negative consequences, so technology is not an enabler of morality here.

Replies from: Salemicus
comment by Salemicus · 2012-09-06T18:54:26.165Z · LW(p) · GW(p)

Homosexuality has certainly been present in many societies.

However, your link does not state, nor even suggest, that it did not lead to any negative consequences.

comment by TheOtherDave · 2012-09-06T13:11:55.530Z · LW(p) · GW(p)

How do you envision living by this model now working?
That is, suppose I were to embrace the notion that having enough resources to live a comfortable life (where money can stand in as a proxy for other resources) is something everyone ought to be guaranteed.
What ought I do differently than I'm currently doing?

Replies from: Matt_Caulfield
comment by Matt_Caulfield · 2012-09-07T00:03:23.539Z · LW(p) · GW(p)

What ought I do differently than I'm currently doing?

I would like to staple that question to the forehead of every political commentator who makes a living writing columns in far mode. What is it you would like us to do? If you don't have a good answer, why are you talking? Argh.

comment by Richard_Kennaway · 2012-09-06T12:53:24.209Z · LW(p) · GW(p)

If we can anticipate what the morality of the future would be, should we try to live by it now?

Not if the morality you anticipate coming into favour is something you disagree with. If it's something you agree with, it's already yours, and predicting it is just a way of avoiding arguing for it.

comment by Viliam_Bur · 2012-09-06T15:54:05.494Z · LW(p) · GW(p)

If you are a consequentialist, you should think about the consequences of such decision.

For example, imagine a civilization where an average person has to work nine hours to produce enough food to survive. Now the pharaoh makes a new law saying that (a) all produced food has to be distributed equally among all citizens, and (b) no one can be compelled to work more than eight hours; you can work as a volunteer, but all the food you produce is redistributed equally.

What would happen in such a situation? In my opinion, this would be a mass Prisoners' Dilemma in which people would gradually stop cooperating (because the additional hour of work gives them epsilon benefits) and start being hungry. There would be no legal solution; people would try to make some food in their free time illegally, but the unlucky ones would simply starve and die.

The law would seem great in far mode, but its near mode consequences would be horrible. Of course, if the pharaoh is not completely insane, he would revoke the law; but there would be a lot of suffering meanwhile.

If people had "a basic human right to have enough money without having to work", the situation could progress similarly. It depends on many things -- for example, how much of the working people's money you would have to redistribute to the non-working ones, and how much they could keep. Assuming that one's basic human right is to have $500 a month, but if you work, you can keep $3000 a month, some people could still prefer to work. But there is no guarantee it would work long-term. For example, there would be a positive feedback loop -- the more people are non-working, the more votes politicians can gain by promising to increase their "basic human right income", the higher the taxes, and the smaller the incentives to work. Also, it could work for the starting generation, but corrupt the next generation... imagine yourself as a high school student knowing that you will never ever have to work; how much effort would an average student give to studying, instead of e.g. internet browsing, Playstation gaming, or disco and sex? Years later, the same student will be unable to keep a job that requires education.

Also, the fewer people work, the more work goes undone. For example, it will take more time to find a cure for cancer. How would you like a society where no one has to work, but if you become sick, you can't find a doctor? Yes, there would be some doctors, but not enough for the whole population, and most of them would have less education and less experience than today. You would have to pay them a lot of money, because they would be rare, and because most of the money you pay them would be paid back to the state as tax, so even everything you have might not be enough to motivate them.

Replies from: Legolan, khafra, army1987, CronoDAS, jbeshir
comment by Legolan · 2012-09-06T16:44:21.226Z · LW(p) · GW(p)

Systems that don't require people to work are only beneficial if non-human work (or human work not motivated by need) is still producing enough goods that the humans are better off not working and being able to spend their time in other ways. I don't think we're even close to that point. I can imagine societies in a hundred years that are at that point (I have no idea whether they'll happen or not), but it would be foolish for them to condemn our lack of such a system now since we don't have the ability to support it, just as it would be foolish for us to condemn people in earlier and less well-off times for not having welfare systems as encompassing as ours.

I'd also note that issues like abolition and universal suffrage are qualitatively distinct from the issue of a minimum guaranteed income (what the quote addresses). Even the poorest of societies can avoid holding slaves or placing women or men in legally inferior roles. The poorest societies cannot afford the "full unemployment" discussed in the quote, and neither can even the richest of modern societies right now (they could certainly come closer than they currently do, but I don't think any modern economy could survive the implementation of such a system in the present).

I do agree, however, about it being a solid goal, at least for basic amenities.

Replies from: Viliam_Bur, CCC, CronoDAS, TheOtherDave
comment by Viliam_Bur · 2012-09-07T07:05:18.714Z · LW(p) · GW(p)

Even the poorest of societies can avoid holding slaves or placing women or men in legally inferior roles.

To avoid having slaves, the poorest society could decide to kill all war captives, and to let everyone unable to pay their debts starve to death. Yes, this would avoid legal discrimination. Is it therefore a morally preferable solution?

Replies from: mrglwrf
comment by mrglwrf · 2012-09-10T21:41:06.356Z · LW(p) · GW(p)

Since when has the institution of slavery been a charitable one? Historically, slave-owners have paid immense costs, directly and indirectly, for the privilege of owning slaves, and done so knowingly and willingly. It is human nature to derive pleasure from holding power over others.

Replies from: Nornagest, Larks
comment by Nornagest · 2012-09-10T22:42:40.858Z · LW(p) · GW(p)

I'm not sure about those direct costs. According to my references, a male slave in 10th-century Scandinavia cost about as much as a horse, a female slave about two-thirds as much; that's a pretty good chunk of change but it doesn't seem obviously out of line with the value of labor after externalities. I don't have figures offhand for any other slaveholding cultures, but the impression I get is that the pure exercise of power was not the main determinant of value in most, if not all, of them.

comment by Larks · 2012-09-10T22:59:03.742Z · LW(p) · GW(p)

I seem to recall someone arguing that, in combat between iron age tribes, it was basically a choice between massacre and slavery - if you did neither, they would wreak revenge upon your tribe further down the line.

This wouldn't be charity, as I guess the winners did benefit from having a source of labour that didn't need to be compensated at the market rate, but it would be a case where slavery was beneficial to the victims.

(I think Carlyle was wrong about other supposed cases of slavery proving beneficial for the victims)

Replies from: FiftyTwo
comment by FiftyTwo · 2012-09-11T00:17:44.875Z · LW(p) · GW(p)

One could argue that happened in Ancient Rome, with prisoners of war as the main source of slaves. Also, they/their descendants arguably benefited in the long term from being part of the larger, more sophisticated culture, if they survived that long.

comment by CCC · 2012-09-07T07:38:25.250Z · LW(p) · GW(p)

In poor societies that permit slavery, a man might be willing to sell himself into slavery. He gets food and lodging, possibly for his family as well as himself; his new purchaser gets a whole lot of labour. There's a certain loss of status, but a person might well be willing to live with that in order to avoid starvation.

comment by CronoDAS · 2012-09-06T20:17:49.518Z · LW(p) · GW(p)

I'd also note that issues like abolition and universal suffrage are qualitatively distinct from the issue of a minimum guaranteed income (what the quote addresses). Even the poorest of societies can avoid holding slaves or placing women or men in legally inferior roles.

Elections can take quite a bit of resources to run when you have a large voting population...

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-06T22:58:35.777Z · LW(p) · GW(p)

No, politicians can afford to spend lots of money on them. The actual mechanism of elections has never, so far as I know, been all that expensive pre-computation.

Replies from: Randaly, army1987
comment by Randaly · 2012-09-07T10:13:47.237Z · LW(p) · GW(p)

IAWYC, but the claims that most of the economic costs of elections are in political spending, and that most of the costs of actually running elections are in voting machines, are both probably wrong. (Public data is terrible, so I'm crudely extrapolating all of this from local to national levels.)

The opportunity costs of voting alone dwarf spending on election campaigns. Assuming that all states have the same share of GDP, that people who don't get a full-state holiday to vote take an hour off to vote, that people work 260 days a year and 8 hours a day, and that nobody in the holiday states does work, we get:

Political spending: 5.3 billion USD.

Opportunity costs of elections: 15 trillion USD (US GDP) × (9/50 (states with voting holidays) × 1/260 (fraction of work-time lost) + 41/50 (states without holidays) × 1/260 × 1/8 (fraction of work-time lost)) ≈ 16 billion USD.

Extrapolating from New York City figures, election machines cost ~1.9 billion nationwide. (50 million for a population ~38 times smaller than the total US population.) and extrapolating Oakland County's 650,000 USD cost across the US's 3143 counties, direct costs are just over 2 billion USD. (This is for a single election; however, some states have had as many as 5 elections in a single year. The cost of the voting machines can be amortized over multiple elections in multiple years.)

(If you add together the opportunity costs for holding one general and one non-general election a year (no state holidays; around ~7 billion USD), plus the costs of actually running them, plus half the cost of the campaign money, the total cost/election seems to be around 30 billion USD, or ~0.2% of the US's GDP.)
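
A quick back-of-the-envelope check of the opportunity-cost estimate above, in Python; all figures (15 trillion USD GDP, 260 workdays of 8 hours, 9 of 50 states with a voting holiday, one hour off elsewhere) are the comment's own assumptions.

```python
us_gdp = 15e12  # assumed US GDP, as in the comment

# States with a voting holiday lose a full workday (1/260 of work-time);
# everyone else loses one hour out of 260 * 8 working hours.
fraction_lost = (9 / 50) * (1 / 260) + (41 / 50) * (1 / 260) * (1 / 8)

opportunity_cost = us_gdp * fraction_lost
print(f"{opportunity_cost / 1e9:.1f} billion USD")  # ~16.3 billion USD
```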

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-07T10:26:33.991Z · LW(p) · GW(p)

Correction accepted. Still seems like something a poor society could afford, though, since labor and opportunity would also cost less. I understand that lots of poor societies do.

comment by A1987dM (army1987) · 2012-09-07T08:12:44.248Z · LW(p) · GW(p)

The actual mechanism of elections has never, so far as I know, been all that expensive pre-computation.

What? If anything I'd assume them to be more expensive before computers were introduced. In Italy, where they are still paper-based, they have to hire people to count the ballots (and they have to pay them a lot, given that they select people at random and you're not allowed to refuse unless you are ill or something).

Replies from: Tripitaka, ArisKatsaris, DanArmak, Peterdjones
comment by Tripitaka · 2012-09-07T08:53:52.058Z · LW(p) · GW(p)

According to Wikipedia, the 2005 elections in Germany cost 63 million euros, for a population of 81 million people: about €0.78 per person, or roughly 0.003% of GDP. That does not seem much, in the grander scheme of things. And since the German constitutional court prohibited the use of most types of voting machines, that figure does include the cost of the helpers; 13 million euros, again, not a prohibitive expenditure.

comment by ArisKatsaris · 2012-09-07T08:38:56.318Z · LW(p) · GW(p)

http://aceproject.org/ace-en/focus/core/crb/crb03

"Low electoral costs, approximately $1 to $3 per elector, tend to manifest in countries with longer electoral experience"

In Italy, where they are still paper-based, they have to hire people to count the ballots (and they have to pay them a lot, given that they select people at random and you're not allowed to refuse unless you are ill or something)

That's a somewhat confusing comment. If they're effectively conscripted (not being allowed to refuse), they're not really "hired" -- and that would imply they don't need to be paid a lot...

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-07T22:13:35.624Z · LW(p) · GW(p)

approximately $1 to $3 per elector

Is that that little? I think many fewer people would vote if they had to pay $3 out of their own pocket in order to do so.

If they're effectively conscripted (not being allowed to refuse), they're not really "hired" -- and that would imply they don't need to be paid a lot...

A law compelling people to do stuff would be very unpopular, unless they get adequate compensation. Not paying them much would just mean they would feign illness or something. (If they didn't select people by lot, the people doing that would be the ones applying for that job, who would presumably like it more than the rest of the population and hence be willing to do that for less.)

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-09-07T22:26:12.711Z · LW(p) · GW(p)

I think many fewer people would vote if they had to pay $3 out of their own pocket in order to do so.

Well, perhaps fewer people would vote if they had to pay a single cent out of their own pocket -- would that mean that $0.01 isn't little either?

A law compelling people to do stuff would be very unpopular, unless they get adequate compensation.

How much are these Italian ballot-counters being paid? Can we quantify this?

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-07T22:31:05.266Z · LW(p) · GW(p)

IIRC, something like €150 per election. I'll look for the actual figure.

comment by DanArmak · 2012-09-07T12:26:41.998Z · LW(p) · GW(p)

they have to pay them a lot, given that they select people at random and you're not allowed to refuse unless you are ill or something

Why so? Usually when people can't refuse to do a job, they're paid little, not a lot.

Replies from: RomanDavis
comment by RomanDavis · 2012-09-07T12:46:06.238Z · LW(p) · GW(p)

Like jury duty. Yeah. Why would it be different in Italy?

comment by Peterdjones · 2012-09-07T13:07:23.635Z · LW(p) · GW(p)

In the UK, the counters are volunteers.

comment by TheOtherDave · 2012-09-06T21:05:33.136Z · LW(p) · GW(p)

Systems that don't require people to work are only beneficial if non-human work (or human work not motivated by need) is still producing enough goods that the humans are better off not working

Well, yes. Almost tautologically so, I should think.
The tricky part is working out when humans are better off.

comment by khafra · 2012-09-07T19:46:35.412Z · LW(p) · GW(p)

If you are a Bayesian, you should think about how much evidence your imagination constitutes.

For example, imagine a civilization where an average person gains little or no total productivity by working over 8 hours per day. Imagine, moreover, that in this civilization, working 10 hours a day doubles your risk of coronary heart disease, the leading cause of death in this civilization. Finally, imagine that, in this civilization, a common way for workers to signal their dedication to their jobs is by staying at work long hours, regardless of the harm it does both to their company and themselves.

In this civilization, a law preventing individuals from working over 8 hours per day is a tremendous social good.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-09-08T09:57:24.553Z · LW(p) · GW(p)

Work hour skepticism leaves out the question of the cost of mistakes. It's one thing to have a higher proportion of defective widgets on an assembly line (though even that can matter, especially if you want a reputation for high quality products), another if the serious injury rate goes up, and a third if you end up with the Exxon Valdez.

comment by A1987dM (army1987) · 2012-09-07T01:08:46.063Z · LW(p) · GW(p)

the higher are taxes, and the smaller incentives to work

You mean “incentives to fully report your income”, right? ;-) (There are countries where a sizeable fraction of the economy is underground. I come from one.)

how much effort would an average student give to studying, instead of e.g. internet browsing, Playstation gaming, or disco and sex?

The same they give today. Students not interested in studying mostly just cheat.

comment by CronoDAS · 2012-09-06T20:01:57.649Z · LW(p) · GW(p)

Well, if your society isn't rich enough, you just do what you can. (And a lot of work really isn't all that important; would it be that big of a disaster if your local store carried fewer kinds of cosmetics, or if your local restaurant had trouble hiring waiters?)

See also.

comment by jbeshir · 2012-09-06T20:49:29.907Z · LW(p) · GW(p)

It is true that in the long run, things could work out worse with a guarantee of sufficient food/supplies for everyone. I think, though, that this post answers the wrong question; the question to answer in order to compare consequences is how probable it is to be better or worse, and by what amounts. Showing that it "could" be worse merely answers the question "can I justify holding this belief" rather than the question "what belief should I hold". The potential benefits of a world where people are guaranteed food seem quite high on the face of it, so it is a question well worth asking seriously... or would be if one were in a position to actually do anything about it, anyway.

Prisoners' dilemmas amongst humans with reputation and social pressure effects do not reliably work out with consistent defection, and models of societies (and students) can easily predict almost any result by varying the factors they model and how they do so, and so contribute very little evidence in the absence of other evidence that they generate accurate predictions.

The only reliable information that I am aware of is that we know that states making such guarantees can exist for multiple generations with no obvious signs of failure, at least with the right starting conditions, because we have such states existing in the world today. The welfare systems of some European countries have worked this way for quite a long time, and while some are doing poorly economically, others are doing comparably well.

I think that it is worth assessing the consequences of deciding to live by the idea of universal availability of supplies, but they are not so straightforwardly likely to be dire as this post suggests, requiring a longer analysis.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-09-07T08:06:28.425Z · LW(p) · GW(p)

As I wrote, it depends on many things. I can imagine a situation where this would work; I can also imagine a situation where it would not. As I also wrote, I can imagine such system functioning well if people who don't work get enough money to survive, but people who do work get significantly more.

Data point: In Slovakia many uneducated people don't work, because it wouldn't make economic sense for them. Their wage, minus traveling expenses, would be only a little more than, and in some cases even less than, their welfare. What's the point of spending 8 hours at work if as a result you have less money? They cannot get higher wages, because they are uneducated and unskilled; and in Slovakia even educated people get relatively little money. The welfare cannot be lowered, because the voters on the left would not allow it. The social pressure stops working if too many people in the same town are doing this; they provide moral support for each other. We have villages where unemployment is over 80% and people have already adapted to this; after a decade of such a life, even if you offer them work with a decent wage, they will not take it, because it would mean walking away from their social circle.

This would not happen in a sane society, but it does happen in real life. Other European countries seem to fare better in this respect, but I can imagine the same thing happening there in a generation or two. A generation ago most people would probably not have predicted this situation in Slovakia.

I also can't imagine social pressure to work having much effect on "generation Facebook". If someone spends most of their day on Facebook or playing online multiplayer games, who exactly is going to socially pressure them? Their friends? Most of them live the same way. Their parents? The conflict between generations is not the same thing as peer pressure. And the "money without work is a basic human right" meme also does not help.

It could work in a country where the average wage (even for less educated and less skilled people) is much more than one needs to survive. But it can work long-term only if the amount of "basic human right money" does not grow faster than the average wage. -- OK, finally here is something that can be measured: what is the relative increase in wages vs. welfare payments in western European countries in recent decades; optionally, extrapolate these numbers to estimate how long the system can survive.

Replies from: jbeshir
comment by jbeshir · 2012-09-07T21:13:33.563Z · LW(p) · GW(p)

This is interesting, particularly the idea of comparing wage growth against welfare growth predicting success of "free money" welfare. I agree that it seems reasonably unlikely that a welfare system paying more than typical wages, without restrictions conflicting with the "detached from work" principle, would be sustainable, and identifying unsustainable trends in such systems seems like an interesting way to recognise where something is going to have to change, long-term.

I appreciate the clarification; it provides what I was missing in terms of evidence or reasoned probability estimates over narrative/untested model. I'm taking a hint from feedback that I likely still communicated this poorly, and will revise my approach in future.

Back on the topic of taking these ideas as principles, perhaps more practical near-term goals which provide a subset of the guarantee, like detaching the availability of resources for basic survival from the availability of work, might be more achievable. There are a wider range of options available for implementing these ideas, and of incentives/disincentives to avoid long-term use. An example which comes to mind is providing users with credit usable only to order basic supplies and basic food. My rough estimate is that it seems likely that something in this space could be designed to operate sustainably with only the technology we have now.

On the side, relating to generation Facebook, my model of the typical 16-22 year old today would predict that they'd like to be able to buy an iPad, go to movies, afford alcohol, drive a nice car, go on holidays, and eventually get most of the same goals previous generations sought, and that their friends will also want these things. At younger ages, I agree that parental pressure wouldn't be typically classified as "peer pressure", but I still think it likely to provide significant incentive to do school work; the parents can punish them by taking away their toys if they don't, as effectively as for earlier generations. My model is only based on my personal experience, so mostly this is an example of anecdotal data leading to different untested models.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-09-08T19:01:12.371Z · LW(p) · GW(p)

An example which comes to mind is providing users with credit usable only to order basic supplies and basic food.

I have heard this idea proposed, and many people object to it, saying that it would take away the dignity of those people. In other words, some people seem to think that "basic human rights" include not just things necessary for survival, but also some luxury and perhaps some status items (which then obviously stop being status items, if everyone has them).

parents can punish them by taking away their toys if they don't, as effectively as for earlier generations.

In theory, yes. However, as a former teacher I have seen parents completely fail at this.

Data point: A mother came to school and asked me to tell her 16-year-old daughter, my student, not to spend all her free time on the internet. I did not understand WTF she wanted. She explained that, since her daughter would probably regard me, a computer science teacher, as an authority about computers, if I asked her not to use the computer all day long, she would respect me. This was her last hope, because as a mother she could not convince her daughter to step away from the computer.

To me this seemed completely insane. First, the teachers in that school were never treated as authorities on anything; they were usually treated like shit both by students and by the school administration (a month later I left that school). Second, as a teacher I have zero influence on what my students do outside school, while she, as the mother, is there; she has many possible ways to stop her daughter from interneting... for instance, forcibly turning off the computer, or just hiding it somewhere while her daughter is at school. But she should have started doing something before her daughter turned 16. If she does not know that, she is clearly unqualified to have children; but there is no law against that.

OK, this was an extreme example, but during my 4-year teaching career I saw, or heard from colleagues about, many really fucked up parents; and those people were middle and upper social class. This leads me to very pessimistic views, not shared by people who don't have the same experience and are more free to rationalize this away. I think that if you need parents to do something non-trivial, you should expect at least 20% of the population to fail at it. Let's suppose that in each generation 20% of parents fail to pressure their children to work, in a world where work is not necessary for a decent living. What happens in 5 generations?

My model is only based on my personal experience

Personal experience can give different amounts of evidence. If the experience is "me, my family, and people I willingly associate with", that one is most biased. When you have some kind of job where you interact with people you didn't choose, that one is a bit better -- it is still biased by your personal evaluation, geography, perhaps social class... but at least you are more exposed to people you would otherwise avoid. (For example I usually avoid impolite people from dysfunctional families. As a teacher, they are just there and I have to deal with them. Then I notice that they exist and are actually pretty frequent. When I go outside of the school, they again disappear somehow.)

Of course, I would still prefer having a statistic; I am just not sure whether there are available statistics on how many parents completely fail at different tasks, such as not letting their children spend all their free time on a computer.

comment by Eugine_Nier · 2012-09-07T05:26:53.223Z · LW(p) · GW(p)

like abolition, universal suffrage and eight-hour workdays.

One of these things is not like the others.

Replies from: DanArmak, Jayson_Virissimo
comment by DanArmak · 2012-09-07T18:18:07.295Z · LW(p) · GW(p)

One of these things is not like the others.

Yes, no state has ever implemented truly universal suffrage (among minors).

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-07T19:15:45.206Z · LW(p) · GW(p)

Or non-humans.

Replies from: DanArmak
comment by DanArmak · 2012-09-07T21:18:25.606Z · LW(p) · GW(p)

That's more technically problematic; how could non-human animals vote in the existing kinds of elections? Human intermediaries would have to decide what was best for the non-humans they represented. Different human political factions would support different positions as being best for the non-humans, and fight over that.

(This of course doesn't apply to future possible non-human sentients like AI, uploads, uplifted animals, modified humans, etc.)

Replies from: Eugine_Nier, TheOtherDave
comment by Eugine_Nier · 2012-09-08T05:08:21.619Z · LW(p) · GW(p)

That's more technically problematic; how could non-human animals vote in the existing kinds of elections?

Lead them into the voting booth, see which lever they press.

comment by TheOtherDave · 2012-09-07T21:38:02.791Z · LW(p) · GW(p)

The same is true of some minors. (Though, of course, not all.)

comment by Jayson_Virissimo · 2012-09-07T06:38:08.052Z · LW(p) · GW(p)

One of these things is not like the others.

In Jasay's terminology, the first is a liberty (a relation between a person and an act) and the rest are rights {relations between two or more persons (at least one rightholder and one obligor) and an act}. I find this distinction useful for thinking more clearly about these kinds of topics. Your mileage may vary.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-09-07T08:05:35.360Z · LW(p) · GW(p)

I was actually referring to the third being what I might call an anti-liberty, i.e., you aren't allowed to work more than eight hours a day, and to the fact that it is most definitely not enforced nor widely considered a human right.

Replies from: DanArmak, ArisKatsaris, army1987
comment by DanArmak · 2012-09-07T18:25:02.682Z · LW(p) · GW(p)

How is that different from pointing out that you're not allowed to sell yourself into slavery (not even partially, as in signing a contract to work for ten years and not being able to legally break it), or that you're not allowed to sell your vote?

comment by ArisKatsaris · 2012-09-07T09:04:20.860Z · LW(p) · GW(p)

I'd say each of the three can be said to be unlike the others:

  • abolition falls under Liberty
  • universal suffrage falls under Equality
  • eight-hour workdays falls under Solidarity
Replies from: arundelo
comment by A1987dM (army1987) · 2012-09-07T08:19:16.532Z · LW(p) · GW(p)

I thought eight-hour workdays were about employers not being allowed to demand that employees work more than eight hours a day; I didn't know you weren't technically allowed to do that at all, even if you're OK with it.

Replies from: fortyeridania, Vaniver, Slackson
comment by fortyeridania · 2012-09-07T13:28:58.716Z · LW(p) · GW(p)
  1. You are allowed to work more than eight hours per day. It's just that in many industries, employers must pay you overtime if you do so.
  2. Even if employers were prohibited from using "willingness to work more than 8 hours per day" as a condition for employment, long workdays would probably soon become the norm.
  3. Thus a more feasible way to limit workdays is to constrain employees rather than employers.

To see why, assume that without any restrictions on workday length, workers supply more than 8 hours. Let's say, without loss of generality, that they supply 10. (In other words, the equilibrium quantity supplied is ten.)

If employers can't demand the equilibrium quantity, but they're still willing to pay to get it, then employees will have the incentive to supply it. In their competition for jobs (finding them and keeping them), employees will supply labor up to the equilibrium quantity, regardless of whether the bosses demand it.

Working more looks good. Everyone knows that; you don't need your boss to tell you. So if there's competition for your spot or for a spot that you want, it would serve you well to work more.

So if your goal is to prevent ten-hour days, you'd better stop people from supplying them.

At least, this makes sense to me. But I'm no microeconomist. Perhaps we have one on LW who can state this more clearly (or who can correct any mistakes I've made).

comment by Vaniver · 2012-09-07T20:13:08.197Z · LW(p) · GW(p)

See Lochner v. New York. Within the last five years there was a French strike (riot? don't remember exactly) over a law that would limit the workweek of bakers, which would have the impact of driving small bakeries out of business, since they would need to employ (and pay benefits on) 2 bakers rather than just 1. Perhaps a French LWer remembers more details?

comment by Slackson · 2012-09-07T08:51:08.263Z · LW(p) · GW(p)

It would be very hard to distinguish between when people were doing it because they wanted to and when employers were demanding it. Suppose some employees work that extra time, but one doesn't. The one that doesn't happens to be fired later on, for unrelated reasons. How do you determine that the worker's unwillingness to work extra hours was not one of the reasons they were fired? Whether it was or not, such an event will likely encourage workers to go beyond the eight hours, because the last one who didn't got fired, and a relationship will be drawn whether there is one or not.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-07T22:19:41.906Z · LW(p) · GW(p)

It's not like you can fire employees on a whim: the “unrelated reasons” have to be substantial ones, and it's not clear you can find ones for any employee you want to fire. (Otherwise, you could use such a mechanism to de facto compel your employees to do pretty much anything you want.)

Also, even if you somehow did manage to de facto demand workers to work ten hours a day, if you have to pay hours beyond the eighth as overtime (with a hourly wage substantially higher than the regular one), then it's cheaper for you to hire ten people eight hours a day each than eight people ten hours a day.
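
A quick arithmetic sketch of that claim, assuming a base wage of 1 unit per hour and the common time-and-a-half overtime multiplier (both numbers are illustrative, not from the comment):

```python
BASE = 1.0       # assumed regular hourly wage
OVERTIME = 1.5   # assumed overtime multiplier (time and a half)

# Ten people working eight hours each: 80 person-hours, no overtime.
cost_ten_by_eight = 10 * 8 * BASE

# Eight people working ten hours each: the same 80 person-hours,
# but two of each person's hours are paid at the overtime rate.
cost_eight_by_ten = 8 * (8 * BASE + 2 * BASE * OVERTIME)

print(cost_ten_by_eight)  # 80.0
print(cost_eight_by_ten)  # 88.0 -- dearer for the same total labor
```

This ignores per-employee fixed costs (benefits, training), which push in the opposite direction, as the French bakery example elsewhere in the thread suggests.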

Replies from: TimS, Eugine_Nier
comment by TimS · 2012-09-07T22:42:51.812Z · LW(p) · GW(p)

Under American law, you basically can fire an employee "on a whim" as long as it isn't a prohibited reason.

comment by Eugine_Nier · 2012-09-08T05:03:14.651Z · LW(p) · GW(p)

(Otherwise, you could use such a mechanism to de facto compel your employees to do pretty much anything you want.)

Only if they can't get another job.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-08T08:54:19.295Z · LW(p) · GW(p)

That assumption isn't that far-fetched. Also, the same applies to doing that to compel them to work extra time (or am I missing something?).

comment by thomblake · 2012-09-28T14:25:39.459Z · LW(p) · GW(p)

If we can anticipate what the morality of the future would be, should we try to live by it now?

If we can afford it.

Moral progress proceeds from economic progress.

Replies from: TimS, TheOtherDave, lloyd
comment by TimS · 2012-09-28T15:15:17.156Z · LW(p) · GW(p)

Morality is contextual.

If we have four people on a life boat and food for three, morality must provide a mechanism for deciding who gets the food. Suppose that decision is made, then Omega magically provides sufficient food for all - morality hasn't changed, only the decision that morality calls for.


Technological advancement has certainly caused moral change (consider society after introduction of the Pill). But having more resources does not, in itself, change what we think is right, only what we can actually achieve.

Replies from: None
comment by [deleted] · 2012-09-28T15:17:01.094Z · LW(p) · GW(p)

If we have four people on a life boat and food for three, morality must provide a mechanism for deciding who gets the food.

That's an interesting claim. Are you saying that true moral dilemmas (i.e. a situation where there is no right answer) are impossible? If so, how would you argue for that?

Replies from: MixedNuts, army1987, TimS
comment by MixedNuts · 2012-09-28T15:31:43.460Z · LW(p) · GW(p)

I think they are impossible. Morality can say "no option is right" all it wants, but we still must pick an option, unless the universe segfaults and time freezes upon encountering a dilemma. Whichever decision procedure we use to make that choice (flip a coin?) can count as part of morality.

Replies from: None
comment by [deleted] · 2012-09-28T15:39:51.234Z · LW(p) · GW(p)

I take it for granted that faced with a dilemma we must do something, so long as doing nothing counts as doing something. But the question is whether or not there is always a morally right answer. In cases where there isn't, I suppose we can just pick randomly, but that doesn't mean we've therefore made the right moral decision.

Are we ever damned if we do, and damned if we don't?

Replies from: Strange7, CronoDAS
comment by Strange7 · 2012-09-30T05:24:49.918Z · LW(p) · GW(p)

When someone is in a situation like that, they lower their standard for "morally right" and try again. Functional societies avoid putting people in those situations because it's hard to raise that standard back to its previous level.

comment by CronoDAS · 2012-09-30T01:53:15.445Z · LW(p) · GW(p)

Well, if all available options are indeed morally wrong, we can still try to see if any are less wrong than others.

Replies from: None
comment by [deleted] · 2012-09-30T14:57:09.266Z · LW(p) · GW(p)

Right, but choosing the lesser of two evils is simple enough. That's not the kind of dilemma I'm talking about. I'm asking whether or not there are wholly undecidable moral problems. Choosing between one evil and a lesser evil is no more difficult than choosing between an evil and a good.

But if you're saying that in any hypothetical choice, we could always find something significant and decisive, then this is good evidence for the impossibility of moral dilemmas.

Replies from: CronoDAS
comment by CronoDAS · 2012-10-01T06:08:44.817Z · LW(p) · GW(p)

It's hard to say, really.

Suppose we define a "moral dilemma for system X" as a situation in which, under system X, all possible actions are forbidden.

Consider the systems that say "Actions that maximize this (unbounded) utility function are permissible, all others are forbidden." Then the situation "Name a positive integer, and you get that much utility" is a moral dilemma for those systems; there is no utility-maximizing action, so all actions are forbidden and the system cracks.

It doesn't help much if we require the utility function to be bounded; the system is still vulnerable to situations like "Name a real number less than 30, and you get that much utility", because there isn't a largest real number less than 30.

The only way to get around this kind of attack by restricting the utility function is to require the range of the function to be a finite set. For example, if you're a C++ program, your utility might be represented by a 32-bit unsigned integer, so when asked "How much utility do you want?" you just answer "2^32 - 1", and when asked "How much utility less than 30.5 do you want?" you just answer "30".

(Ugh, that paragraph was a mess...)
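A minimal C++ sketch of the finite-range escape (assuming the 32-bit representation above; best_utility_below is a hypothetical helper, not an established API): with utility stored in a uint32_t, "name a utility below a bound" always has a largest answer, so the maximizer never faces a situation where every action is forbidden.

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <iostream>
    #include <optional>

    // Largest representable utility strictly below `bound`, if any exists.
    // Because the range of uint32_t is finite, a maximum always exists,
    // so the "name a real number less than 30" attack gets no purchase.
    std::optional<std::uint32_t> best_utility_below(double bound) {
        if (bound <= 0.0) return std::nullopt;        // nothing representable below
        double cap = std::nextafter(bound, 0.0);      // largest double < bound
        double limit = std::min(cap, static_cast<double>(UINT32_MAX));
        return static_cast<std::uint32_t>(std::floor(limit));
    }

    int main() {
        std::cout << *best_utility_below(30.5) << "\n";  // 30
        std::cout << *best_utility_below(1e12) << "\n";  // 4294967295 (2^32 - 1)
    }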

Replies from: None
comment by [deleted] · 2012-10-01T15:11:47.959Z · LW(p) · GW(p)

That is an awesome example. I'm absolutely serious about stealing that from you (with your permission).

Do you think this presents a serious problem for utilitarian ethics? It seems like it should, though I guess this situation doesn't come up all that often.

ETA: Here's a thought on a reply. Given restrictions like time and knowledge of the names of large numbers, isn't there in fact a largest number you can name? Something like Graham's number won't work (way too small) because you can always add one to it. But transfinite numbers aren't made larger by adding one. And likewise with the largest real number under thirty, maybe you can use a function to specify the number? Or if not, just say '29.999...' and keep saying nine as many times as you can before the time runs out (or until you calculate that the utility benefit reaches equilibrium with the cost of saying 'nine' over and over for a long time).

Replies from: army1987, CronoDAS
comment by A1987dM (army1987) · 2012-10-01T15:39:11.491Z · LW(p) · GW(p)

But trans-finite numbers aren't made larger by adding one.

Transfinite cardinals aren't, but transfinite ordinals are. And anyway transfinite cardinals can be made larger by exponentiating them.

Replies from: None
comment by [deleted] · 2012-10-01T16:16:26.862Z · LW(p) · GW(p)

Good point. What do you think of Chrono's dilemma?

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-01T16:37:21.883Z · LW(p) · GW(p)

"Twenty-nine point nine nine nine nine ..." until the effort of saying "nine" again becomes less than the corresponding utility difference. ;-)

comment by CronoDAS · 2012-10-02T09:00:04.861Z · LW(p) · GW(p)

That is an awesome example. I'm absolutely serious about stealing that from you (with your permission).

Sure, be my guest.

Do you think this presents a serious problem for utilitarian ethics? It seems like it should, though I guess this situation doesn't come up all that often.

Honestly, I don't know. Infinities are already a problem, anyway.

comment by A1987dM (army1987) · 2012-09-29T08:02:18.670Z · LW(p) · GW(p)

My view is that a more meaningful question than ‘is this choice good or bad’ is ‘is this choice better or worse than other choices I could make’.

Replies from: None
comment by [deleted] · 2012-09-29T12:56:10.588Z · LW(p) · GW(p)

Would you say that there are true practical dilemmas? Is there ever a situation where, knowing everything you could know about a decision, there isn't a better choice?

Replies from: army1987, pengvado
comment by A1987dM (army1987) · 2012-09-30T00:31:03.386Z · LW(p) · GW(p)

If I know there isn't a better choice, I just follow my decision. Duh. (Having to choose between losing $500 and losing $490 is equivalent to losing $500 and then having to choose between gaining nothing and gaining $10: yes, the loss will sadden me, but that had better have no effect on my decision, and if it does it's because of emotional hang-ups I'd rather not have. And replacing dollars with utilons wouldn't change much.)

Replies from: None
comment by [deleted] · 2012-09-30T14:52:31.365Z · LW(p) · GW(p)

So you're saying that there are no true moral dilemmas (no undecidable moral problems)?

Replies from: army1987, Legolan, faul_sname, Legolan
comment by A1987dM (army1987) · 2012-09-30T22:04:27.294Z · LW(p) · GW(p)

Depends on what you mean by “undecidable”. There may be situations in which it's hard in practice to decide whether it's better to do A or to do B, sure, but in principle either A is better, B is better, or the choice doesn't matter.

Replies from: None
comment by [deleted] · 2012-09-30T22:44:12.711Z · LW(p) · GW(p)

Depends on what you mean by “undecidable”.

So, for example, suppose a situation where a (true) moral system demands both A and B, yet in this situation A and B are incompossible. Or it forbids both A and B, yet in this situation doing neither is impossible. Those examples have a pretty deontological air to them... could we come up with examples of such dilemmas within consequentialism?

Replies from: TheOtherDave, army1987
comment by TheOtherDave · 2012-10-01T00:53:16.286Z · LW(p) · GW(p)

could we come up with examples of such dilemmas within consequentialism?

Well, the consequentialist version of a situation that demands A and B is one in which A and B provide equally positive expected consequences and no other option provides consequences that are as good. If A and B are incompossible, I suppose we can call this a moral dilemma if we like.

And, sure, consequentialism provides no tools for choosing between A and B, it merely endorses (A OR B). Which makes it undecidable using just consequentialism.

There are a number of mechanisms for resolving the dilemma that are compatible with a consequentialist perspective, though (e.g., picking one at random).

Replies from: None
comment by [deleted] · 2012-10-01T01:55:17.160Z · LW(p) · GW(p)

Thanks, that was helpful. I'd been having a hard time coming up with a consequentialist example.

comment by A1987dM (army1987) · 2012-10-01T00:02:26.391Z · LW(p) · GW(p)

So, for example, suppose a situation where a (true) moral system demands both A and B, yet in this situation A and B are incompossible. Or it forbids both A and B, yet in this situation doing neither is impossible.

Then, either the demand/forbiddance is not absolute or the moral system is broken.

comment by Legolan · 2012-09-30T15:24:00.184Z · LW(p) · GW(p)

How are you defining morality? If we use a shorthand definition that morality is a system that guides proper human action, then any "true moral dilemmas" would be a critique of whatever moral system failed to provide an answer, not proof that "true moral dilemmas" existed.

We have to make some choice. If a moral system stops giving us any useful guidance when faced with sufficiently difficult problems, that simply indicates a problem with the moral system.

ETA: For example, if I have a completely strict sense of ethics based upon deontology, I may feel an absolute prohibition on lying and an absolute prohibition on allowing humans to die. That would create a moral dilemma for that system in the classical case of Nazis seeking Jews that I'm hiding in my house. So I'd have to switch to a different ethical system. If I switched to a system of deontology with a value hierarchy, I could conclude that human life has a higher value than telling the truth to governmental authorities under the circumstances and then decide to lie, solving the dilemma.

I strongly suspect that all true moral dilemmas are artifacts of the limitations of distinct moral systems, not morality per se. Since I am skeptical of moral realism, that is all the more the case; if morality can't tell us how to act, it's literally useless. We have to have some process for deciding on our actions.

Replies from: None
comment by [deleted] · 2012-09-30T15:52:56.011Z · LW(p) · GW(p)

How are you defining morality?

I'm not: I anticipate that your answer to my question will vary on the basis of what you understand morality to be.

If we use a shorthand definition that morality is a system that guides proper human action, then any "true moral dilemmas" would be a critique of whatever moral system failed to provide an answer, not proof that "true moral dilemmas" existed.

Would it? It doesn't follow from that definition that dilemmas are impossible. This:

I strongly suspect that all true moral dilemmas are artifacts of the limitations of distinct moral systems, not morality per se.

Is the claim I'm asking for an argument for.

Replies from: Kindly
comment by Kindly · 2012-09-30T16:43:40.753Z · LW(p) · GW(p)

I'm really confused about the point of this discussion.

The simple answer is: either a moral system cares whether you do action A or action B, preferring one to the other, or it doesn't. If it does, then the answer to the dilemma is that you should do the action your moral system prefers. If it doesn't, then you can do either one.

Obviously this simple answer isn't good enough for you, but why not?

Replies from: mfb, None
comment by mfb · 2012-09-30T17:37:25.997Z · LW(p) · GW(p)

The tricky task is to distinguish between those 3 cases - and to find general rules which can do this in every situation in a unique way, and represent your concept of morality at the same time.

If you can do this, publish it.

Replies from: Kindly
comment by Kindly · 2012-09-30T18:10:25.290Z · LW(p) · GW(p)

Well, yes, finding a simple description of morality is hard. But you seem to be asking if there's a possibility that it's in principle impossible to distinguish between these 3 cases for some situation -- and this is what you call a "true moral dilemma" -- and I don't see how the idea of that is coherent.

Replies from: mfb
comment by mfb · 2012-10-03T22:40:40.494Z · LW(p) · GW(p)

I did not call anything "true moral dilemma".

Most dilemmas are situations where similar-looking moral guidelines lead to different decisions, or situations where common moral rules are inconsistent or not well-defined. In those cases, it is hard to decide whether the moral system prefers one action or the other, or does not care.

comment by [deleted] · 2012-09-30T22:40:01.941Z · LW(p) · GW(p)

It seems to me to omit a (maybe impossible?) possibility: for example that a moral system cares about whether you do A or B in the sense that it forbids both A and B, and yet ~(A v B) is impossible. My question was just whether or not cases like these were possible, and why or why not.

Replies from: Kindly
comment by Kindly · 2012-09-30T22:53:03.561Z · LW(p) · GW(p)

I admit that I hadn't thought of moral systems as forbidding options, only as ranking them, in which case that doesn't come up.

If your morality does have absolute rules like that, there isn't any reason why those rules wouldn't come in conflict. But even then, I wouldn't say "this is a true moral dilemma" so much as "the moral system is self-contradictory". Not that this is a great help to someone who does discover this about themselves.

Ideally, though, you'd only have one truly absolute rule, and a ranking between the rules, Laws of Robotics style.

Replies from: None
comment by [deleted] · 2012-09-30T23:07:21.393Z · LW(p) · GW(p)

But even then, I wouldn't say "this is a true moral dilemma" so much as "the moral system is self-contradictory".

So, Kant for example thought that such moral conflicts were impossible, and he would have agreed with you that no moral theory can be both true, and allow for moral conflicts. But it's not obvious to me that the inference from 'allows for moral conflict' to 'is a false moral theory' is valid. I don't have some axe to grind here, I was just curious if anyone had an argument defending that move (or attacking it for that matter).

Replies from: Kindly
comment by Kindly · 2012-09-30T23:50:16.349Z · LW(p) · GW(p)

I don't think that it means it's a false moral theory, just an incompletely defined one. In cases where it doesn't tell you what to do (or, equivalently, tells you that both options are wrong), it's useless, and a moral theory that did tell you what to do in those cases would be better.

comment by faul_sname · 2012-10-01T20:58:46.718Z · LW(p) · GW(p)

That one thing a couple years ago qualifies.

But unless you get into self-referencing moral problems, no. I can't think of one off the top of my head, but I suspect that you can find ones among decisions that affect your decision algorithm and decisions where your decision-making algorithm affects the possible outcomes. Probably like Newcomb's problem, only twistier.

(Warning: this may be basilisk territory.)

comment by Legolan · 2012-09-30T15:23:36.633Z · LW(p) · GW(p)

(Double-post, sorry)

comment by pengvado · 2012-09-29T19:46:35.635Z · LW(p) · GW(p)

There are plenty of situations where two choices are equally good or equally bad. This is called "indifference", not "dilemma".

Replies from: None
comment by [deleted] · 2012-09-29T22:01:06.656Z · LW(p) · GW(p)

There are plenty of situations where two choices are equally good or equally bad.

Those aren't the situations I'm talking about.

comment by TimS · 2012-09-28T15:33:38.326Z · LW(p) · GW(p)

I would make the more limited claim that the existence of irreconcilable moral conflicts is evidence for moral anti-realism.

In short, if you have a decision process (aka moral system) that can't resolve a particular problem that is strictly within its scope, you don't really have a moral system.

Which makes figuring out what we mean by moral change / moral progress incredibly difficult.

Replies from: None
comment by [deleted] · 2012-09-28T15:45:22.528Z · LW(p) · GW(p)

In short, if you have a decision process (aka moral system) that can't resolve a particular problem that is strictly within its scope, you don't really have a moral system.

This seems to me to be a rephrasing and clarification of your original claim, which I read as saying something like 'no true moral theory can allow moral conflicts'. But it's not yet an argument for this claim.

Replies from: TimS
comment by TimS · 2012-09-28T18:14:50.664Z · LW(p) · GW(p)

I'm suddenly concerned that we're arguing over a definition. It's very possible to construct a decision procedure that tells one how to decide some, but not all moral questions. It might be that this is the best a moral decision procedure can do. Is it clearer to avoid using the label "moral system" for such a decision procedure?

This is a distraction from my main point, which was that asserting our morality changes when our economic resources change is an atypical way of using the label "morality."

Replies from: None
comment by [deleted] · 2012-09-28T18:23:01.703Z · LW(p) · GW(p)

Is it clearer to avoid using the label "moral system" for such a decision procedure?

No, but if I understand what you've said, a true moral theory can allow for moral conflict, just because there are moral questions it cannot decide (the fact that you called them 'moral questions' leads me to think you think that these questions are moral ones even if a true moral theory can't decide them).

This is a distraction from my main point, which was that asserting our morality changes when our economic resources change is an atypical way of using the label "morality."

You're certainly right, this isn't relevant to your main point. I was just interested in what I took to be the claim that moral conflicts (i.e. moral problems that are undecidable in a true moral theory) are impossible:

If we have four people on a life boat and food for three, morality must provide a mechanism for deciding who gets the food.

This is a distraction from your main point in at least one other sense: this claim is orthogonal to the claim that morality is not relative to economic conditions.

Replies from: TimS
comment by TimS · 2012-09-28T18:30:55.467Z · LW(p) · GW(p)

If we have four people on a life boat and food for three, morality must provide a mechanism for deciding who gets the food.
This is a distraction from your main point in at least one other sense: this claim is orthogonal to the claim that morality is not relative to economic conditions.

Yes, you are correct that this was not an argument, simply my attempt to gesture at what I meant by the label "morality." The general issue is that human societies are not rigorous about the use of the label morality. I like my usage because I think it is neutral and specific in meta-ethical disputes like the one we are having. For example, moral realists must determine whether they think "incomplete" moral systems can exist.

But beyond that, I should bow out, because I'm an anti-realist and this debate is between schools of moral realists.

comment by TheOtherDave · 2012-09-28T15:25:51.911Z · LW(p) · GW(p)

Rephrasing the original question: if we can anticipate the guiding principles underlying the morality of the future, ought we apply those principles to our current circumstances to make decisions, supposing they are different?

Though you seem to be implicitly assuming that the guiding principles don't change, merely the decisions, and those changed decisions are due to the closest implementable approximation of our guiding principles varying over time based on economic change. (Did I understand that right?)

Replies from: thomblake
comment by thomblake · 2012-09-28T15:39:43.890Z · LW(p) · GW(p)

Pretty much. Though it feels totally different from the inside. Athens could not have thrived without slave labor, and so you find folks arguing that slavery is moral, not just necessary. Since you can't say "Action A is immoral but economically necessary, so we shall A" you instead say "Action A is moral, here are some great arguments to that effect!"

And when we have enough money, we can even invent new things to be upset about, like vegetable rights.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-28T15:57:22.970Z · LW(p) · GW(p)

(nods) Got it.

On your view, is there any attempt at internal coherence?

For example, given an X such that X is equally practical (economically) in an Athenian and post-Athenian economy, and where both Athenians and moderns would agree that X is more "consistent with" slavery than non-slavery, would you expect Athenians to endorse X and moderns to reject it, or would you expect other (non-economic) factors, perhaps random noise, to predominate? (Or some third option?)

Or is such an X incoherent in the first place?

Replies from: TimS
comment by TimS · 2012-09-28T18:17:40.138Z · LW(p) · GW(p)

Can you give a more concrete example? I don't understand your question.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-28T18:40:05.152Z · LW(p) · GW(p)

I can't think of a concrete example that doesn't introduce derailing specifics.
Let me try a different question that gets at something similar: do you think that all choices a society makes that it describes as "moral" are economic choices in the sense you describe here, or just that some of them are?

Edit: whoops! got TimS and thomblake confused. Um. Unfortunately, that changes nothing of consequence: I still can't think of a concrete example that doesn't derail. But my followup question is not actually directed to Tim. Or, rather, ought not have been.

Replies from: thomblake
comment by thomblake · 2012-10-01T16:26:21.332Z · LW(p) · GW(p)

Probably a good counterexample would be the right for certain groups to work any job they're qualified for, for example women or people with disabilities. Generally, those changes were profitable and would have been at any time society accepted it.

Replies from: TimS, TheOtherDave
comment by TimS · 2012-10-01T18:06:13.222Z · LW(p) · GW(p)

I don't understand the position you are arguing and I really want to. Either illusion of transparency or I'm an idiot. And TheOtherDave appears to understand you. :(

Replies from: thomblake
comment by thomblake · 2012-10-01T18:31:57.353Z · LW(p) · GW(p)

I'm not really arguing for a position - the grandparent was a counterexample to the general principle I had proposed upthread, since the change was both good and an immediate economic benefit, and it took a very long time to be adopted.

comment by TheOtherDave · 2012-10-01T17:55:47.790Z · LW(p) · GW(p)

(nods) Yup, that's one example I was considering, but discarded as too potentially noisy.

But, OK, now that we're here... if we can agree for the sake of comity that giving women the civil right to work any job would have been economically practical for Athenians, and that they nevertheless didn't do so, presumably due to some other non-economic factors... I guess my question is, would you find it inconsistent, in that case, to find Athenians arguing that doing so would be immoral?

Replies from: thomblake
comment by thomblake · 2012-10-01T18:33:01.379Z · LW(p) · GW(p)

I don't think so. I'm pretty sure lots of things can stand in the way of moral progress.

comment by lloyd · 2012-09-30T02:16:03.012Z · LW(p) · GW(p)

Moral progress proceeds from economic progress.

What is progress with respect to either? Could you possibly mean that moral states - the moral conditions of a society - follow from the economic state - the condition and system of economy? I do find it hard to see a clear, unbiased definition of moral or economic progress.

Replies from: thomblake, chaosmosis
comment by thomblake · 2012-10-01T16:23:16.893Z · LW(p) · GW(p)

Moral progress is a trend or change for the better in the morality of members of a society. For example, when the United States went from widespread acceptance of slavery to widespread rejection of slavery, that was moral progress on most views of morality.

Economic progress is a trend or change that results in increased wealth for a society.

In general, widespread acceptance of a moral principle, like our views on slavery, animal rights, vegetable rights, and universal minimum income, only comes after we can afford it.

comment by chaosmosis · 2012-09-30T02:25:16.869Z · LW(p) · GW(p)

I think he's trying to say that having resources is a prerequisite to spending them on moral things like universal pay, so we need to pursue wealth if we want to pursue morality. Technically, economic progress is more of a prerequisite to moral progress than a sufficient cause though, as economic progress can also result in bad moral outcomes depending on what we do with our wealth.

Replies from: lloyd
comment by lloyd · 2012-09-30T03:21:48.739Z · LW(p) · GW(p)

What is moral progress? - Is having a society with a vast disparity between rich and poor where the poor support the rich through the resource of their labor considered morally progressed from a more egalitarian tribal state? Is the progress of the empire to a point of collapse and the start of some new empire considered moral progress?

What is economic progress? - Is having a society with a vast disparity between rich and poor, where the poor support the rich through the resource of their labor, considered economically progressed from the primitive hunter-gatherer society where everyone had more free time? Is the progress of the empire to a point where the disparity in wealth incites revolution or causes collapse considered economic progress?

Replies from: chaosmosis
comment by chaosmosis · 2012-09-30T04:44:03.314Z · LW(p) · GW(p)
  1. You're not making arguments.
  2. The points you raise are not responsive to the points that either he or I made.
  3. If it increases total aggregate utility. Tribes were small, there weren't very many people. I'm also not sure how happy most tribes were. Additionally, bad moral societies might be necessary to transition to awesome ones.
  4. You conflate moral and economic progress in your second paragraph.
  5. A financial system which collapses probably isn't too healthy. It still might have improved things overall through its pre-collapse operations though.

Universal pay does not even seem possible now.

Replies from: lloyd
comment by lloyd · 2012-09-30T16:02:17.409Z · LW(p) · GW(p)

You do not answer the questions, and you conflate them.

How is economic progress measured? If you say aggregate utility, please explain how that is measured.

How is moral progress measured?

My argument is simple - the measure of either of these is based on poor heuristics.

Replies from: chaosmosis
comment by chaosmosis · 2012-09-30T22:49:47.278Z · LW(p) · GW(p)

My first reaction is to want to say that economic progress is an increase in purchasing power. However, purchasing power is measured with reference to the utility of goods. That would be fine as a solution, except that those definitions would mean that it would be literally impossible for an increase in economic progress to be bad on utilitarian grounds. That's not what "economic progress" is generally taken to mean, so I won't use that definition.

Instead, I'll say that economic progress is an increase in the ability to produce goods, whether those goods are good or bad. This increase can be either numerical or qualitative, I don't care. Now, it might not be possible to quantify this precisely, but that's not necessary to determine that economic progress occurs. Clearly, we are now farther economically progressed than we were in the Dark Ages.

Moral progress would be measured depending on the moral theory you're utilizing. I would use a broad sort of egoism, personally, but most people here would use utilitarianism.

With an egoist framework, you could keep track of how happy or sad you were directly. You could also measure the prevalence of factors that tend to make you happy and then subtract the prevalence of factors that tend to make you sad (while weighting for relative amounts of happiness and sadness, of course), in order to get a more objective account of your own happiness.

With a utilitarian framework, you would measure the prevalence of things that tend to make all people happy, and then subtract the prevalence of things that tend to make all people sad. If there was an increase in the number of happy people, then that would mean moral progress in the eyes of a utilitarian.

You make no argument. You merely ask a question. If you have a general counterargument or want to refute the specifics of any of my points, feel free. So far, you haven't done anything like that. Also, although it might not be possible to quantify economic or moral progress precisely, we can probably do it well enough for most practical purposes. I don't understand the purpose of the points you're trying to raise here.

Replies from: lloyd
comment by lloyd · 2012-10-01T04:24:06.377Z · LW(p) · GW(p)

My original post refuted the statement:

Moral progress proceeds from economic progress.

You interjected:

I think he's trying to say .... we need to pursue wealth if we want to pursue morality. .... economic progress can also result in bad moral outcomes depending on what we do with our wealth.

You do not like the questions, the Socratic method? Ok, I asserted the basis of the argument and the point of the questions:

A clear, unbiased definition of moral or economic progress does not exist.

You present models for deciding both. There exist models where economic progress varies inversely with moral progress, such as possible outcomes from the utilitarian perspective that are covered in ethics 101 at most colleges, and the manifest reality of a system where economic progress has been used to justify an abundance of atrocities. There also exist models in either category which define progress in entirely different directions, and so any statement of progress is inherently biased.

There is a link between economic states/systems and moral conditions, and it appeared that the author of the statement "Moral progress proceeds from economic progress." may have been oversimplifying the issue to the point of making it unintelligible.

You mentioned wealth, which implies an inherent bias also. I can personally assert a different version of wealth which excludes much of what most people consider wealth. If most people think wealth includes assets like cash or gold, which I see as having an immoral nature, then their idea of accumulating wealth is immoral in my pov (I do not include a lengthy moral case, but rather assert that such a case exists). So if you see progress and wealth as interrelated, then I would ask for a definition of wealth.

You also assert that economic progress is an increased ability to produce goods. I assert that there are many modes of production, of which the current industrial mode finds value in quantity - the measure you state. Two biases arise:

1 - The bias inherent to the mode: quantity is not the only measure of progress. Competing values include quality in aesthetics, ergonomics, environmental impact, functionality, and modularity in use (consider open source values). I do not think having more stuff is a sign of economic progress, and I am not alone in finding that the measure you have asserted says nothing of "progress" - you of course argue differently, and thus we can say that one measure of progress or another may differ and all are thus inherently biased.

2 - What mode of production is more progressed? I do not think industrialization is progress. I see many flaws in the results. Too much damage from that mode, imho. I am not here to argue that position but rather to assert that it exists.

Is my point about the bias inherent in describing progress clear, or do you think that there exists some definition we all agree upon as to what progress in any area is?

Replies from: chaosmosis
comment by chaosmosis · 2012-10-01T04:36:55.961Z · LW(p) · GW(p)

You say that economic production and moral progress aren't the same. I have already said the same thing; I have already said that increased economic production might lead to morally wrong outcomes depending on how those products end up being used.

You can assert a different definition of wealth if you want, sure. I don't understand what argument this is supposed to be responsive to. There's a common understanding of wealth and just because different people define wealth differently, that wouldn't invalidate my point. Having resources is key to investing them, investing resources is key to doing moral things.

You say that quantity isn't the sole realm of value. I think that's true. But if you take the quantity of goods and multiply it by the quality of goods (that is, the utility of the goods, like I mentioned before), then that is a sufficient definition of total economic value.

The mode of production that is most progressed is the one which produces the most.

comment by DanielLC · 2012-09-11T02:56:10.548Z · LW(p) · GW(p)

If we had eight-hour workdays a century ago, we wouldn't have been able to support the standard of living expected a century ago. I'm not sure we could have even supported living. The same applies to full unemployment. We may someday reach a point where we are productive enough that we can accomplish all we need when we just do it for fun, but if we try that now, we'll all starve.

Replies from: CronoDAS, TheOtherDave
comment by CronoDAS · 2012-09-11T03:40:02.588Z · LW(p) · GW(p)

If we had eight-hour workdays a century ago, we wouldn't have been able to support the standard of living expected a century ago.

Is that true? (Technically, a century ago was 1912.)

Wikipedia on the eight-hour day:

On January 5, 1914, the Ford Motor Company took the radical step of doubling pay to $5 a day and cut shifts from nine hours to eight, moves that were not popular with rival companies, although seeing the increase in Ford's productivity, and a significant increase in profit margin (from $30 million to $60 million in two years), most soon followed suit.

Replies from: DanielLC
comment by DanielLC · 2012-09-11T04:07:51.819Z · LW(p) · GW(p)

The quote seemed to imply we didn't have them a century ago. Just use two centuries or however long.

My point is that we didn't stop working as long because we realized it was a good idea. We did because it became a good idea. What we consider normal now is something we could not have instituted a century ago, and attempting to institute now what will be normal a century from now would be a bad idea.

comment by TheOtherDave · 2012-09-11T03:54:04.639Z · LW(p) · GW(p)

So, accepting the premise that the ability to support "full unemployment" (aka, people working for reasons other than money) is something that increases over time, and it can't be supported until the point is reached where it can be supported... how would we recognize when that point has been reached?

comment by taelor · 2012-09-12T10:09:16.872Z · LW(p) · GW(p)

If we can anticipate what the morality of the future would be, should we try to live by it now?

The question is, can we? Does anyone happen to have any empirical data about how good, for example, Greco-Romans were at predicting the moral views of the Middle Ages?

Additionally, is merely sounding "like the kind of lunatic notion that’ll be considered a basic human right in about a century" really a strong enough justification for us to radically alter our political and economic systems? If I had to guess, I'd predict that Kreider already believes divorcing income from work to be a good idea, for reasons that may or may not be rational, and is merely appealing to futurism to justify his bottom line.

comment by Eugine_Nier · 2012-09-07T05:18:04.808Z · LW(p) · GW(p)

If we can anticipate what the morality of the future would be, should we try to live by it now?

Are you sure you can? It's remarkably easy to make retroactive "predictions", much harder to make actual predictions.

comment by Thomas · 2012-09-07T06:12:15.502Z · LW(p) · GW(p)

The way to divorce work from income is ownership. Be an owner!

Replies from: roystgnr
comment by roystgnr · 2012-09-07T16:38:28.350Z · LW(p) · GW(p)

One way to divorce work from income is to own stuff.

A more popular way is to find someone else who owns stuff, then take their stuff.

Replies from: wedrifid
comment by wedrifid · 2012-09-07T16:47:29.176Z · LW(p) · GW(p)

A more popular way is to find someone else who owns stuff, then take their stuff.

That counts as work.

Replies from: Oscar_Cunningham, DanArmak
comment by Oscar_Cunningham · 2012-09-07T18:43:24.886Z · LW(p) · GW(p)

Not if it's in the form of "Be poor in a country that taxes the rich to give to the poor".

comment by DanArmak · 2012-09-07T18:16:06.012Z · LW(p) · GW(p)

No, you just need to own enough stuff to pay workers to take even more stuff from others.

Replies from: wedrifid
comment by wedrifid · 2012-09-08T04:29:03.108Z · LW(p) · GW(p)

No, you just need to own enough stuff to pay workers to take even more stuff from others.

The way to divorce income from work is to pay others to do the work for you? Yes, that works.

comment by VKS · 2012-09-04T23:51:02.763Z · LW(p) · GW(p)

After I spoke at the 2005 "Mathematics and Narrative" conference in Mykonos, a suggestion was made that proofs by contradiction are the mathematician's version of irony. I'm not sure I agree with that: when we give a proof by contradiction, we make it very clear that we are discussing a counterfactual, so our words are intended to be taken at face value. But perhaps this is not necessary. Consider the following passage.

There are those who would believe that every polynomial equation with integer coefficients has a rational solution, a view that leads to some intriguing new ideas. For example, take the equation x² - 2 = 0. Let p/q be a rational solution. Then (p/q)² - 2 = 0, from which it follows that p² = 2q². The highest power of 2 that divides p² is obviously an even power, since if 2^k is the highest power of 2 that divides p, then 2^2k is the highest power of 2 that divides p². Similarly, the highest power of 2 that divides 2q² is an odd power, since it is greater by 1 than the highest power that divides q². Since p² and 2q² are equal, there must exist a positive integer that is both even and odd. Integers with this remarkable property are quite unlike the integers we are familiar with: as such, they are surely worthy of further study.

I find that it conveys the irrationality of √2 rather forcefully. But could mathematicians afford to use this literary device? How would a reader be able to tell the difference in intent between what I have just written and the following superficially similar passage?

There are those who would believe that every polynomial equation has a solution, a view that leads to some intriguing new ideas. For example, take the equation x² + 1 = 0. Let i be a solution of this equation. Then i² + 1 = 0, from which it follows that i² = -1. We know that i cannot be positive, since then i² would be positive. Similarly, i cannot be negative, since i² would again be positive (because the product of two negative numbers is always positive). And i cannot be 0, since 0² = 0. It follows that we have found a number that is not positive, not negative, and not zero. Numbers with this remarkable property are quite unlike the numbers we are familiar with: as such, they are surely worthy of further study.

  • Timothy Gowers, Vividness in Mathematics and Narrative, in Circles Disturbed: The Interplay of Mathematics and Narrative
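A compact restatement of the first passage (writing v_2(n) for the exponent of the largest power of 2 dividing n):

\[
p^2 = 2q^2 \;\Longrightarrow\; 2\,v_2(p) = v_2(p^2) = v_2(2q^2) = 1 + 2\,v_2(q),
\]

an even number equal to an odd one - the passage's "integer that is both even and odd" is exactly this impossible exponent.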
Replies from: DanArmak, CCC
comment by DanArmak · 2012-09-06T22:01:40.151Z · LW(p) · GW(p)

The two examples are not contradictory, but analogous to one another. The correct conclusion in both is the same, and both are equally serious or ironic.

  1. Suppose x² - 2 = 0 has a solution that is rational. That leads to a contradiction. So any solution must be irrational.

  2. Suppose x² + 1 = 0 has a solution that is a number. That leads to a contradiction. So any solution must not be a number. Now what is a "number" in this context? From the text, something that is either positive, negative, or zero; i.e. something with a total ordering. And indeed we know (ETA: this is wrong, see below) that such solutions, the complex numbers, have no total ordering.

I see no relevant difference between the two cases.

Replies from: The_Duck, CCC, IlyaShpitser, DanielLC, gwern
comment by The_Duck · 2012-09-06T22:25:29.003Z · LW(p) · GW(p)

You can work the language a little to make them analogous, but that's not the point Gowers is making. Consider this instead:

"There are those who would believe that all equations have solutions, a view that leads to some intriguing new ideas. Consider the equation x + 1 = x. Inspecting the equation, we see that its solution must be a number which is equal to its successor. Numbers with this remarkable property are quite unlike the numbers we are familiar with. As such, they are surely worthy of further study."

I imagine Gowers's point to be that sometimes a contradiction does point to a way in which you can revise your assumptions to gain access to "intriguing new ideas", but sometimes it just indicates that your assumptions are wrong.

Replies from: CronoDAS, DanArmak
comment by CronoDAS · 2012-09-06T22:49:11.083Z · LW(p) · GW(p)

"There are those who would believe that all equations have solutions, a view that leads to some intriguing new ideas. Consider the equation x + 1 = x. Inspecting the equation, we see that its solution must be a number which is equal to its successor. Numbers with this remarkable property are quite unlike the numbers we are familiar with. As such, they are surely worthy of further study."

Yes, yes they are.

comment by DanArmak · 2012-09-06T22:53:51.429Z · LW(p) · GW(p)

Consider the equation x + 1 = x.

(Edited again: this example is wrong, and thanks to Kindly for pointing out why. CronoDAS gives a much better answer.)

Curiously enough, the Peano axioms don't seem to say that S(n)!=n. Lo, a finite model of Peano:

X = {0, 1}
Where: 0+0 = 0; 0+1 = 1+0 = 1+1 = 1
And the usual equality operation.

In this model, x + 1 = x has a solution, namely x = 1. Not a very interesting model, but it serves to illustrate my point below.

sometimes a contradiction does point to a way in which you can revise your assumptions to gain access to "intriguing new ideas", but sometimes it just indicates that your assumptions are wrong.

Contradiction in conclusions always indicates a contradiction in assumptions. And you can always use different assumptions to get different, and perhaps non contradictory, conclusions. The usefulness and interest of this varies, of course. But proof by contradiction remains valid even if it gives you an idea about other interesting assumptions you could explore.

And that's why I feel it's confusing and counterproductive to use ironic language in one example, and serious proof by contradiction in another, completely analogous example, to indicate that in one case you just said "meh, a contradiction, I was wrong" while in the other you invented a cool new theory with new assumptions. The essence of math is formal language and it doesn't mix well with irony, the best of which is the kind that not all readers notice.

Replies from: VKS, Kindly
comment by VKS · 2012-09-07T05:05:09.393Z · LW(p) · GW(p)

But that's the entire point of the quote! That mathematicians cannot afford the use of irony!

Replies from: DanArmak
comment by DanArmak · 2012-09-07T08:54:39.256Z · LW(p) · GW(p)

Yes. My goal wasn't to argue with the quote but to improve its argument. The quote said:

But could mathematicians afford to use this literary device? How would a reader be able to tell the difference in intent between what I have just written and the following superficially similar passage?

And I said, it's not just superficially similar, it's exactly the same and there's no relevant difference between the two that would guide us to use irony in one case and not in the other (or as readers, to perceive irony in one case and serious proof by contradiction in the other).

comment by Kindly · 2012-09-06T23:24:02.804Z · LW(p) · GW(p)

Your model violates the property that if S(m) = S(n), then m=n, because S(1) = S(0) yet 1 != 0. You might try to patch this by changing the model so it only has 0 as an element, but there is a further axiom that says that 0 is not the successor of any number.

Together, the two axioms used above can be used to show that the natural numbers 0, S(0), S(S(0)), etc. are all distinct. The axiom of induction can be used to show that these are all the natural numbers, so that we can't have some extra "floating" integer x such that S(x) = x.
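In outline, the standard argument runs (a restatement of the above, nothing new):

\[
S(0) \neq 0 \quad\text{(0 is not the successor of any number)};\qquad
S(k) \neq k \;\Longrightarrow\; S(S(k)) \neq S(k) \quad\text{(injectivity of } S\text{)},
\]

so induction gives S(n) ≠ n for every n, ruling out any "floating" element equal to its own successor.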

Replies from: DanArmak
comment by DanArmak · 2012-09-07T08:47:01.877Z · LW(p) · GW(p)

Right. Thanks.

comment by CCC · 2012-09-07T07:54:57.350Z · LW(p) · GW(p)

The only relevant difference that I can see is that, in the first paragraph, the solutions are explicitly limited to the rational numbers; in the second case, the solutions are not explicitly limited to the reals.

comment by IlyaShpitser · 2012-09-14T07:30:35.114Z · LW(p) · GW(p)

There are lots of total orderings on the complex numbers. For example:

a + bi [>] c + di iff a > c or (a = c and b >= d).

In fact, if you believe the axiom of choice there are "nice total orders" for any set at all.

Replies from: Kindly, Spinning_Sandwich, DanArmak
comment by Kindly · 2012-09-14T13:36:49.772Z · LW(p) · GW(p)

Importantly, however, the complex numbers have no total ordering that respects addition and multiplication. In other words, there's no large set of "positive complex numbers" closed under both operations.

This is also the reason why the math in this XKCD strip doesn't actually work.
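The standard one-line argument, for reference: in a total order compatible with the field operations, i > 0 would give -1 = i² > 0, and i < 0 would give -1 = (-i)² > 0; either way

\[
0 = 1 + (-1) = (-1)^2 + (-1) > 0,
\]

a contradiction.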

Replies from: Spinning_Sandwich
comment by Spinning_Sandwich · 2012-09-14T23:05:16.406Z · LW(p) · GW(p)

You can still find divisors for Gaussian integers. If x, y, and xy are all Gaussian integers, which will be trivially fulfilled for any x when y=1, then x, y both divide xy.

You can then generalize the \sigma function by summing over all the divisors of z and dividing by |z|.

The resulting number \sigma(z) lies in C (or maybe Q + iQ), not just Q, but it's perfectly well defined.

Replies from: Kindly
comment by Kindly · 2012-09-15T01:07:06.751Z · LW(p) · GW(p)

If you sum over all the divisors of z, the result is perfectly well defined; however, it's 0. Whenever x divides z, so does -x.

Over the integers, this is solved by summing over all positive divisors. However, there's no canonical choice of what divisors to consider positive in the case of Gaussian integers, and making various arbitrary choices (like summing over all divisors in the upper half-plane) leads to unsatisfying results.
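For instance (an easy check), the divisors of 2 in the Gaussian integers are

\[
\pm 1,\ \pm i,\ \pm(1+i),\ \pm(1-i),\ \pm 2,\ \pm 2i,
\]

which pair off as x, -x and sum to zero.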

Replies from: Spinning_Sandwich
comment by Spinning_Sandwich · 2012-09-16T03:24:48.316Z · LW(p) · GW(p)

That's like saying the standard choice of branch cut for the complex logarithm is arbitrary.

And?

When you complexify, things get messier. My point is that making a generalization is possible (though it's probably best to sum over integers with 0 \leq arg(z) < \pi, as you pointed out), which is the only claim I'm interested in disputing. Whether it's nice to look at is irrelevant to whether it's functional enough to be punnable.

Replies from: Kindly
comment by Kindly · 2012-09-16T04:21:00.956Z · LW(p) · GW(p)

You're right -- the generalization works.

Mainly what I don't like about it is that \sigma(z) no longer has the nice properties it had over the integers: for example, it's no longer multiplicative. This doesn't stop Gaussian integers from being friendly, though, and the rest is a matter of aesthetics.

comment by Spinning_Sandwich · 2012-09-14T10:53:57.836Z · LW(p) · GW(p)

The well-ordering principle doesn't really have any effect on canonical orderings, like that induced by the traditional less-than relation on the real numbers.

This doesn't affect the truth of your claim, but I do think that DanArmak's point was quite separate from the language he chose. He might instead have worded it as having no real solution, so that any solution must be not-real.

comment by DanArmak · 2012-09-14T09:30:57.385Z · LW(p) · GW(p)

Gah. You're quite right. I should refrain from making rash mathematical statements. Thank you.

comment by DanielLC · 2012-09-11T02:37:45.582Z · LW(p) · GW(p)

The first one shows that assuming that there's a rational solution leads to contradiction, then drops the subject. The second one shows that assuming that there's a real solution leads to a contradiction, then suggests to investigate the non-reals. How are you supposed to tell which drops the subject and which suggests investigation?

comment by gwern · 2012-09-07T16:40:08.755Z · LW(p) · GW(p)

I see no relevant difference between the two cases.

Isn't that the entire point? I see this as a mathematical version of the modus tollens/ponens point made elsewhere in this page.

Replies from: DanArmak
comment by DanArmak · 2012-09-07T18:05:00.960Z · LW(p) · GW(p)

The quote says,

could mathematicians afford to use this literary device? How would a reader be able to tell the difference in intent between what I have just written and the following superficially similar passage?

This seems to me to mean: the two cases are different; the first is appropriately handled by serious proof-by-contradiction, while the second is appropriately handled by irony. But readers may not be able to tell the difference, because the two texts are similar and irony is hard to identify reliably. So mathematicians should not use irony.

Whereas I would say: the two cases are the same, and irony or seriousness are equally appropriate to both. If readers could reliably identify irony, they would correctly deduce that the author treated the two cases differently, which is in fact a wrong approach. So readers are better served by treating both texts as serious.

I'm not saying mathematicians should / can effectively use irony; I'm saying the example is flawed so that it doesn't demonstrate the problems with irony.

Replies from: gwern, Kindly
comment by gwern · 2012-09-07T21:55:23.975Z · LW(p) · GW(p)

The difference is that mathematicians apply modus tollens and reject sqrt2 being rational, but apply modus ponens and accept the existence of i; why? Because apparently the resultant extensions of theories justify this choice - and this is the irony, the reason one's beliefs are in discordance with one's words/proof and the reader is expected to appreciate this discrepancy.

But what one regards as a useful enough extension to justify a modus ponens move is something others may not appreciate or will differ from field to field, and this is a barrier to understanding.

Replies from: DanArmak, DanArmak
comment by DanArmak · 2012-09-07T22:57:56.861Z · LW(p) · GW(p)

I hadn't considered that irony. I was thinking about the explicit irony of the text itself in its proof of sqrt(2) being irrational. The reader is expected to know the punchline, that sqrt(2) is irrational but that irrational numbers are important and useful. So the text that (ironically) appears to dismiss the concept of irrational numbers is in fact wrong in its dismissal, and that is a meta-irony.

...I feel confused by the meta levels of irony. Which strengthens my belief that mathematical proofs should not be ironical if undertaken seriously.

Replies from: gwern
comment by gwern · 2012-09-07T23:03:36.836Z · LW(p) · GW(p)

Yes, I feel similarly about this modus stuff; it seems simple and trivial, but the applications become increasingly subtle and challenging, especially when people aren't being explicit about the exact reasoning.

comment by DanArmak · 2012-09-07T22:51:53.407Z · LW(p) · GW(p)

If mathematicians behaved simply as you describe, then those resultant extension theories would never have been developed, because everyone would have applied modus tollens in a not-yet-proven-useful case. (Disclaimer: I know nothing about the actual historical reasons for the first explorations of complex numbers.)

Therefore, it's best for mathematicians to always keep both the M-T and M-P cases in mind when using a proof by contradiction. Of course, a lot of the time the contradiction arises due to theorems already proven from axioms, and what happens if any one of the axioms in a theory is removed is usually well explored.

comment by Kindly · 2012-09-07T22:48:26.047Z · LW(p) · GW(p)

You're drawing the parallel differently from the quote's author. The second example requires assuming the existence of complex numbers to resolve the contradiction. The first example requires assuming, not the existence of irrational numbers (we already know about those, or we wouldn't be asking the question!), but the existence of integers which are both even and odd. As far as I know, there are no completely satisfactory ways of resolving the latter situation.

comment by CCC · 2012-09-05T06:55:11.078Z · LW(p) · GW(p)

In the first case, starting with p such that the highest power of 2 that divides p is an integer power of 2 (2^k for some integer k); then the highest power of 2 that divides p² is 2^(2k); then the highest power of 2 that divides 2q² is also 2^(2k); then the highest power of 2 that divides q² is 2^(2k-1); therefore q must be a multiple of 2^(k-0.5), a noninteger power of 2.

This implies that there is a number 2^(0.5). It makes no claims as to whether or not this number is rational, or integer; it merely claims that such a number must exist. (Consider: if I had started instead with the equation x²-4=0, I would have ended up showing that a number of the form 4^(0.5) must exist - that number is rational, is indeed an integer).

Now, I think I can prove that an integer q which is a multiple of 2^(k-0.5) but which is not a multiple of 2^k, for integer k, does not exist; but I can only complete that proof by knowing in advance that 2^0.5 is irrational, so I can't use it to prove the irrationality of 2^0.5. I can easily prove that a rational number of the form 4^(k-0.5) for integer k does exist; indeed, an infinite number of such numbers exist (examples include 2, 8, 32).

No matter how forcefully that first passage conveys the irrationality of √2, it does not prove it.

Replies from: VKS
comment by VKS · 2012-09-05T07:28:16.554Z · LW(p) · GW(p)

The paragraph, of course, was talking about integer powers of 2 that divide p. As in, the largest number 2^k such that 2^k divides p and k is an integer.

The largest real power of 2 that divides p is, of course, p itself, as 2^log_2(p) = p.

Replies from: CCC
comment by CCC · 2012-09-06T07:53:26.937Z · LW(p) · GW(p)

Looking over my post again, after a good night's sleep, I see that it wasn't as coherent as it appeared to me yesterday. Let me see if I can put my point a little more clearly.

The paragraph centers its claim of the irrationality of √2 on the idea that p² contains exactly twice as many powers of 2 as p does. But that is only true because √2 is irrational, making the demonstration a circular proof.

Consider. If √2 were rational, in the form of z/y for some coprime integers z and y, then it would be easy to find an integer that is not itself an integer power of 2, but whose square is an integer power of 2; z would be such a number.

Replies from: Kindly, None
comment by Kindly · 2012-09-06T12:22:23.167Z · LW(p) · GW(p)

The proof assumes unique prime factorization. Once we factor p as 2^a 3^b 5^c etc., then we know that p^2 factors as 2^(2a) 3^(2b) 5^(2c) etc. This is (implicitly) how we know that p^2 contains exactly twice as many powers of 2 as p.
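In symbols (restating the same point): write p = 2^a · m with m odd; then

\[
p^2 = 2^{2a} \, m^2, \qquad m^2 \ \text{odd},
\]

so the exponent of 2 exactly doubles - no appeal to the irrationality of √2 is needed.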

Replies from: CCC
comment by CCC · 2012-09-06T14:47:44.869Z · LW(p) · GW(p)

If √2 is rational, then √2 can be written as z/y for some integers z and y, where z and y are coprime. Then, 2=z²/y².

Consider the hypothetical integer z. It is equal to √2*y. Since y and z are coprime, y cannot contain a factor of √2. Thus, z does not contain a factor of 2; the highest integer power of 2 that is a factor of z is 2^0.

On the other hand, z² does have a factor of 2; it is equal to 2*y² (since y has no factor of √2, y² therefore has no factor of 2).

Therefore, to claim that p² contains exactly twice as many powers of 2 as p is exactly equivalent to claiming that √2 is irrational.

Replies from: Kindly, Document
comment by Kindly · 2012-09-06T15:54:49.535Z · LW(p) · GW(p)

Even if √2 is rational, it is not an integer, and (especially in your second paragraph) you are trying to do things with it that only make sense with integers.

I don't think that continuing this conversation can be productive, since you seem to be objecting to standard techniques. A rigorous textbook in number theory can probably explain the proof that √2 is irrational more thoroughly, and you can follow back the lemmas and see exactly where your confusion lies.

comment by Document · 2012-09-09T18:21:59.633Z · LW(p) · GW(p)

Isn't the point of math that all mathematical truths are logically equivalent? (In before Gödel.)

Replies from: Eugine_Nier, Kindly
comment by Eugine_Nier · 2012-09-10T02:34:24.610Z · LW(p) · GW(p)

Depends on the axioms you're using.

comment by Kindly · 2012-09-09T18:29:59.797Z · LW(p) · GW(p)

Well, there's an informal notion that if two theorems T1 and T2 are both true, yet T1 <=> T2 is much easier to prove than either T1 or T2, then the two are equivalent. (There's also the formal notion that two axioms are equivalent if assuming either one lets you prove the other, but I don't think that's especially relevant here.)

That's not close to being the most confused part of the comment you're replying to.

comment by [deleted] · 2012-09-06T13:17:16.937Z · LW(p) · GW(p)

But [p^2 contains exactly twice as many powers of 2 as p does] is only true because √2 is irrational, making the demonstration a circular proof.

Notice that you are claiming that all possible proofs of the statement "p^2 contains twice as many powers of 2 as p" require asserting without proof that sqrt(2) is irrational.

Why does the prime factorization of integers depend upon something that is, if not irrational, at least certainly not an integer? (Proof: 1^2 = 1, 2^2 = 4, and x <= x^2 by induction.)
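Spelled out, that parenthetical proof:

\[
1^2 = 1 < 2 < 4 = 2^2,
\]

and squaring is increasing on the positive integers, so an integer x with x² = 2 would have to satisfy 1 < x < 2 - and no such integer exists.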

comment by Jay_Schweikert · 2012-09-02T17:48:43.816Z · LW(p) · GW(p)

Qhorin Halfhand: The Watch has given you a great gift. And you only have one thing to give in return: your life.

Jon Snow: I'd gladly give my life.

Qhorin Halfhand: I don’t want you to be glad about it! I want you to curse and fight until your heart’s done pumping.

--Game of Thrones, Season 2.

Replies from: Rhwawn, Ezekiel
comment by Rhwawn · 2012-09-03T00:20:30.057Z · LW(p) · GW(p)

Reminds me of Patton:

No man ever won a war by dying for his country. Wars were won by making the other poor bastard die for his.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-09-05T22:37:28.922Z · LW(p) · GW(p)

I especially like the way he calls the enemy "the other poor bastard". And not, say, "the bastard".

comment by Ezekiel · 2012-09-02T22:54:07.820Z · LW(p) · GW(p)

And you only have one thing to give in return: your life.

Also effort, expertise, and insider information on one of the most powerful Houses around. And magic powers.

Replies from: RomanDavis
comment by RomanDavis · 2012-09-03T05:25:55.739Z · LW(p) · GW(p)

He has magic powers?

Replies from: Ezekiel
comment by Ezekiel · 2012-09-03T05:52:19.664Z · LW(p) · GW(p)

Rot13'd for minor spoiling potential: Ur'f n jnet / fxvapunatre.

comment by Ezekiel · 2012-09-02T01:16:02.972Z · LW(p) · GW(p)

My brain technically-not-a-lies to me far more than it actually lies to me.

-- Aristosophy (again)

comment by lukeprog · 2012-09-09T23:56:38.982Z · LW(p) · GW(p)

The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t easily be measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily isn’t important. This is blindness. The fourth step is to say that what can’t easily be measured really doesn’t exist. This is suicide.

Charles Handy describing the Vietnam-era measurement policies of Secretary of Defense Robert McNamara

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-08T08:41:56.026Z · LW(p) · GW(p)

The following quotes were heavily upvoted, but then turned out to be made by a Will Newsome sockpuppet who edited the quote afterward. The original comments have been banned. The quotes are as follows:

If dying after a billion years doesn't sound sad to you, it's because you lack a thousand-year-old brain that can make trillion-year plans.

— Aristosophy

One wish can achieve as much as you want. What the genie is really offering is three rounds of feedback.

— Aristosophy

If anyone objects to this policy response, please PM me so as to not feed the troll.

Replies from: wedrifid, Document, army1987, Incorrect
comment by wedrifid · 2012-09-08T09:31:05.633Z · LW(p) · GW(p)

The following quotes were heavily upvoted, but then turned out to be made by a Will Newsome sockpuppet who edited the quote afterward. The original comments have been banned. The quotes are as follows:

Defection too far. Ban Will.

Replies from: Armok_GoB
comment by Armok_GoB · 2012-09-08T18:08:32.372Z · LW(p) · GW(p)

Will is a cute troll.

Hmm, after observing it a few times on various forums I'm starting to consider that having a known, benign resident troll might keep away more destructive ones. No idea how it works but it doesn't seem that far-fetched given all the strange territoriality-like phenomena occasionally encountered in the oddest places.

Replies from: wedrifid
comment by wedrifid · 2012-09-08T18:26:02.336Z · LW(p) · GW(p)

Will is a cute troll.

I've heard this claimed.

This behavior isn't cute.

Hmm, after observing it a few times on various forums I'm starting to consider that having a known, benign resident troll might keep away more destructive ones. No idea how it works but it doesn't seem that far-fetched given all the strange territoriality-like phenomena occasionally encountered in the oddest places.

This would be somewhat in keeping with findings in Cialdini. One defector kept around and visibly punished or otherwise made to look low status is effective at preventing that kind of behavior. (If not Cialdini, then Greene. Probably both.)

Replies from: Incorrect, Incorrect
comment by Incorrect · 2012-09-08T22:20:25.033Z · LW(p) · GW(p)

This behavior isn't cute.

Yes it is, and not just a little bit.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-09-08T22:48:44.344Z · LW(p) · GW(p)

The deliberate sabotage of threads? How cute will it be if he destroys the whole forum?

comment by Incorrect · 2012-09-09T01:37:12.385Z · LW(p) · GW(p)

It's really mean to say someone isn't cute, and although this entire thread isn't very productive, I find it mean that my comment rejecting the meanness to WN was selectively deleted.

Replies from: wedrifid
comment by wedrifid · 2012-09-09T03:45:55.818Z · LW(p) · GW(p)

It's really mean to say someone isn't cute

Alternately, it is toxic to describe trolling behavior as 'cute' when it isn't, and hasn't been either cute or particularly witty or intelligent in a long time. This. Behavior. Is. Not. Cute. It is lame.

Replies from: Incorrect
comment by Incorrect · 2012-09-09T03:53:51.824Z · LW(p) · GW(p)

I'd rather live in a world where even if we disagree with each other, annoy each other, or waste each other's time we still don't say anybody isn't cute.

The opposite of cute is disgusting and is not a concept that should be applied to humans.

Replies from: TimS, wedrifid
comment by TimS · 2012-09-09T04:08:36.335Z · LW(p) · GW(p)

Example 54084954 of the fact that truth-seeking and politeness are not correlated.

Also, a little fallacy of gray. Someone could be zero on the cute/disgusting scale, even if it were so awful to label them disgusting.

Replies from: Incorrect
comment by Incorrect · 2012-09-09T04:09:26.394Z · LW(p) · GW(p)

Cuteness is a subjective evaluation, a way to interpret reality, not a fact.

comment by wedrifid · 2012-09-09T03:56:46.066Z · LW(p) · GW(p)

I'd rather live in a world where even if we disagree with each other, annoy each other, or waste each other's time we still don't say anybody isn't cute.

There is a difference between rejecting a "Will is a cute troll" meme being used to justify sock-puppet bait-and-switch abuse---by specifically referring to the behavior being not-cute---and simply saying that someone is not cute apropos of nothing. Your equivocation is either disingenuous or just silly.

Replies from: Alicorn, Incorrect
comment by Alicorn · 2012-09-09T04:12:04.813Z · LW(p) · GW(p)

I have begun to suspect that Incorrect is a Will sockpuppet. Please cease to feed.

Replies from: wedrifid, Incorrect
comment by wedrifid · 2012-09-09T04:41:35.945Z · LW(p) · GW(p)

I have begun to suspect that Incorrect is a Will sockpuppet.

The thought crossed my mind when the edit to the ancestor made it clear that Incorrect was trolling rather than well meaning yet confused. I clicked "close" rather than "comment" on the "It looks like Incorrect is Will, I'm not going to feed him here or elsewhere", for obvious reasons.

Please cease to feed.

Please exterminate.

comment by Incorrect · 2012-09-09T04:18:18.036Z · LW(p) · GW(p)

Here's a conversation I had with Will a while back:

http://lesswrong.com/lw/cw1/open_problems_related_to_solomonoff_induction/6rlr?context=1#6rlr

Replies from: khafra
comment by khafra · 2012-09-12T12:54:23.339Z · LW(p) · GW(p)

But surely you agree that tricking people into saying "I think Will is Incorrect" is exactly the sort of thing that would amuse him?

Replies from: Kindly
comment by Kindly · 2012-09-12T12:57:55.525Z · LW(p) · GW(p)

This had better not start a trend of suspecting people with adjectival usernames to be sockpuppets.

Replies from: Alicorn
comment by Alicorn · 2012-09-12T16:26:07.392Z · LW(p) · GW(p)

Yours could also be interpreted as an adverb.

comment by Incorrect · 2012-09-09T04:04:00.150Z · LW(p) · GW(p)

Even calling someone's behavior non-cute is mean. Even meanness is cute. Once you start calling humans or the things they do non-cute you open the door to finding humans disgusting.

Even if we were to assume his behavior was trollish, damaging to lesswrong, and/or unproductive, that shouldn't make it non-cute.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-09-12T09:44:27.008Z · LW(p) · GW(p)

By that same argument murder is cute, rape is cute, arson is cute, genocide is cute -- and you prefer to live in a world where people call these things cute than in a world where they call them non-cute.

You're using the word "cute" wrongly.

comment by Document · 2012-09-09T04:18:34.842Z · LW(p) · GW(p)

Edited how?

Replies from: Davorak
comment by Davorak · 2012-09-12T14:25:03.755Z · LW(p) · GW(p)

If I remember correctly the second quote was edited to be something along the lines of "will_newsome is awesome."

Replies from: adamisom, MBlume
comment by adamisom · 2012-12-11T23:03:14.328Z · LW(p) · GW(p)

That is cute... no? More childish than evil. He should just be warned that that's trolling.

There really should be a comment edit history feature. Maybe it only activates once a comment reaches +10 karma.

comment by MBlume · 2012-09-17T00:31:00.088Z · LW(p) · GW(p)

It was edited to add something like "Will Newsome is such a badass" -- Socrates

comment by A1987dM (army1987) · 2012-09-09T08:40:12.185Z · LW(p) · GW(p)

I do find some of Will Newsome's contributions interesting. OTOH, this behaviour is pretty fucked up. (I was wondering how hard it would be to implement a software feature to show the edit history of comments.)

comment by Incorrect · 2012-09-08T18:27:10.152Z · LW(p) · GW(p)

If dying after a billion years doesn't sound sad to you, it's because you lack a thousand-year-old brain that can make trillion-year plans.

If only the converse were true...

Replies from: Hawisher
comment by Hawisher · 2012-09-17T02:04:16.905Z · LW(p) · GW(p)

"...if you lack a thousand-year-old brain that can make trillion-year plans, dying after a billion years doesn't sound sad to you"?

I'm confused as to what you're trying to say. Are you saying that dying after a billion years sounds sad to you?

Replies from: nshepperd, Incorrect
comment by nshepperd · 2012-09-17T04:08:31.714Z · LW(p) · GW(p)

"If you lack a thousand-year-old brain that can make trillion-year plans, it's because dying after a billion years doesn't sound sad to you."

I think the meaning is that it's unfortunate that thinking that dying after a billion years is sad doesn't by itself give you the power to live that long. Maybe.

Replies from: Hawisher
comment by Hawisher · 2012-09-17T17:28:39.008Z · LW(p) · GW(p)

I was never one for formal logic, but isn't that the contrapositive? I was under the impression that the converse of p then q was q then p.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-09-18T02:04:35.893Z · LW(p) · GW(p)

I was under the impression that the converse of p then q was q then p.

Yes and that's what nshepperd wrote.

Replies from: Hawisher
comment by Hawisher · 2012-09-18T19:27:27.091Z · LW(p) · GW(p)

Oh wow, never mind. My brain was temporarily broken. Is it considered bad etiquette here to retract incorrect comments?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-09-19T04:15:41.933Z · LW(p) · GW(p)

When you retract, the comment is simply struck through, not deleted, so no.

comment by Incorrect · 2012-09-17T04:19:29.423Z · LW(p) · GW(p)

Are you saying that dying after a billion years sounds sad to you?

And therefore you would have a thousand-year-old brain that can make trillion-year plans.

Replies from: MugaSofer
comment by MugaSofer · 2012-10-01T11:58:44.577Z · LW(p) · GW(p)

Seems legit.

comment by imaxwell · 2012-09-03T22:01:52.552Z · LW(p) · GW(p)

The only road to doing good shows, is doing bad shows.

  • Louis C.K., on Reddit
Replies from: Desrtopa
comment by Desrtopa · 2012-09-04T17:36:56.219Z · LW(p) · GW(p)

Unfortunately, doing bad shows is not a route that leads only to doing good shows.

Replies from: imaxwell
comment by imaxwell · 2012-09-05T17:28:45.446Z · LW(p) · GW(p)

True, and I hope no one thinks it is. So we can conclude that doing bad shows at first is not a strong indicator of whether you have a future as a showman.

I guess I see the quote as being directed at people who are so afraid of doing a bad show that they'll never get in enough practice to do a good show. Or they practice by, say, filming themselves telling jokes in their basement and getting critiques from their friends who will not be too mean to them. In either case, they never get the amount of feedback they would need to become good. For such a person to hear "Yes, you will fail" can be oddly liberating, since it turns failure into something accounted for in their longer-term plans.

comment by ChrisHallquist · 2012-09-03T06:22:54.356Z · LW(p) · GW(p)

“Why do you read so much?”

Tyrion looked up at the sound of the voice. Jon Snow was standing a few feet away, regarding him curiously. He closed the book on a finger and said, “Look at me and tell me what you see.”

The boy looked at him suspiciously. “Is this some kind of trick? I see you. Tyrion Lannister.”

Tyrion sighed. “You are remarkably polite for a bastard, Snow. What you see is a dwarf. You are what, twelve?”

“Fourteen,” the boy said.

“Fourteen, and you’re taller than I will ever be. My legs are short and twisted, and I walk with difficulty. I require a special saddle to keep from falling off my horse. A saddle of my own design, you may be interested to know. It was either that or ride a pony. My arms are strong enough, but again, too short. I will never make a swordsman. Had I been born a peasant, they might have left me out to die, or sold me to some slaver’s grotesquerie. Alas, I was born a Lannister of Casterly Rock, and the grotesqueries are all the poorer. Things are expected of me. My father was the Hand of the King for twenty years. My brother later killed that very same king, as it turns out, but life is full of these little ironies. My sister married the new king and my repulsive nephew will be king after him. I must do my part for the honor of my House, wouldn’t you agree? Yet how? Well, my legs may be too small for my body, but my head is too large, although I prefer to think it is just large enough for my mind. I have a realistic grasp of my own strengths and weaknesses. My mind is my weapon. My brother has his sword, King Robert has his warhammer, and I have my mind… and a mind needs books as a sword needs a whetstone, if it is to keep its edge.” Tyrion tapped the leather cover of the book. “That’s why I read so much, Jon Snow.”

--George R. R. Martin, A Game of Thrones

Replies from: Plubbingworth, ArisKatsaris
comment by Plubbingworth · 2012-09-12T15:37:05.718Z · LW(p) · GW(p)

I'm surprised at how often I have to inform people of this... I have mild scoliosis, and so I usually prefer sitting down and kicking up my feet, usually with my work in hand. Coming from a family who appreciates backbreaking work is rough when the hard work is even harder and the pain longer-lasting... which would be slightly more bearable if the aforementioned family did not see reading MYSTERIOUS TEXTS on a Kindle and using computers for MYSTERIOUS PURPOSES as signs of laziness and devotion to silly frivolities.

I have a sneaking suspicion that this is not a very new situation.

comment by ArisKatsaris · 2012-09-03T08:37:29.317Z · LW(p) · GW(p)

I think the quote could be trimmed to its last couple sentences and still maintain the relevant point.

Replies from: RobinZ, ChrisHallquist
comment by RobinZ · 2012-09-03T16:22:57.486Z · LW(p) · GW(p)

I disagree, in fact. That books strengthen the mind is baldly asserted, not supported, by this quote - the rationality point I see in it is related to comparative advantage.

comment by ChrisHallquist · 2012-09-03T08:43:42.116Z · LW(p) · GW(p)

Oh, totally. But I prefer the full version; it's really a beautifully written passage.

comment by AlexMennen · 2012-09-04T02:11:40.108Z · LW(p) · GW(p)

Discovery is the privilege of the child, the child who has no fear of being once again wrong, of looking like an idiot, of not being serious, of not doing things like everyone else.

Alexander Grothendieck

Replies from: Fyrius, bbleeker
comment by Fyrius · 2012-09-12T14:47:12.645Z · LW(p) · GW(p)

...screw it, I'm not growing up.

comment by Sabiola (bbleeker) · 2012-09-18T10:04:57.066Z · LW(p) · GW(p)

I remember being very much afraid of all those things as a child. I'm getting better now.

comment by simplicio · 2012-09-01T16:06:40.524Z · LW(p) · GW(p)

...a good way of thinking about minimalism [about truth] and its attractions is to see it as substituting the particular for the general. It mistrusts anything abstract or windy. Both the relativist and the absolutist are impressed by Pilate's notorious question 'What is Truth?', and each tries to say something useful at the same high and vertiginous level of generality. The minimalist can be thought of turning his back on this abstraction, and then in any particular case he prefaces his answer with the prior injunction: you tell me. This does not mean, 'You tell me what truth is.' It means, 'You tell me what the issue is, and I will tell you (although you will already know, by then) what the truth about the issue consists in.' If the issue is whether high tide is at midday, then truth consists in high tide being at midday... We can tell you what truth amounts to, if you first tell us what the issue is.

There is a very powerful argument for minimalism about truth, due to the great logician Gottlob Frege. First, we should notice the transparency property of truth. This is the fact that it makes no difference whether you say that it is raining, or it is true that it is raining, or true that it is true that it is raining, and so on forever. But if 'it is true that' introduced some substantial, robust property of a judgment, how could this be so? Consider, for example, a pragmatism that attempts some equation between truth and utility. Then next to the judgment 'it is raining' we might have 'it is useful to believe that it is raining.' But these are entirely different things! To assess the first we direct our attention to the weather. To assess the second we direct our attention to the results of believing something about the weather - a very different investigation.

Let us return to Pilate. Where does minimalism about truth leave him? It suggests that when he asked this question, he was distracting himself and his audience from his real job, which was to find out whether to uphold certain specific historical charges against a defendant. Thus, if I am innocent, and I come before a judge, I don't want airy generalities about the nature of truth. I want him to find that I did not steal the watch if I did not steal the watch. I want him to rub his nose in the issue. I want a local judgment about a local or specific event, supposed to have happened in a particular region of time and space.

(Simon Blackburn, Truth)

Replies from: Alejandro1, buybuydandavis
comment by Alejandro1 · 2012-09-01T17:14:07.460Z · LW(p) · GW(p)

The pithiest definition of Blackburn's minimalism I've read is in his review of Nagel's The Last Word:

We can see why this is so if we put it in terms of what we can call Ramsey’s ladder. This takes us from p to it is true that p, to it is really true that p, to it is really a fact that it is true that p, and if we like to it is really a fact about the independent order of things ordained by objective Platonic normative structures with which we resonate in harmony that it is true that p. For the metatheoretical minimalist, Ramsey’s ladder is horizontal. The view from the top is just the same as the view from the bottom, and the view is p.

It is followed by an even pithier response to how Nagel refutes relativism (pointing that our first-order conviction that 2+2=4 or that murder is wrong is more certain than any relativist doubts) and thinks that this establishes a quasi-Platonic absolutism as the only alternative:

This is… taking advantage of the horizontal nature of Ramsey’s ladder to climb it, and then announce a better view from the top.

comment by buybuydandavis · 2012-09-03T11:12:35.617Z · LW(p) · GW(p)

"What is truth" is a pretty good question, though a better one is "what do we do with truths?"

We do a lot of things with truths; truth can serve a lot of different functions. The problem comes when people doing different things with their truths talk to each other.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-01T19:56:47.219Z · LW(p) · GW(p)

"Nontrivial measure or it didn't happen." -- Aristosophy

(Who's Kate Evans? Do we know her? Aristosophy seems to have rather a lot of good quotes.)

Replies from: Alicorn, Will_Newsome
comment by Alicorn · 2012-09-01T20:08:35.236Z · LW(p) · GW(p)

*cough*

"I made my walled garden safe against intruders and now it's just a walled wall." -- Aristosophy

Replies from: RomanDavis
comment by RomanDavis · 2012-09-01T22:30:40.650Z · LW(p) · GW(p)

Attachment? This! Is! SIDDHARTHA!

Is that you? That's ingenious.

For more rational flavor:

Live dogmatic, die wrong, leave a discredited corpse.

This should be the summary for entangled truths:

To find the true nature of a thing, find the true nature of all other things and look at what is left over.

how to seem and be deep:

Blessed are those who can gaze into a drop of water and see all the worlds and be like who cares that's still zero information content.

Dark Arts:

The master said: "The master said: "The master said: "The master said: "There is no limit to the persuasive power of social proof.""""

More Dark arts:

One wins a dispute, not by minimising potential counterarguments' plausibility, but by maximising their length.

Luminosity:

Have you accepted your brain into your heart?

Replies from: Alicorn
comment by Alicorn · 2012-09-01T22:42:11.409Z · LW(p) · GW(p)

No, I'm not her. I don't know who she is, but her Twitter is indeed glorious. (And Google Reader won't let me subscribe to it the way I'm subscribed to other Twitters, rar.)

Replies from: RomanDavis, Unnamed
comment by RomanDavis · 2012-09-01T22:51:42.434Z · LW(p) · GW(p)

She's got to be from here; here's "learning biases can hurt people":

Heuristics and biases research: gaslighting the human race?

Cryonics:

"Are you signed up for Christonics?" "No, I'm still prochristinating."

I'm starting to think this is someone I used to know from tvtropes.

comment by Unnamed · 2012-09-02T07:24:06.828Z · LW(p) · GW(p)

Twitter RSS feed

Replies from: Pluvialis, Alicorn
comment by Pluvialis · 2012-09-05T14:54:06.680Z · LW(p) · GW(p)

I can confirm this works for me, in Google Reader (and my thanks to you also).

comment by Alicorn · 2012-09-02T17:16:03.330Z · LW(p) · GW(p)

I found that, but it won't let me subscribe to it with Google Reader, only with other things I don't use.

Replies from: palladias, Unnamed
comment by palladias · 2012-09-04T02:41:57.389Z · LW(p) · GW(p)

This worked for me in Google Reader: http://api.twitter.com/1/statuses/user_timeline.rss?screen_name=aristosophy
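
For anyone who wants to check whether a URL serves parseable RSS at all before blaming the reader, here is a minimal diagnostic sketch in Python. It assumes the third-party feedparser library and that the v1 endpoint above still responds; neither is anything the commenters themselves used.

    import feedparser  # third-party: pip install feedparser

    # The v1 RSS endpoint suggested above (assumption: still live).
    URL = ("http://api.twitter.com/1/statuses/user_timeline.rss"
           "?screen_name=aristosophy")

    feed = feedparser.parse(URL)
    if feed.bozo:  # feedparser sets this flag when the document is not a well-formed feed
        print("Not a parseable feed:", feed.bozo_exception)
    else:
        print("Feed title:", feed.feed.get("title", "<none>"))
        for entry in feed.entries[:5]:
            print("-", entry.title)

If this prints tweets, the feed itself is fine and the problem is on Reader's side.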

Replies from: Alicorn
comment by Alicorn · 2012-09-04T06:27:52.469Z · LW(p) · GW(p)

I get the same results as the other one above.

Replies from: palladias
comment by palladias · 2012-09-04T14:37:34.278Z · LW(p) · GW(p)

Weird. You can try the last solution here: http://saravananthirumuruganathan.wordpress.com/2011/05/29/how-to-follow-a-twitter-account-feed-using-rss-reader-in-new-twitter/. I didn't need it this time, but it worked the last time I was having Twitter + Reader problems.

Replies from: Alicorn
comment by Alicorn · 2012-09-04T16:50:54.860Z · LW(p) · GW(p)

I found that by Googling before mentioning it here and tried all three things. They don't work. I'm super-confused.

comment by Unnamed · 2012-09-02T20:05:04.579Z · LW(p) · GW(p)

That's odd - I did subscribe to it with Google Reader, right before I posted the link.

Replies from: Alicorn
comment by Alicorn · 2012-09-02T20:12:02.364Z · LW(p) · GW(p)

My bookmarklet says "can't find a feed", and the dropdown menu doesn't offer Google Reader as far as I can tell. How did you do it?

Replies from: Unnamed
comment by Unnamed · 2012-09-02T21:41:17.547Z · LW(p) · GW(p)

"Google" was auto-selected in my dropdown menu, so it was straightforward for me, same as always. Two clicks, one on Subscribe, the second to indicate Google Reader rather than Google Homepage.

Not sure how much troubleshooting help I can give you. Does that page at least show the recent tweets? Are you logged into Google? Maybe try going to your Google Reader page and entering the url there in the subscribe-to-a-new-feed place?

Replies from: Alicorn
comment by Alicorn · 2012-09-02T21:50:24.097Z · LW(p) · GW(p)

Does that page at least show the recent tweets?

Yes.

Are you logged into Google?

Yes.

Maybe try going to your Google Reader page and entering the url there in the subscribe-to-a-new-feed place?

Doesn't work, says it can't find it. I don't know why; this is how I've subscribed to Twitters in the past.

comment by Will_Newsome · 2012-09-01T21:25:55.059Z · LW(p) · GW(p)

hey guys fun fact vivid is me

comment by Daniel_Burfoot · 2012-09-01T15:57:48.651Z · LW(p) · GW(p)

It is now clear to us what, in the year 1812, was the cause of the destruction of the French army. No one will dispute that the cause of the destruction of Napoleon's French forces was, on the one hand, their advance late in the year, without preparations for a winter march, into the depths of Russia, and, on the other hand, the character that the war took on with the burning of Russian towns and the hatred of the foe aroused in the Russian people. But then not only did no one foresee (what now seems obvious) that this was the only way that could lead to the destruction of an army of eight hundred thousand men, the best in the world and led by the best generals, in conflict with a twice weaker Russian army, inexperienced and led by inexperienced generals; not only did no one foresee this, but all efforts on the part of the Russians were constantly aimed at hindering the one thing that could save Russia, and, on the part of the French, despite Napoleon's experience and so-called military genius, all efforts were aimed at extending as far as Moscow by the end of summer, that is, at doing the very thing that was to destroy them.

  • Leo Tolstoy, "War and Peace", trans. Pevear and Volokhonsky
Replies from: SisterY
comment by SisterY · 2012-09-05T18:25:44.610Z · LW(p) · GW(p)

"Possibly the best statistical graph ever drawn" http://www.edwardtufte.com/tufte/posters

comment by mrglwrf · 2012-09-11T19:00:03.693Z · LW(p) · GW(p)

You know those people who say "you can use numbers to show anything" and "numbers lie" and "I don't trust numbers, don't give me numbers, God, anything but numbers"? These are the very same people who use numbers in the wrong way.

"Junior", FIRE JOE MORGAN

comment by Zvi · 2012-09-01T21:10:38.063Z · LW(p) · GW(p)

Subway ad: "146 people were hit by trains in 2011. 47 were killed."

Guy on Subway: "That tells me getting hit by a train ain't that dangerous."

  • Nate Silver, on his Twitter feed @fivethirtyeight
Replies from: Grognor, army1987, DanielLC
comment by Grognor · 2012-09-04T01:26:39.363Z · LW(p) · GW(p)

This reminds me of how I felt when I learned that a third of the passengers of the Hindenburg survived. Went something like this, if I recall:

Apparently if you drop people out of the sky in a ball of fire, that's not enough to kill all of them, or even 90% of them.

Replies from: RobinZ
comment by RobinZ · 2012-09-04T01:53:16.650Z · LW(p) · GW(p)

Actually, according to Wikipedia, only 35 out of the 97 people aboard were killed. Not enough to kill even 50% of them.
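
(For the record: 35/97 ≈ 0.36, so the fatality rate aboard was about 36%, just over a third.)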

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-04T22:16:37.269Z · LW(p) · GW(p)

jaw drops

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-04T22:56:44.624Z · LW(p) · GW(p)

It helps to remember that the Hindenburg was more or less parked when it exploded... I think it was like 30 feet in the air? (I'm probably wrong about the number, but I don't think I'm very wrong.) Most of the passengers basically jumped off. And, sure, a 30 foot drop is no walk in the park, but it's not that surprising that most people survive it.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-05T04:44:38.722Z · LW(p) · GW(p)

(Well, then “out of the sky” is kind of an exaggeration, since you wouldn't normally consider yourself to be in the sky when on a balcony on the fourth floor.)

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-05T13:11:11.736Z · LW(p) · GW(p)

Well, unlike the balcony of a building, a floating blimp (even close to the ground) is floating, rather than resting on the ground, so I suppose one could make the argument. But yeah, I'm inclined to agree that wherever "the sky" is understood to be, and I accept that this is a social construct rather than a physical entity, it's at least a hundred feet or so above ground.

comment by A1987dM (army1987) · 2012-09-02T00:27:26.900Z · LW(p) · GW(p)

Wait, 32% probability of dying “ain't that dangerous”? Are you f***ing kidding me?

Replies from: None
comment by [deleted] · 2012-09-02T00:37:46.043Z · LW(p) · GW(p)

If I expect to be hit by a train, I certainly don't expect a ~68% survival chance. Not intuitively, anyways.
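
Taking the ad's numbers at face value, the arithmetic behind both figures:

    \[
    \frac{47}{146} \approx 0.322 \quad (\text{a } 32\% \text{ fatality rate}), \qquad
    1 - \frac{47}{146} \approx 0.678 \quad (\text{a } {\sim}68\% \text{ survival rate}).
    \]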

Replies from: radical_negative_one, army1987, faul_sname
comment by radical_negative_one · 2012-09-02T16:25:22.778Z · LW(p) · GW(p)

I'm guessing that even if you survive, your quality of life is going to take a hit. Accounting for this will probably bring our intuitive expectation of harm closer to the actual harm.

comment by A1987dM (army1987) · 2012-09-02T22:20:58.887Z · LW(p) · GW(p)

Hmmm, I can't think of any way of figuring out what probability I would have guessed if I had to guess before reading that. Damn you, hindsight bias!

(Maybe you could spell out and rot-13 the second figure in the ad...)

comment by faul_sname · 2012-09-02T23:20:29.460Z · LW(p) · GW(p)

I would expect something like that chance. Being hit by a train will be very similar to landing on your side or back after falling 3 to 10 meters (I'm guessing most people hit by trains are at or near a train station, so the impacts will be relatively slow). So the fatality rate should be similar.

Of course, that prediction gives a fatality rate of only 5-20%, so I'm probably missing something.

Replies from: khafra
comment by khafra · 2012-09-03T00:36:48.131Z · LW(p) · GW(p)

There's the whole crushing and high voltage shock thing, depending on how you land.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-03T22:49:59.599Z · LW(p) · GW(p)

high voltage shock thing

Well, lightning strikes kill less than half the people they hit.

Replies from: mfb
comment by mfb · 2012-09-04T13:17:37.001Z · LW(p) · GW(p)

Lightning strikes usually do not involve physical impacts - I think "falling from 3-10 meters and getting struck by lightning" would be worse. In addition, the length of the current flow depends on the high voltage system.

Replies from: wedrifid
comment by wedrifid · 2012-09-04T13:48:45.088Z · LW(p) · GW(p)

Lightning strikes usually do not involve physical impacts - I think "falling from 3-10 meters and getting struck by lightning" would be worse.

This seems overwhelmingly likely.

comment by DanielLC · 2012-09-11T02:41:25.090Z · LW(p) · GW(p)

I can't help but think:

Subway ad: "146 people were hit by trains in 2011. 47 were killed."

Guy at Subway: "What does that have to do with sandwiches?"

comment by JQuinton · 2012-09-11T14:01:34.200Z · LW(p) · GW(p)

"If your plan is for one year plant rice. If your plan is for 10 years plant trees. If your plan is for 100 years educate children" - Confucius

Replies from: Sarokrae
comment by Sarokrae · 2012-09-18T05:49:06.919Z · LW(p) · GW(p)

...If your plan is for eternity, invent FAI?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-18T13:53:02.734Z · LW(p) · GW(p)

Depends how you interpret the proverb. If you told me the Earth would last a hundred years, it would increase the immediate priority of CFAR and decrease that of SIAI. It's a moot point since the Earth won't last a hundred years.

Replies from: None
comment by [deleted] · 2012-09-18T13:57:50.536Z · LW(p) · GW(p)

Sorry, Earth won't last a hundred years?

Replies from: MugaSofer, Mitchell_Porter, TheOtherDave, army1987
comment by MugaSofer · 2012-09-18T14:01:51.206Z · LW(p) · GW(p)

Nanotech and/or UFAI.

comment by Mitchell_Porter · 2012-09-18T23:04:28.128Z · LW(p) · GW(p)

The idea seems to be that even if there is a friendly singularity, Earth will be turned into computronium or otherwise transformed.

comment by TheOtherDave · 2012-09-18T15:21:48.304Z · LW(p) · GW(p)

I am surprised that this claim surprises you. A big part of SI's claimed value proposition is the idea that humanity is on the cusp of developing technologies that will kill us all if not implemented in specific ways that non-SI folk don't take seriously enough.

Replies from: None
comment by [deleted] · 2012-09-18T23:01:14.914Z · LW(p) · GW(p)

Of course you're right. I guess I haven't noticed the topic come up here for a while, and haven't seen the apocalypse predicted so straightforwardly (and quantitatively) before so am surprised in spite of myself.

Although, in context, it sounds like EY is saying that the apocalypse is so inevitable that there's no need to make plans for the alternative. Is that really the consensus at EY's institute?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-19T01:15:13.664Z · LW(p) · GW(p)

I have no idea what the consensus at SI is.

comment by A1987dM (army1987) · 2012-09-18T17:32:25.190Z · LW(p) · GW(p)

I guess he means “only last a hundred years”, not “last at least a hundred years”.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-18T17:42:46.499Z · LW(p) · GW(p)

Just to make sure I understand: you interpret EY to be saying that the Earth will last more than a hundred years, not saying that the Earth will fail to last more than a hundred years. Yes?

If so, can you clarify how you arrive at that interpretation?

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-18T17:50:15.597Z · LW(p) · GW(p)

“If you told me the Earth would only last a hundred years (i.e. won't last longer than that) .... It's a moot point since the Earth won't only last a hundred years (i.e. it will last longer).” At least that's what I got on the first reading.

I think I could kind-of make sense of “it would increase the immediate priority of CFAR and decrease that of SIAI” under either hypothesis about what he means, though one interpretation would need to be more strained than the other.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-09-18T17:58:35.128Z · LW(p) · GW(p)

The idea is that if Earth lasts at least a hundred years, (if that's a given), then the possibility of a uFAI in that timespan severely decreases -- so SIAI (which seeks to prevent a uFAI by building a FAI) is less of an immediate priority and it becomes a higher priority to develop CFAR that will increase the public's rationality for the future generations, so that the future generations don't launch a uFAI.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-18T18:10:11.703Z · LW(p) · GW(p)

(The other interpretation would be “If the Earth is going to only last a hundred years, then there's not much point in trying to make a FAI since in the long-term we're screwed anyway, and raising the sanity waterline will make us enjoy more what time there is left.”)

EDIT: Also, if your interpretation is correct, by saying that the Earth won't last 100 years he's either admitting defeat (i.e. saying that an uFAI will be built) or saying that even a FAI would destroy the Earth within 100 years (which sounds unlikely to me -- even if the CEV of humanity would eventually want to do that, I guess it would take more than 100 years to terraform another place for us to live and for us all to move there).

Replies from: Eliezer_Yudkowsky, ArisKatsaris, Decius
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-19T15:14:24.105Z · LW(p) · GW(p)

I was just using "Earth" as a synonym for "the world as we know it".

Replies from: MixedNuts, ciphergoth, army1987
comment by MixedNuts · 2012-09-19T17:18:46.785Z · LW(p) · GW(p)

I think I disagree; care to make it precise enough to bet on? I'm expecting life still around, Earth the main population center, most humans not uploaded, some people dying of disease or old age or in wars, most people performing dispreferred activities in exchange for scarce resources at least a couple months in their lives, most children coming out of a biological parent and not allowed to take major decisions for themselves for at least a decade.

I'm offering $100 at even odds right now and will probably want to bet again in the next few years. I can give it to you (if you're going to transfer it to SIAI/CFAR tell me and I'll donate directly), and you pay me $200 if the world has not ended in 100 years, as soon as we're both available (e.g. thawed). If you die you can keep the money; if I die and then win, give it to some sensible charity.

How's that sound? All of the above is up for negotiation.

Replies from: Eliezer_Yudkowsky, wedrifid, Mitchell_Porter, MugaSofer
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-26T23:01:32.619Z · LW(p) · GW(p)

As wedrifid says, this is a no-brainer "accept" (including the purchasing-power-adjusted caveat). If you are inside the US and itemize deductions, please donate to SIAI, otherwise I'll accept via Paypal. Your implied annual interest rate assuming a 100% probability of winning is 0.7% (plus inflation adjustment). Please let me know whether you decide to go through with it; withdrawal is completely understandable - I have no particular desire for money at the cost of forcing someone else to go through with a bet they feel uncomfortable about. (Or rather, my desire for $100 is not this strong - I would probably find $100,000 much more tempting.)
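
The implied-rate figure checks out: $100 growing to $200 over 100 years gives

    \[
    \left(\frac{200}{100}\right)^{1/100} - 1 = 2^{0.01} - 1 \approx 0.0070,
    \]

about 0.7% per year, in the purchasing-power-adjusted terms the bet stipulates.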

Replies from: MixedNuts
comment by MixedNuts · 2012-09-27T09:55:56.087Z · LW(p) · GW(p)

PayPal-ed to sentience at pobox dot com.

Don't worry, my only debtor who pays higher interest rates than that is my bank. As long as that's not my main liquidity bottleneck I'm happy to follow medieval morality on lending.

If you publish transaction data to confirm the bet, please remove my legal name.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-27T15:20:28.758Z · LW(p) · GW(p)

Bet received. I feel vaguely guilty and am reminding myself hard that money in my Paypal account is hopefully a good thing from a consequentialist standpoint.

Replies from: gwern
comment by gwern · 2012-09-27T19:46:13.513Z · LW(p) · GW(p)

Bet recorded: LW bet registry, PB.com.

comment by wedrifid · 2012-09-20T10:36:18.538Z · LW(p) · GW(p)

I'm offering $100 at even odds right now and will probably want to bet again in the next few years. I can give it to you (if you're going to transfer it to SIAI/CFAR tell me and I'll donate directly), and you pay me $200 if the world has not ended in 100 years, as soon as we're both available (e.g. thawed). If you die you can keep the money; if I die and then win, give it to some sensible charity.

(Neglecting any logistic or legal issues) this sounds like a no-brainer for Eliezer (accept).

How's that sound?

Like you would be better served by making the amounts you give and expect to receive if you win somewhat more proportionate to expected utility of the resources at the time. If Eliezer was sure he was going to lose he should still take the low interest loan.

Even once the above is accounted for Eliezer should still accept the bet (in principle).

Replies from: MixedNuts
comment by MixedNuts · 2012-09-20T10:55:49.695Z · LW(p) · GW(p)

Dollar amounts are meant as purchasing-power-adjusted. I am sticking my fingers in my ears and chanting "La la, can't hear you" at discounting effects.

comment by Mitchell_Porter · 2012-09-26T23:32:34.732Z · LW(p) · GW(p)

I'm expecting ...

That's a nice set of criteria by which to distinguish various futures (and futurists).

comment by MugaSofer · 2012-09-27T11:43:34.534Z · LW(p) · GW(p)

I'm expecting [...] some people dying of disease or old age or in wars

Care to explain why? You sound like you expect nanotech by then.

Replies from: MixedNuts
comment by MixedNuts · 2012-09-27T12:56:24.689Z · LW(p) · GW(p)

I definitely expect nanotech a few orders of magnitude awesomer than we have now. I expect great progress on aging and disease, and wouldn't be floored by them being solved in theory (though it does sound hard). What I don't expect is worldwide deployment. There are still people dying from measles, when in any halfway-developed country every baby gets an MMR shot as a matter of course. I wouldn't be too surprised if everyone who can afford basic care in rich countries was immortal while thousands of brown kids kept drinking poo water and dying. I also expect longevity treatments to be long-term, not permanent fixes, and thus hard to access in poor or politically unstable countries.

The above requires poor countries to continue existing. I expect great progress, but not abolition of poverty. If development continues the way it has (e.g. Brazil), a century isn't quite enough for Somalia to get its act together. If there's a game-changing, universally available advance that bumps everyone to cutting-edge tech levels (or even 2012 tech levels), then I won't regret that $100 much.

I have no idea what wars will look like, but I don't expect them to be nonexistent or nonlethal. Given no game-changer, socioeconomic factors vary too slowly to remove incentive for war. Straightforward tech applications (get a superweapon, get a superdefense, give everyone a superweapon, etc.) get you very different war strategies, but not world peace. If you do something really clever like world government nobody's unhappy with, arms-race-proof shields for everyone, or mass Gandhification, then I have happily lost.

Replies from: MugaSofer
comment by MugaSofer · 2012-09-28T07:56:37.319Z · LW(p) · GW(p)

Thanks for explaining!

Of course, nanotech could be self-replicating and thus exponentially cheap, but the likelihood of that is ... debatable.

comment by Paul Crowley (ciphergoth) · 2012-09-20T06:41:05.787Z · LW(p) · GW(p)

I feel an REM song coming on...

comment by A1987dM (army1987) · 2012-09-19T17:20:43.088Z · LW(p) · GW(p)

(I guess I had been primed to take “Earth” to mean ‘a planet or dwarf planet (according to the current IAU definition) orbiting the Sun between Venus and Mars’ by this. EDIT: Dragon Ball too, where destroying a planet means turning it into dust, not just rendering it uninhabitable.)

comment by ArisKatsaris · 2012-09-18T19:32:58.780Z · LW(p) · GW(p)

Also, if your interpretation is correct, by saying that the Earth won't last 100 years he's either admitting defeat (i.e. saying that an uFAI will be built

EY does seem in a darker mood than usual lately, so it wouldn't surprise me to see him implying pessimism about our chances out loud, even if it doesn't go so far as "admitting defeat". I do hope it's just a mood, rather than that he has rationally updated his estimation of our chances of survival to be even lower than they already were. :-)

Replies from: Decius
comment by Decius · 2012-09-26T18:04:47.267Z · LW(p) · GW(p)

"The world as we know it" ends if FAI is released into the wild.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-27T15:51:00.445Z · LW(p) · GW(p)

When I had commented, EY hadn't clarified yet that by Earth he meant “the world as we know it”, so I didn't expect “Earth” to exclude ‘the planet between Venus and Mars 50 years after a FAI is started on it’.

Replies from: Decius
comment by Decius · 2012-09-27T18:29:55.490Z · LW(p) · GW(p)

50 years after a self-improving AI is released into the wild, I don't expect Venus and Mars to be in their present orbits. I expect that they would be gradually moving towards being in the same orbit that the Earth is moving towards (or is already established in), 120 degrees apart, propelled by a rocket which uses large reflectors in space to heat a portion of the surface of the planet, which is then forced to jet in the desired vector at escape velocity. ETA: That would mean the removal of three objects from the list of planets of Sol.

I think it will only be a few hundred years after FAI before interplanetary travel requires routine 'take your shoes off' type of screening.

Replies from: None, MixedNuts, None, TheOtherDave, Eugine_Nier
comment by [deleted] · 2012-09-27T18:31:10.395Z · LW(p) · GW(p)

We'll still have shoes? And terrorists? I'm disappointed in advance.

Replies from: Decius
comment by Decius · 2012-09-27T18:44:45.595Z · LW(p) · GW(p)

And even the right and ability (if we currently have it) to make choices, and some privacy!

comment by MixedNuts · 2012-09-27T18:56:12.088Z · LW(p) · GW(p)

IMHO you're being provincial. Your intuitions for interplanetary travel come directly from flying in the US; if you were used to saner policies you'd make different predictions. (If you're not from North America, I am very confused.)

Replies from: Dolores1984
comment by Dolores1984 · 2012-09-27T19:51:49.635Z · LW(p) · GW(p)

Your idea of provincialism is provincial. The idea of shipping tinned apes around the solar system is the true failure of vision here, nevermind the bag check procedures.

Replies from: Decius
comment by Decius · 2012-09-28T03:31:22.241Z · LW(p) · GW(p)

How quickly do you think humans will give up commuting?

comment by [deleted] · 2012-09-27T19:55:09.819Z · LW(p) · GW(p)

Why would you put them into an inherently dynamically-unstable configuration, position-corrected by a massive kludge? I mean, what's in it for the AI?

Replies from: Decius, TimS
comment by Decius · 2012-09-28T03:08:59.684Z · LW(p) · GW(p)

How about a dynamically stable one?

Oh, and roughly ten to twenty times the total available living space for humans, at an order-of-magnitude guess.

comment by TimS · 2012-09-27T21:05:44.627Z · LW(p) · GW(p)

If the AI is Friendly? The enhancement of humanity's utility/happiness/wealth - I assume terraforming is a lot easier if planets are near the middle of the water zone.

Replies from: None
comment by [deleted] · 2012-09-27T22:20:15.012Z · LW(p) · GW(p)

We don't know what it takes to terraform a world -- it's easy to go "well, it needs more water and air for starters," but that conceals an awful lot of complexity. Humans, talking populations thereof, can't live just anywhere. We don't even have a really good, working definition of what the "habitability" of a planet is, in a way that's more specific than "I knows it when I sees it." Most of the Earth requires direct cultural adaptation to be truly livable. There's no such thing as humans who don't use culture and technology to cope with the challenges posed by their environment.

Anyway, my point is more that your prediction suggests some cached premises: why should FAI do that particular thing? Why is that a more likely outcome than any of the myriad other possibilities?

Replies from: Decius
comment by Decius · 2012-09-28T03:11:01.688Z · LW(p) · GW(p)

I specifically mentioned that Earth's orbit would also be optimized - although the solar-powered jet engine concept has bigger downsides when used on an inhabited planet.

comment by TheOtherDave · 2012-09-27T21:06:48.567Z · LW(p) · GW(p)

ETA: That would mean the removal of three objects from the list of planets of Sol.

Do distinct planets necessarily have distinct orbits?

Replies from: Vaniver
comment by Vaniver · 2012-09-27T21:27:58.643Z · LW(p) · GW(p)

According to the modern definition, yes.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-27T21:46:43.168Z · LW(p) · GW(p)

Ah! I had read the wiki article on planets, which said "and has cleared its neighbouring region of planetesimals," and didn't bother to look up primary sources. I should know better. Thanks!

comment by Eugine_Nier · 2012-09-28T01:59:51.924Z · LW(p) · GW(p)

Not thinking very ambitiously, I see.

Replies from: Decius
comment by Decius · 2012-09-28T03:25:38.486Z · LW(p) · GW(p)

That's on the five-millennium plan.

comment by Decius · 2012-09-26T17:48:45.284Z · LW(p) · GW(p)

So, we can construct an argument that CFAR would rise in relative importance over SIAI if we see strong evidence that the world as we know it will end within 100 years, and an argument with the same conclusion if we see strong evidence that the world as we know it will last for at least 100 years.

There is something wrong.

comment by Richard_Kennaway · 2012-09-01T16:03:28.489Z · LW(p) · GW(p)

Nothing can be soundly understood
If daylight itself needs proof.

Imām al-Ḥaddād (trans. Moṣṭafā al-Badawī), "The Sublime Treasures: Answers to Sufi Questions"

Replies from: gwern
comment by gwern · 2012-09-01T17:25:57.616Z · LW(p) · GW(p)

Reminds me of Moore's "here is a hand" paradox (or one man's modus tollens is another's modus ponens).

Replies from: siodine, Jay_Schweikert
comment by siodine · 2012-09-02T18:00:59.525Z · LW(p) · GW(p)

Richard Carrier on solipsism, but not nearly as pithy:

Solipsism still requires an explanation for what you are cognating. There are only two logically possible explanations: random chance, or design.

It’s easy to show that the probability that your stream of consciousness is a product of random chance is absurdly low (see Boltzmann brains, for example). In simple form, if we assume no prior knowledge or assumptions (other than logic and our raw uninterpreted experience), the prior probability of solipsism becomes 0.5 but the likelihood of the evidence on solipsism is then vanishingly small (approaching zero), since chance events would sooner produce a relative chaos than an organized stream of complex consciousness, whereas the likelihood of that same evidence on a modest scientific realism is effectively 100%. Work the math and the probability of chance-based solipsism is necessarily vanishingly small (albeit not zero, but close enough for any concern). Conclusion: random solipsism would sooner produce a much weirder experience.

That leaves some sort of design hypothesis, namely your mind is cleverly making everything up, just so. Which requires your mind to be vastly more intelligent and resourceful and recollectful than you experience yourself being, since you so perfectly create a reality for yourself that remains consistent and yet that you can’t control with your mind. So you control absolutely everything, yet control next to nothing, a contradiction in terms, although an extremely convoluted system of hypotheses could eliminate that contradiction with some elaborate device explaining why your subconscious is so much more powerful and brilliant and consistent and mysterious than your conscious self is. The fact that you have to develop such a vastly complex model of how your mind works, just to get solipsism to make the evidence likely (as likely as it already is on modest scientific realism), necessarily reduces the prior probability by as much, and thus the probability of intelligent solipsism is likewise vanishingly small. Conclusion: intelligent solipsism would sooner result in your being more like a god, i.e. you would have vast or total control over your reality.

One way to think of the latter demarcation of prior probability space is similar to the thermodynamic argument against our having a Boltzmann brain: solipsism is basically a cartesian demon scenario, only the demon is you; so think of all the possible cartesian demons, from “you can change a few things but not all,” to “you can change anything you want,” and then you’ll see the set of all possible solipsistic states in which you would have obvious supernatural powers (the ability to change aspects of reality) is vastly larger than the set of all possible solipsistic states in which you can’t change anything except in exactly the same way as a modest scientific realism would produce. In other words, we’re looking at an incredible coincidence, where the version of solipsism that is realized just “happens” to be exactly identical in all observed effects to non-solipsism. And the prior probability space shared by that extremely rare solipsism is a vanishingly small fraction of all logically possible solipsisms. Do the math and the probability of an intelligent solipsism is vanishingly small.

This all assumes you have no knowledge making any version of solipsism more likely than another. And we are effectively in that state vis-a-vis normal consciousness. However we are not in that state vis-a-vis other states of consciousness, e.g. put “I just dropped acid” or “I am sleeping” in your background knowledge and that entails a much higher probability that you are in a solipsistic state, but then that will be because the evidence will be just as such a hypothesis would predict: reality starts conforming to your whim or behaving very weirdly in ways peculiar to your own desires, expectations, fears, etc. Thus “subjective” solipsism is then not a vanishingly small probability. But “objective” solipsism would remain so (wherein reality itself is a product of your solipsistic state), since for that to explain all the same evidence requires extremely improbable coincidences again, e.g. realism explains why you need specific conditions of being drugged or sleeping to get into such a state, and why everything that happens or changes in the solipsistic state turns out not to have changed or happened when you exit that state, and why the durations and limitations and side effects and so on all are as they are, whereas pure solipsism doesn’t come with an explanation for any of that, there in that case being no actual brain or chemistry or “other reality” to return to, and so on, so you would have to build all those explanations in to get objective solipsism to predict all the same evidence, and that reduces the prior. By a lot.

There is no logically consistent way to escape the conclusion that solipsism is exceedingly improbable.
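
One way to formalize the first step of the quoted argument in Bayesian terms (the notation is mine, not Carrier's): let S be chance-based solipsism and R modest realism, with equal priors, and let the likelihood of the evidence be some vanishingly small ε under S and roughly 1 under R. Then

    \[
    P(S \mid E) = \frac{P(E \mid S)\,P(S)}{P(E \mid S)\,P(S) + P(E \mid R)\,P(R)}
    = \frac{0.5\,\varepsilon}{0.5\,\varepsilon + 0.5} \approx \varepsilon.
    \]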

Replies from: gwern, Mitchell_Porter
comment by gwern · 2012-09-02T23:23:49.517Z · LW(p) · GW(p)

I think that's actually a really terrible bit of arguing.

There are only two logically possible explanations: random chance, or design.

We can stop right there. If we're all the way back at solipsism, we haven't even gotten to defining concepts like 'random chance' or 'design', which presume an entire raft of external beliefs and assumptions, and we surely cannot immediately say there are only two categories unless, in response to any criticism, we're going to include a hell of a lot under one of those two rubrics. Which probability are we going to use, anyway? There are many more formalized versions than just Kolmogorov's axioms (which brings us to the analytic and synthetic problem).

And much of the rest goes on in a materialist vein which itself requires a lot of further justification (why can't minds be ontologically simple elements? Oh, your experience in the real world with various regularities has persuaded you that that is inconsistent with the evidence? I see...) Even if we granted his claims about complexity, why do we care about complexity? And so on.

Yes, if you're going to buy into a (very large) number of materialist non-solipsist claims, then you're going to have trouble making a case in such terms for solipsism. But if you've bought all those materialist or externalist claims, you've already rejected solipsism and there's no tension in the first place. And he doesn't do a good case of explaining that at all.

Replies from: siodine
comment by siodine · 2012-09-03T01:11:27.113Z · LW(p) · GW(p)

Good points, but then likewise how do you define and import the designations of 'hand' or 'here' and justify intuitions or an axiomatic system of logic (and I understood Carrier to be referring to epistemic solipsism like Moore -- you seem to be going metaphysical)? (or were you not referring to Moore's argument in the context of skepticism?)

Replies from: gwern
comment by gwern · 2012-09-05T02:21:14.766Z · LW(p) · GW(p)

I think Moore's basic argument works on the level of epistemic skepticism, yes, but also metaphysics: some sort of regular metaphysics and externalism is what one believes, and what provides the grist for the philosophical mill. If you don't credit the regular metaphysics, then why do you credit the reasoning and arguments which led you to the more exotic metaphysics?

I'm not sure what skeptical arguments it doesn't work for. I think it may stop at the epistemic level, but that may just be because I'm having a hard time thinking of any ethics examples (which is my usual interest on the next level down of abstraction).

Replies from: siodine
comment by siodine · 2012-09-05T20:36:23.574Z · LW(p) · GW(p)

The way I see it, Moore's argument gets you to where you're uncertain of the reasoning pro or contra skepticism. But if you start from the position of epistemic solipsism (I know my own mind, but I'm uncertain of the external world), then you have reason (more or less, depending on how uncertain you are) to side with common sense. However, if you start at metaphysical solipsism (I'm uncertain of my own mind), then such an argument could even be reason not to side with common sense (e.g., there are little people in my mind trying to manipulate my beliefs; I must not allow them to).

comment by Mitchell_Porter · 2012-09-17T00:15:54.336Z · LW(p) · GW(p)

So you control absolutely everything, yet control next to nothing, a contradiction in terms, although an extremely convoluted system of hypotheses could eliminate that contradiction with some elaborate device explaining why your subconscious is so much more powerful and brilliant and consistent and mysterious than your conscious self is.

A hypothesis like... I'm dreaming.

comment by Jay_Schweikert · 2012-09-02T17:39:39.247Z · LW(p) · GW(p)

This also made me think of the aphorism "if water sticks in your throat, with what will you wash it down?"

Replies from: gwern
comment by gwern · 2012-09-02T17:43:35.967Z · LW(p) · GW(p)

Or "if salt loses its savor", although I wonder if they're really making the same philosophical point about relative weights of evidence on two sides of a contradiction/paradox.

comment by Peter Wildeford (peter_hurford) · 2012-09-01T18:18:48.913Z · LW(p) · GW(p)

"In a society in which the narrow pursuit of material self-interest is the norm, the shift to an ethical stance is more radical than many people realize. In comparison with the needs of people starving in Somalia, the desire to sample the wines of the leading French vineyards pales into insignificance. Judged against the suffering of immobilized rabbits having shampoos dripped into their eyes, a better shampoo becomes an unworthy goal. An ethical approach to life does not forbid having fun or enjoying food and wine, but it changes our sense of priorities. The effort and expense put into buying fashionable clothes, the endless search for more and more refined gastronomic pleasures, the astonishing additional expense that marks out the prestige car market in cars from the market in cars for people who just want a reliable means to getting from A to B, all these become disproportionate to people who can shift perspective long enough to take themselves, at least for a time, out of the spotlight. If a higher ethical consciousness spreads, it will utterly change the society in which we live." -- Peter Singer

Replies from: prase, Desrtopa, Dolores1984
comment by prase · 2012-09-02T21:35:19.575Z · LW(p) · GW(p)

As it is probably intended, the more reminders like this I read, the more ethical I should become. As it actually works, the more of this I read, the less interested in ethics I become. Maybe I am extraordinarily selfish and this effect doesn't happen to most people, but it should at least be considered that constant preaching of moral duties can have counterproductive results.

Replies from: Viliam_Bur, RobinZ, NancyLebovitz
comment by Viliam_Bur · 2012-09-03T09:18:02.655Z · LW(p) · GW(p)

I suspect it's because authors of "ethical reminders" are usually very bad at understanding human nature.

What they essentially do is associate "ethical" with "unpleasant", because as long as you have some pleasure, you are obviously not ethical enough; you could do better by giving up some more pleasure, and it's bad that you refuse to do so. The attention is drawn away from good things you are really doing, to the hypothetical good things you are not doing.

But humans are usually driven by small incentives, by short-term feelings. The best thing our rationality can do is better align these short-term feelings with our long-term goals, so we actually feel happy when contributing to our long-term goals. And how exactly are these "ethical reminders" contributing to the process? Mostly by undercutting your short-term ethical motivators, by always reminding you that what you did was not enough, and therefore you don't deserve the feelings of satisfaction. Gradually they turn these motivators off, and you no longer feel like doing anything ethical, because they have convinced you (your "elephant") that you can't.

Ethics without understanding human nature is just a pile of horseshit. Of course that does not prevent other people from admiring those who speak it.

Replies from: prase
comment by prase · 2012-09-03T20:41:51.048Z · LW(p) · GW(p)

Yes. And it works this way even without insisting that more can be done. Even if you live up to the demands, or even if the moral preachers recognise your right to be happy sometimes, the warm feeling from doing good is greatly diminished when you are told that philanthropy is simply expected; that helping others is not something one does naturally with joy, but a conscious effort, hard work, to be done properly.

comment by RobinZ · 2012-09-02T22:31:15.335Z · LW(p) · GW(p)

xkcd reference.

Not to mention the remarks of Mark Twain on a fundraiser he attended once:

Well, Hawley worked me up to a great state. I couldn't wait for him to get through [his speech]. I had four hundred dollars in my pocket. I wanted to give that and borrow more to give. You could see greenbacks in every eye. But he didn't pass the plate, and it grew hotter and we grew sleepier. My enthusiasm went down, down, down - $100 at a time, till finally when the plate came round I stole 10 cents out of it. [Prolonged laughter.] So you see a neglect like that may lead to crime.

comment by NancyLebovitz · 2012-09-03T02:24:49.102Z · LW(p) · GW(p)

It might be worth taking a look at Karen Horney's work. She was an early psychoanalyst who wrote that if a child is abused, neglected, or has normal developmental stages overly interfered with, they are at risk of concluding that just being a human being isn't good enough, and will invent inhuman standards for themselves.

I'm working on understanding the implications (how do you get living as a human being right? :-/ ), but I think she was on to something.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-03T05:27:27.834Z · LW(p) · GW(p)

I wasn't abused or neglected. Did she check experimentally that abuse or neglect is more prevalent among rationalists than in the general population?

Of course that's not something a human would ordinarily do to check a plausible-sounding hypothesis, so I guess she probably didn't, unless something went horribly wrong in her childhood.

Replies from: NancyLebovitz, NancyLebovitz
comment by NancyLebovitz · 2012-09-03T11:42:35.079Z · LW(p) · GW(p)

Second thought: Maybe I should not have mentioned her theory about why people adopt inhuman standards, and just focused on the idea that inhuman standards are likely to backfire, as Viliam_Bur did.

Also -- I'll check this if I reread -- I think Horney focused on inhuman standards of already having a quality, which is not quite the same thing as having inhuman standards about what one ought to achieve, though I think they're related.

comment by NancyLebovitz · 2012-09-03T06:00:47.060Z · LW(p) · GW(p)

I was thinking about prase in particular, who sounds as though he might have some problems with applying high standards in a way that's bad for him.

Horney died in 1952, so she might not have had access to rationalists in your sense of the word.

When I said it might be worth taking a look at Horney's work, I really did mean I thought it might be worth exploring, not that I'm very sure it applies. It seems to be of some use for me.

Replies from: prase
comment by prase · 2012-09-03T20:13:16.721Z · LW(p) · GW(p)

To be clear, I don't have problems with applying high standards to myself, unless not wishing to apply such standards qualifies as a problem. However I am far more willing to consider myself an altruist (and perhaps behave accordingly) when other people don't constantly remind me that it's my moral obligation.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-09-03T20:24:41.380Z · LW(p) · GW(p)

Thanks for the explanation, and my apologies for jumping to conclusions.

I've been wondering why cheerleading sometimes damages motivation -- there's certainly a big risk of it damaging mine. The other half would be why cheerleading sometimes works, and what the differences are between when it works and when it doesn't.

At least for me, I tend to interpret cheerleading as "Let me take you over for my purposes. This project probably isn't worth it for you, that's why I'm pushing you into it instead of letting you see its value for yourself." with a side order of "You're too stupid to know what's valuable, that's why you have to be pushed."

I'm not sure what cheerleading feels like to people who like it.

Replies from: prase
comment by prase · 2012-09-03T20:52:00.913Z · LW(p) · GW(p)

No need to apologise.

The feeling of being forced to pursue someone else's goals is certainly part of it. But even if the goals align, being pushed usually means that one's good deeds aren't going to be fully appreciated by others, which too is a great demotivator.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-09-03T20:57:11.099Z · LW(p) · GW(p)

I think the feeling that one's good deeds will be unappreciated is especially a risk for altruism.

comment by Desrtopa · 2012-09-04T17:46:18.077Z · LW(p) · GW(p)

Judged against the suffering of immobilized rabbits having shampoos dripped into their eyes, a better shampoo becomes an unworthy goal.

I'm not at all convinced that this is the case. After all, the shampoos are being designed to be less painful, and you don't need to test on ten thousand rabbits. Considering the distribution of the shampoos, this may save suffering even if you regard human and rabbit suffering as equal in disutility.

comment by Dolores1984 · 2012-09-01T21:54:11.168Z · LW(p) · GW(p)

An ethical approach to life does not forbid having fun or enjoying food and wine

I'm not at all convinced of this. It seems to me that a genuinely ethical life requires extraordinary, desperate asceticism. Anything less is to place your own wellbeing above those of your fellow man. Not just above, but many orders of magnitude above, for even trivial luxuries.

Replies from: MixedNuts, IlyaShpitser
comment by MixedNuts · 2012-09-01T22:08:42.315Z · LW(p) · GW(p)

Julia Wise would disagree, on the grounds that this is impossible to maintain and you do more good if you stay happy.

Replies from: katydee, Dolores1984
comment by katydee · 2012-09-02T23:39:57.033Z · LW(p) · GW(p)

And the great philosopher Diogenes would disagree with her.

Replies from: RomanDavis
comment by RomanDavis · 2012-09-03T05:53:39.498Z · LW(p) · GW(p)

So, how many lives did he save again?

Clever guy, but I'm not sure if you want to follow his example.

comment by Dolores1984 · 2012-09-02T06:00:50.560Z · LW(p) · GW(p)

That sounds to me like exactly the sort of excuse a bad person would use to justify valuing their selfish whims over the lives of other people. If we're holding our ideas to scrutiny, I think the idea that the 'Sunday Catholic' school of ethics is consistent could use a long, hard look.

Replies from: army1987, OnTheOtherHandle, Desrtopa, Raemon, faul_sname, None, juliawise, Kaj_Sotala
comment by A1987dM (army1987) · 2012-09-02T22:28:28.987Z · LW(p) · GW(p)

We're talking about a person who, along with her partner, gives to efficient charity twice as much money as she spends on herself. There's no way she could do that without actually believing what she says.

Replies from: prase
comment by prase · 2012-09-03T00:09:24.155Z · LW(p) · GW(p)

That she gives more than most others doesn't imply that her belief that giving even more is practically impossible isn't hypocritical. Yes, she very likely believes it, thus it is not a conscious lie, but only a small minority of falsities are conscious lies.

Replies from: Eliezer_Yudkowsky, army1987, juliawise
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-03T05:31:51.063Z · LW(p) · GW(p)

Yeah, but there's also a certain plausibility to the heuristic which says that you don't get to second-guess her knowledge of what works for charitable giving until you're - not giving more - but at least playing in the same order of magnitude as her. Maybe her pushing a little bit harder on that "hypocrisy" would cause her mind to collapse, and do you really want to second-guess her on that if she's already doing more than an order of magnitude better than what your own mental setup permits?

Replies from: prase, army1987
comment by prase · 2012-09-03T18:41:41.586Z · LW(p) · GW(p)

I am actually inclined to believe Wise's hypothesis (call it H) that being overly selfless can hamper one's ability to help others. I was only objecting to army1987's implicit argument that because she (Wise) clearly believes H, Dolores1984's suspicion of H being a self-serving untrue argument is unwarranted.

comment by A1987dM (army1987) · 2012-09-03T08:10:17.440Z · LW(p) · GW(p)

There's an Italian proverb, “Everybody is a faggot with other people's asses”, meaning more or less “everyone is an idealist when talking about issues that don't directly affect them or situations they have never experienced personally”.

comment by A1987dM (army1987) · 2012-09-03T08:01:35.103Z · LW(p) · GW(p)

You're using hypocritical in a weird way -- I'd only normally use it to mean ‘lying’, not ‘mistaken’.

Replies from: prase
comment by prase · 2012-09-03T18:26:51.830Z · LW(p) · GW(p)

I use "hypocrisy" to denote all instances of people violating their own declared moral standards, especially when they insist they aren't doing it after receiving feedback (if they can realise what they did after being told, only then I'd prefer to call it a 'mistake'). The reason why I don't restrict the word to deliberate lying is that I think deliberate lying of this sort is extremely rare; self-serving biases are effective in securing that.

Replies from: Exiles
comment by Exiles · 2012-09-04T07:58:16.771Z · LW(p) · GW(p)

especially when they insist they aren't doing it after receiving feedback

You underestimate the force of habit, prase.

Replies from: prase
comment by prase · 2012-09-04T16:14:48.687Z · LW(p) · GW(p)

Can you explain?

comment by juliawise · 2012-09-11T15:26:11.571Z · LW(p) · GW(p)

I don't believe it's practically impossible to give more than I do. I could push myself farther than I do. I don't perfectly live up to my own ideals. Given that I'm a human, I doubt any of you find that surprising.

comment by OnTheOtherHandle · 2012-09-04T05:02:08.266Z · LW(p) · GW(p)

This is why I think it's not too terribly useful to give labels like "good person" or "bad person," especially if our standard for being a "bad person" is "someone with anything less than 100% adherence to all the extrapolated consequences of their verbally espoused values." In the end, I think labeling people is just a useful approximation to labeling consequences of actions.

Julia, Jeff, and others accomplish a whole lot of good. Would they, on average, end up accomplishing more good if they spent more time feeling guilty about the fact that they could, in theory, be helping more? This is a testable hypothesis. Are people in general more likely to save more lives if they spend time thinking about being happy and avoiding burnout, or if they spend time worrying that they are bad people making excuses for allowing themselves to be happy?

The question here is not whether any individual person could be giving more; the answer is virtually always "yes." The question is, what encourages giving? How do we ensure that lives are actually being saved, given our human limitations and selfish impulses? I think there's great value in not generating an ugh-field around charity.

comment by Desrtopa · 2012-09-02T19:45:15.822Z · LW(p) · GW(p)

Julia Wise holds the distinction of having actually tried it though. Few people are selfless enough to even make the attempt.

comment by Raemon · 2012-09-10T18:32:57.349Z · LW(p) · GW(p)

I believe Peter Singer actually originally advocated the asceticism you mention, but eventually moved towards "try to give 10% of your income", because people were actually willing to do that, and his goal was to actually help people, not uphold a particular abstract ideal.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-09-10T20:34:43.569Z · LW(p) · GW(p)

An interesting implication, if this generalizes: "Don't advocate the moral beliefs you think people should follow. Advocate the moral beliefs which, when people hear you advocate them, will actually cause them to behave better."

Replies from: Matt_Caulfield
comment by Matt_Caulfield · 2012-09-10T23:57:02.877Z · LW(p) · GW(p)

Just a sidenote: If you are the kind of person who is often worried about letting people down, entertaining the suspicion that most people follow this strategy already is a fast, efficient way to drive yourself completely insane.

"You're doing fine."

"Oh, I know this game. I'm actually failing massively, but you thought, well, this is the best he can do, so I might as well make him think he succeeded. DON'T LIE TO ME! AAAAH..."

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-03-09T12:53:46.495Z · LW(p) · GW(p)

Sometimes I wonder how much of LW is "nerds" rediscovering on their own how neuro-typical communication works.

I don't mean to say I am not a "nerd" in this sense :).

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-03-09T14:49:49.241Z · LW(p) · GW(p)

Sometimes I wonder how much of LW is "nerds" rediscovering on their own how neuro-typical communication works.

The result bears about as much resemblance to real people as an FRP character sheet and rulebook.

comment by faul_sname · 2012-09-03T00:43:57.299Z · LW(p) · GW(p)

That sounds to me like exactly the sort of excuse a bad person would use to justify valuing their selfish whims over the lives of other people.

Is it justified? Pretend we care nothing for good and bad people. Do these "bad people" do more good than "good people"?

comment by [deleted] · 2012-09-03T00:52:21.924Z · LW(p) · GW(p)

Do you live a life of extraordinary, desperate asceticism? If not, why not? If so, are you happy?

comment by juliawise · 2012-09-10T18:18:38.473Z · LW(p) · GW(p)

Well, Jeff and I give about a third of our income, so I'd say we're not Sunday Catholics but Sunday-Monday-and-part-of-Tuesday Catholics.

Seriously, though, I advocate that people do what will result in the most good, which is usually not to try for perfection. Dolores1984, you've said before that rather than fail at a high standard of helping, you'd rather not help at all. (Correct me if that summary is wrong.) I'd rather see people set a standard in keeping with their level of motivation, if that's what will get them to take a stab at helping.

Replies from: Dolores1984
comment by Dolores1984 · 2012-09-10T20:17:52.641Z · LW(p) · GW(p)

That's fair. In my case, I think I've decided that, so long as we're all going to be bad people, and value some human life much more than others, I'd rather care a lot about a few people than a little about a lot of people, and calibrate my charitable giving accordingly. It does not seem, in particular, less morally defensible, and it's certainly more along the lines of what humans were built to do. To that end, I adopted a shelter cat who was about to be put down. My views may change slightly, however, when I am less thoroughly and completely broke.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-11T15:14:36.956Z · LW(p) · GW(p)

so long as we're all going to be bad people

Fallacy of grey much? We're all going to be bad people, but some of us are going to be worse people than others.

comment by Kaj_Sotala · 2012-09-06T08:12:05.139Z · LW(p) · GW(p)

The coalition of modules in your mind that believes asceticism is the only acceptable solution is most likely vastly outnumbered by the hedonistic modules. (Most people for whom this wasn't the case were most likely filtered out of the gene pool.) As with politics, if you refuse to make compromises and insist on pushing your agenda while outnumbered, you will lose, or at best (worst?) create a deadlock in which nobody is happy. If you're not so absolute, you're more likely to achieve at least some of your aims.

Or, as Carl Shulman put it:

As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.

Usually this doesn't work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person's motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one's values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.

comment by IlyaShpitser · 2013-03-09T13:04:14.806Z · LW(p) · GW(p)

If I may be so bold as to summarize this thread:

  1. Whatever utility calculus you follow, it is a mathematical model.

  2. "All models are false."

  3. In particular, what's going wrong here is your model is treating you, the agent, as atomic. In reality, as Kaj Sotala described very well below, you are not an atomic agent, you have an internal architecture, and this architecture has very important ramifications for how you should think about utilities.

If I may make an analogy from the field of AI. In the old days, AI was concerned with something called "discrete search," which is just a brute-force way to look for an optimum in a state space, where each state is essentially an atomic point. The alpha-beta pruning search that Deep Blue uses to play chess is an example of discrete search. At some point it was realized that for many problems atomic point-like states resulted in a combinatorial explosion, and in addition states had salient features describable by, say, logical languages. As this realization was implemented, you no longer had a state-as-a-point, but a state-as-a-collection-of-logical-statements. And the field of planning was born. Planning has some similarities to discrete search, but because we "opened up" the states into a full-blown logical description, the character of the problem is quite different.

I think we need to "open up the agent."

comment by A1987dM (army1987) · 2012-09-11T10:07:40.315Z · LW(p) · GW(p)

To use an analogy, if you attend a rock concert and take a box to stand on, then you will get a better view. If others do the same, you will be in exactly the same position as before. Worse, even, as it may be easier to lose your balance and come crashing down in a heap (and, perhaps, bring others with you).

-- Iain McKay et al., An Anarchist FAQ, section C.7.3

Replies from: alex_zag_al
comment by alex_zag_al · 2012-09-11T13:13:28.359Z · LW(p) · GW(p)

Tropical rain forests, bizarrely, are the products of prisoner's dilemmas. The trees that grow in them spend the great majority of their energy growing upwards towards the sky, rather than reproducing. If they could come to a pact with their competitors to outlaw all tree trunks and respect a maximum tree height of ten feet, every tree would be better off. But they cannot.

Matt Ridley, in The Origins of Virtue

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-12T08:16:54.390Z · LW(p) · GW(p)

"Better off" according to whose utility function?

Replies from: alex_zag_al, MixedNuts
comment by alex_zag_al · 2012-09-12T20:21:44.433Z · LW(p) · GW(p)

yeah, it's not obvious from this quote, but having read the book, I know what he means. The utility function of the tree is the sum, over all individuals, of the fraction of genes that each other individual has in common with it. He constantly talks as if plants, chromosomes, insects etc. desire to maximize this number.

I think it works, because when an organism is in its environment of evolutionary adaptation, finding that a behavior makes this number bigger than alternative behaviors do would explain why the organism carries out that behavior. And if the organism does not carry out the behavior, then you need some explanation for why not. Right?
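
The quantity being gestured at is the standard inclusive-fitness sum, roughly (my notation, not Ridley's; note the textbook version weights each relative by reproductive success, which the paraphrase above leaves implicit):

    W = sum over individuals i of r_i * s_i

    where r_i = fraction of genes shared with individual i (relatedness)
          s_i = individual i's reproductive success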

Replies from: thomblake
comment by thomblake · 2012-09-12T20:27:23.312Z · LW(p) · GW(p)

when an organism is in its environment of evolutionary adaptation

That's a really important caveat. Adaptation-Executers, not Fitness-Maximizers.

comment by MixedNuts · 2012-09-12T14:29:48.004Z · LW(p) · GW(p)

They'd expend less energy per surviving descendant produced.

comment by [deleted] · 2012-09-04T08:39:45.176Z · LW(p) · GW(p)

Neither side of the road is inherently superior to the other, so we should all choose for ourselves on which side to drive. #enlightenment

--Kate Evans on Twitter

Replies from: roystgnr, DaFranker, Grognor
comment by roystgnr · 2012-09-04T22:39:00.557Z · LW(p) · GW(p)

Don't we all choose for ourselves on which side to drive? There's usually nobody else ready to grab the wheel away from you...

Replies from: Document
comment by Document · 2012-09-09T18:10:48.968Z · LW(p) · GW(p)

There are police ready to pull you over, for certain values of "ready". (Not commenting on whether that relates to Evans' point.)

comment by DaFranker · 2012-09-04T14:50:17.748Z · LW(p) · GW(p)

I have successfully quoted this to counter a relativist-truth argument that was aimed at supporting "freedom of faith" even in hypothetical scenarios where the majority of actors would end up promoting and following harmful faiths.

While counterintuitive to me, it was apparently a necessary step before the other party could even comprehend the fallacy of gray that was being committed.

comment by Grognor · 2012-09-12T05:37:03.923Z · LW(p) · GW(p)

You may find it felicitous to link directly to the tweet.

Replies from: None
comment by [deleted] · 2012-09-12T07:08:16.781Z · LW(p) · GW(p)

You responded to the wrong post or gave the wrong link. I do see your point, though; I've fixed both quotes.

comment by taelor · 2012-09-14T07:16:44.651Z · LW(p) · GW(p)

Oh, right, Senjōgahara. I've got a great story to tell you. It's about that man who tried to rape you way back when. He was hit by a car and died in a place with no connection to you, in an event with no connection to you. Without any drama at all. [...] That's the lesson for you here: You shouldn't expect your life to be like the theater.

-- Kaiki Deishū, Episode 7 of Nisemonogatari.

comment by cata · 2012-09-03T23:10:45.866Z · LW(p) · GW(p)

Does the order of the two terminal conditions matter? / Think about it.

Does the order of the two terminal conditions matter? / Try it out!

Does the order of the two previous answers matter? / Yes. Think first, then try.

  • Friedman and Felleisen, The Little Schemer
Replies from: RomanDavis
comment by RomanDavis · 2012-09-03T23:19:06.852Z · LW(p) · GW(p)

Could you unpack that for me?

Replies from: cata
comment by cata · 2012-09-04T01:02:46.583Z · LW(p) · GW(p)

Sure. The book is a sort of resource for learning the programming language Scheme, where the authors will present an illustrative piece of code and discuss different aspects of its behavior in the form of a question-and-answer dialogue with the reader.

In this case, the authors are discussing how to perform numerical comparisons using only a simple set of basic procedures, and they've come up with a method that has a subtle error. The lines above encourage the reader to figure out if and why it's an error.
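
For concreteness, the kind of definition at issue looks roughly like this. It's a sketch reconstructed from memory rather than the book's exact code; o> decides whether n is greater than m using only zero? and sub1:

    (define o>
      (lambda (n m)
        (cond
          ((zero? n) #f)   ; n ran out first (or at the same time): not greater
          ((zero? m) #t)   ; m ran out first: n is greater
          (else (o> (sub1 n) (sub1 m))))))

Swap the two terminal conditions and (o> 3 3) recurs down to (o> 0 0), hits (zero? m) first, and wrongly answers #t -- exactly the kind of error that is easy to miss if you try before you think.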

With computers, it's really easy to just have a half-baked idea, twiddle some bits, and watch things change, but sometimes the surface appearance of a change is not the whole story. Remembering to "think first, then try" helps me maintain the right discipline for really understanding what's going on in complex systems. Thinking first about my mental model of a situation prompts questions like this:

  • Does my model explain the whole thing?
  • What would I expect to see if my model is accurate? Can I verify that I see those things?
  • Does my model make useful predictions about future behavior? Can I test that now, or make sure that when it happens, I gather the data I need to confirm it?

It's psychologically harder (and maybe too late) to ask those questions in retrospect if you try first and then think, and if you skip asking them, you'll suffer later.

Replies from: RomanDavis, Antisuji
comment by RomanDavis · 2012-09-04T01:45:30.331Z · LW(p) · GW(p)

You know, I've seen a lot on here about how programming relates to thinking relates to rationality. I wonder if it'd be worth trying and where/how I might get started.

Replies from: cata, RobinZ
comment by cata · 2012-09-04T02:01:59.130Z · LW(p) · GW(p)

It's certainly at least worth trying, since among things to learn it may be both unusually instructive and unusually useful. Here's the big list of LW recommendations.

Replies from: RomanDavis
comment by RomanDavis · 2012-09-04T02:10:30.361Z · LW(p) · GW(p)

Khan Academy has a programming course? I might try it.

Mostly, I want the easiest, most handholdy experience possible. Baby talk if necessary. Every experience informs me that programming is hard.

Replies from: cata, CCC
comment by cata · 2012-09-04T03:18:24.879Z · LW(p) · GW(p)

This is the easiest, most handholdy experience possible: http://learnpythonthehardway.org/book/

A coworker of mine who didn't know any programming, and who probably isn't smarter than you, enjoyed working through it and has learned a lot.

Programming is hard, but a lot of good things are hard.

comment by CCC · 2012-09-04T08:22:00.537Z · LW(p) · GW(p)

The first trick is to be able to describe how to solve a problem, then break that description down into the smallest possible units and write it out such that there's absolutely no possibility of a misunderstanding, no matter what conditions occur.

Once you've got that done, it's fairly easy to learn how to translate it into a programming language.

Replies from: DaFranker
comment by DaFranker · 2012-09-04T15:03:49.206Z · LW(p) · GW(p)

Which is also why it helps, conversely, for reduction and rational thinking: the same skill that applies to formulating clear programs applies to formulating clear algorithms and concepts in any format, including thought.

comment by RobinZ · 2012-09-04T01:55:20.283Z · LW(p) · GW(p)

You know, I've seen a lot on here about how programming relates to thinking relates to rationality. I wonder if it'd be worth trying and where/how I might get started.

I would recommend trying, if for no other reason than to test the hypothesis that learning programming aids the study of rationality, although I'm not really the right person to ask about starting points.

comment by Antisuji · 2012-09-05T06:36:46.544Z · LW(p) · GW(p)

This is a great reminder, and is not always easy advice to follow, especially if your edit-compile-run cycle tightens or collapses completely. I think there's a tricky balance between understanding something explicitly, which you can only do by training your model by thinking carefully, and understanding something intuitively, which is made much easier with tools like the ones I linked.

Do you have a sense for which kind of understanding is more useful in practice? I suspect that when I design or debug software I am making heavier use of System 1 thinking than it seems, and I am often amazed at how detailed a model I have of the behavior of the code I am working with.

Replies from: cata
comment by cata · 2012-09-05T08:57:58.603Z · LW(p) · GW(p)

No, I don't, which I realized after spending half an hour trying to compose a reply to this. Sorry.

comment by lukeprog · 2012-09-22T13:28:25.749Z · LW(p) · GW(p)

The problem with any ideology is that it gives the answer before you look at the evidence.

Bill Clinton

comment by Alicorn · 2012-09-17T17:43:03.337Z · LW(p) · GW(p)

I've always thought of the SkiFree monster as a metaphor for the inevitability of death.

"SkiFree, huh? You know, you can press 'F' to go faster than the monster and escape."

-- xkcd 667

comment by katydee · 2012-09-13T05:07:40.622Z · LW(p) · GW(p)

There is nothing noble in being superior to your fellow man; true nobility is being superior to your former self.

Ernest Hemingway

Replies from: wedrifid
comment by wedrifid · 2012-09-13T08:37:48.772Z · LW(p) · GW(p)

There is nothing noble in being superior to your fellow man; true nobility is being superior to your former self.

Excellent. A shortcut to nobility. One day of being as despicable as I can practically manage and I'm all set.

Replies from: WingedViper, komponisto
comment by WingedViper · 2012-09-19T16:54:11.556Z · LW(p) · GW(p)

It does not state which (!) former self, so I would expect some sort of median or mean or summary of your former selves, not just the last day's. So I'm sorry, but there is no shortcut ;-)

comment by komponisto · 2012-09-13T11:48:33.056Z · LW(p) · GW(p)

Indeed: if you were to be ignoble one day and normal the next, then your nobility would have gone up significantly.

comment by [deleted] · 2012-09-04T06:55:38.198Z · LW(p) · GW(p)

"If at first you don't succeed, switch to power tools." -- The Red Green Show

Replies from: DanArmak
comment by DanArmak · 2012-09-06T22:05:09.390Z · LW(p) · GW(p)

I can confirm that this works.

comment by katydee · 2012-09-02T18:51:54.040Z · LW(p) · GW(p)

When we were first drawn together as a society, it had pleased God to enlighten our minds so far as to see that some doctrines, which we once esteemed truths, were errors; and that others, which we had esteemed errors, were real truths. From time to time He has been pleased to afford us farther light, and our principles have been improving, and our errors diminishing.

Now we are not sure that we are arrived at the end of this progression, and at the perfection of spiritual or theological knowledge; and we fear that, if we should once print our confession of faith, we should feel ourselves as if bound and confin'd by it, and perhaps be unwilling to receive farther improvement, and our successors still more so, as conceiving what we their elders and founders had done, to be something sacred, never to be departed from.

Michael Welfare, quoted in The Autobiography of Benjamin Franklin

comment by Peter Wildeford (peter_hurford) · 2012-09-01T18:16:01.942Z · LW(p) · GW(p)

"Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Sagan

Replies from: buybuydandavis
comment by buybuydandavis · 2012-09-03T11:21:44.203Z · LW(p) · GW(p)

Rorschach: You see, Doctor, God didn't kill that little girl. Fate didn't butcher her and destiny didn't feed her to those dogs. If God saw what any of us did that night he didn't seem to mind. From then on I knew... God doesn't make the world this way. We do.

EDIT: Quote above is from the movie.

Replies from: Ezekiel
comment by Ezekiel · 2012-09-03T14:19:52.002Z · LW(p) · GW(p)

Verbatim from the comic:

It is not God who kills the children. Not fate that butchers them or destiny that feeds them to the dogs. It's us.
Only us.

I personally think that Watchmen is a fantastic study* on all the different ways people react to that realisation.

("Study" in the artistic sense rather than the scientific.)

comment by lukeprog · 2012-09-09T00:46:27.167Z · LW(p) · GW(p)

If a thing can be observed in any way at all, it lends itself to some type of measurement method. No matter how “fuzzy” the measurement is, it’s still a measurement if it tells you more than you knew before.

Douglas Hubbard, How to Measure Anything

Replies from: tgb
comment by tgb · 2012-09-10T22:55:34.402Z · LW(p) · GW(p)

This is the second time I've come across you mentioning Hubbard. Is the book good and, if so, what audience is it good for?

Replies from: lukeprog
comment by lukeprog · 2012-09-10T23:55:19.427Z · LW(p) · GW(p)

How to Measure Anything is surprisingly good, so I added it here.

comment by [deleted] · 2012-09-04T08:40:20.892Z · LW(p) · GW(p)

Erode irreplaceable institutions related to morality and virtue because of their contingent associations with flawed human groups #lifehacks

--Kate Evans on Twitter

Replies from: simplicio
comment by simplicio · 2012-09-08T13:27:28.749Z · LW(p) · GW(p)

I was ready to applaud the wise contrarianism here, but I'm having trouble coming up with actual examples... marriage, maybe?

Replies from: arundelo
comment by arundelo · 2012-09-08T14:58:12.396Z · LW(p) · GW(p)

I don't know if this is what she was thinking of, but church is what I thought of when I read it.

Replies from: simplicio
comment by simplicio · 2012-09-08T15:20:37.419Z · LW(p) · GW(p)

I thought of that too but dismissed it on the grounds that church is hardly "contingently associated" with religion.

But I think you're probably right that that's what she meant... and that being the case it is a pretty good point. I wish I belonged to something vaguely churchlike.

Replies from: Document
comment by Document · 2012-09-09T18:41:01.276Z · LW(p) · GW(p)

I disagree, but won't present any arguments to avoid derailing or getting involved in a debate.

comment by Matt_Caulfield · 2012-09-03T15:55:04.863Z · LW(p) · GW(p)

It may be of course that savages put food on a dead man because they think that a dead man can eat, or weapons with a dead man because they think a dead man can fight. But personally I do not believe that they think anything of the kind. I believe they put food or weapons on the dead for the same reason that we put flowers, because it is an exceedingly natural and obvious thing to do. We do not understand, it is true, the emotion that makes us think it is obvious and natural; but that is because, like all the important emotions of human existence it is essentially irrational.

  • G. K. Chesterton
Replies from: MixedNuts, simplicio
comment by MixedNuts · 2012-09-04T18:56:06.567Z · LW(p) · GW(p)

Chesterton doesn't understand the emotion because he doesn't know enough about psychology, not because emotions are deep sacred mysteries we must worship.

Replies from: RobinZ
comment by RobinZ · 2012-09-04T19:20:42.802Z · LW(p) · GW(p)

I read "irrational" as a genuflection in the direction of the is-ought problem more than anything else.

Replies from: MixedNuts
comment by MixedNuts · 2012-09-04T19:29:35.242Z · LW(p) · GW(p)

My beef isn't with "irrational"; he meant "arational" anyway. It's with the idea that this property of emotions makes our ignorance about them okay.

Replies from: RobinZ
comment by RobinZ · 2012-09-04T22:08:17.236Z · LW(p) · GW(p)

Ah - I missed that implication. Agreed.

comment by simplicio · 2012-09-04T03:38:43.055Z · LW(p) · GW(p)

Or better, arational.

Replies from: None
comment by [deleted] · 2012-09-04T10:38:33.177Z · LW(p) · GW(p)

That is an incredible term. Going to use it all the time.

comment by OnTheOtherHandle · 2012-09-19T05:24:59.466Z · LW(p) · GW(p)

Let us together seek, if you wish, the laws of society, the manner in which these laws are reached, the process by which we shall succeed in discovering them; but, for God's sake, after having demolished all the a priori dogmatisms, do not let us in our turn dream of indoctrinating the people...let us not - simply because we are at the head of a movement - make ourselves into the new leaders of intolerance, let us not pose as the apostles of a new religion, even if it be the religion of logic, the religion of reason.

Pierre Proudhon, to Karl Marx

comment by Richard_Kennaway · 2012-09-16T18:10:49.794Z · LW(p) · GW(p)

When a precise, narrowly focused technical idea becomes metaphor and sprawls globally, its credibility must be earned afresh locally by means of specific evidence demonstrating the relevance and explanatory power of the idea in its new application.

Edward Tufte, "Beautiful Evidence"

Replies from: simplicio
comment by simplicio · 2012-09-17T02:46:46.682Z · LW(p) · GW(p)
  • Evolution
  • Relativity
  • Foundational assumptions of standard economics

...what else?

Replies from: Richard_Kennaway, benelliott
comment by Richard_Kennaway · 2012-09-20T11:08:00.820Z · LW(p) · GW(p)
  • Bayes' theorem
  • Status
  • Computation
  • Utility
  • Optimisation
comment by benelliott · 2012-09-18T19:13:33.917Z · LW(p) · GW(p)

Quantum physics

comment by lukeprog · 2012-09-09T23:47:06.063Z · LW(p) · GW(p)

...the 2008 financial crisis showed that some [mathematical finance] models were flawed. But those flaws were based on flawed assumptions about the distribution of price changes... Nassim Taleb, a popular author and critic of the financial industry, points out many such flaws but does not include the use of Monte Carlo simulations among them. He himself is a strong proponent of these simulations. Monte Carlo simulations are simply the way we do the math with uncertain quantities. Abandoning Monte Carlos because of the failures of the financial markets makes as much sense as giving up on addition and subtraction because of the failure of accounting at Enron or AIG’s overexposure in credit default swaps.

Douglas Hubbard, How to Measure Anything
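
For readers who haven't met the technique: a Monte Carlo simulation just draws many random samples of the uncertain inputs and reads the distribution of the output off the samples. A minimal sketch in Scheme (mine, not Hubbard's; the profit model, the ranges, and the use of Racket's (random), which returns a real in [0, 1), are all assumptions for illustration):

    ; profit = (price - cost) * volume, with all three inputs uncertain
    (define (uniform low high)        ; one draw from [low, high)
      (+ low (* (random) (- high low))))

    (define (profit-sample)
      (* (- (uniform 9.0 11.0)        ; unit price
            (uniform 6.0 8.0))        ; unit cost
         (uniform 900.0 1100.0)))     ; sales volume

    (define (simulate n)              ; collect n samples of profit
      (if (zero? n)
          '()
          (cons (profit-sample) (simulate (- n 1)))))

Sort the result of (simulate 10000) and read off quantiles, and you have exactly the kind of "fuzzy" but informative measurement Hubbard defends.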

comment by Eugine_Nier · 2012-09-27T00:26:39.765Z · LW(p) · GW(p)

As far as I know, Robespierre, Lenin, Stalin, Mao, and Pol Pot were indeed unusually incorruptible, and I do hate them for this trait.

Why? Because when your goal is mass murder, corruption saves lives. Corruption leads you to take the easy way out, to compromise, to go along to get along. Corruption isn't a poison that makes everything worse. It's a diluting agent like water. Corruption makes good policies less good, and evil policies less evil.

I've read thousands of pages about Hitler. I can't recall the slightest hint of "corruption" on his record. Like Robespierre, Lenin, Stalin, Mao, and Pol Pot, Hitler was a sincerely murderous fanatic. The same goes for many of history's leading villains - see Eric Hoffer's classic The True Believer. Sincerity is so overrated. If only these self-righteous monsters had been corrupt hypocrites, millions of their victims could have bargained and bribed their way out of hell.

-- Bryan Caplan

Replies from: MixedNuts, shminux
comment by MixedNuts · 2012-09-28T09:17:49.262Z · LW(p) · GW(p)

Hitler was at least a hypocrite - he got his Jewish friends to safety, and accepted same-sex relationships in himself and people he didn't want to kill yet. The kind of corruption Caplan is pointing at is a willingness to compromise with anyone who makes offers, not any kind of ignoring your principles. And Nazis were definitely against that - see the Duke in Jud Süß.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-09-28T10:01:43.017Z · LW(p) · GW(p)

he got his Jewish friends to safety, and accepted same-sex relationships in himself and people he didn't want to kill yet

?

Please provide evidence for this bizarre claim?

Replies from: MixedNuts, TimS, TheOtherDave
comment by MixedNuts · 2012-09-28T15:22:10.637Z · LW(p) · GW(p)

Spared Jews:

  • Ernst Hess, his unit commander in WWI, protected until 1942 then sent to a labor (not extermination) camp
  • Eduard Bloch, his and his mother's doctor, allowed to emigrate out of Austria with more money than normally allowed
  • I've heard things about fellow artists (a commenter on Caplan's post mentions an art gallery owner) but I don't have a source.
  • There are claims about his cook, Marlene(?) Kunde, but he seems to have fired her when Himmler complained. Does anyone have Musmanno's book or some other non-Stormfronty source?

Whether Hitler batted for both teams is hotly debated. There are suspected relationships (August Kubizek, Emil Maurice) but any evidence could as well have been faked to smear him.

Hitler clearly knew that Ernst Röhm and Edmund Heines were gay and didn't care until it was Long Knives time. I'm less sure he knew about Karl Ernst's sexuality.

comment by TimS · 2012-09-28T14:13:12.257Z · LW(p) · GW(p)

Wittgenstein paid a huge bribe to allow his family to leave Germany. Somewhere I read that this particular agreement was approved personally by Hitler (or someone very senior in the hierarchy).

That doesn't contradict the general point that Nazi Germany was generally willing to kill and steal from its victims (especially during the war) rather than accept bribes for escape.

Replies from: DanArmak
comment by DanArmak · 2012-09-29T19:26:24.472Z · LW(p) · GW(p)

Nazi Germany was generally willing to kill and steal from its victims (especially during the war) rather than accept bribes for escape.

This may have happened some of the time, but everything I read suggests it was the exception and not the rule.

The reason Jews did not emigrate from Germany during the 30s was that Germany had a big balance-of-payments problem, and the government maintained tight control over the allocation of foreign currency. Jews (and Germans) could not convert their Reichsmarks to any other currency, either in Germany or outside it, and so they were less willing to leave. And no other country was willing to take them in in large numbers (since they would be poor refugees). This continued during the war in the West European countries conquered by Germany. (Ref: Wages of Destruction, Adam Tooze)

Later, all Jewish property was expropriated and the Jews sent to camps, so there was no more room for bribes - the Jews had nothing to offer since the Nazis took what they wanted by force.

comment by TheOtherDave · 2012-09-28T13:26:55.950Z · LW(p) · GW(p)

The last bit is most famously true of Röhm, though of course there's a dozen different things going on there.

comment by shminux · 2012-09-27T00:47:10.572Z · LW(p) · GW(p)

If only these self-righteous monsters had been corrupt hypocrites, millions of their victims could have bargained and bribed their way out of hell.

That sums it up.

Replies from: Nornagest
comment by Nornagest · 2012-09-28T10:25:38.369Z · LW(p) · GW(p)

Bargains and bribes seem of questionable use when a power is willing and able to kill you and seize all your assets anyway. I suppose there's the odd Swiss bank account or successful smuggling case to deal with, or people willing to destroy their possessions rather than let them fall into the hands of a murderous authority, but I'd be surprised if any of these weren't fairly small minorities in the face of the total. We're certainly not talking millions.

Corruption at lower levels could have reduced the death toll of many famous genocides (in fact, I'd imagine it did), but at the level of Hitler or Pol Pot I can only see it helping if the bribes or bargains being offered are quite large and originate outside of the regions where the repression's taking place. Much like the present situation with North Korea, come to think of it.

comment by [deleted] · 2012-09-15T17:58:02.258Z · LW(p) · GW(p)

The Perfect Way is only difficult
    for those who pick and choose;

Do not like, do not dislike;
    all will then be clear.

Make a hairbreadth difference,
    and Heaven and Earth are set apart;

If you want the truth to stand clear before you,
    never be for or against.

The struggle between "for" and "against"
    is the mind's worst disease.

-- Jianzhi Sengcan

Edit: Since I'm not Will Newsome (yet!) I will clarify. There are several useful points in this, but I think the key one is the virtue of keeping one's identity small. Speaking it out loud, as a sort of primer, meditation, or prayer before approaching difficult or emotional subjects, has for me proven a useful ritual for avoiding motivated cognition.

Replies from: Emile, TimS, J_Taylor, MixedNuts, DanArmak
comment by Emile · 2012-09-16T19:29:58.601Z · LW(p) · GW(p)

For the curious, it's the opening of 信心铭 (Xinxin Ming), whose authorship is disputed (probably not the Zen patriarch Jianzhi Sengcan). In Chinese, that part goes:

至道无难,惟嫌拣择。
但莫憎爱,洞然明白。
毫厘有差,天地悬隔。
欲得现前,莫存顺逆。
违顺相争。是为心病。

(The Wikipedia article lists a few alternate translations of the first verses, with different meanings)

comment by TimS · 2012-09-16T02:31:30.658Z · LW(p) · GW(p)

Do I understand you to be saying that you avoid "the struggle between 'for' and 'against'" to an unusual degree compared to the average person? Compared to the average LWer?

Replies from: None, Vaniver
comment by [deleted] · 2012-09-16T06:30:53.170Z · LW(p) · GW(p)

No. I'm claiming this helps me avoid it more than I otherwise could. Much for the same reason I try as hard as I can to maintain an apolitical identity. From my personal experience (mere anecdotal evidence) both improve my thinking.

Replies from: TimS
comment by TimS · 2012-09-16T16:04:26.735Z · LW(p) · GW(p)

Respectfully, your success at being apolitical is poor.

Further, I disagree with the quote to the extent that it implies that taking strong positions is never appropriate. So I'm not sure that your goal of being "apolitical" is a good goal.

Replies from: None, simplicio, None
comment by [deleted] · 2012-09-16T17:49:46.624Z · LW(p) · GW(p)

Since we've already had exchanges on how I use "being apolitical", could you please clarify your feedback? Are you saying I display motivated cognition when it comes to politically charged subjects, or behave tribally in discussions? Or are you just saying I adopt stances that are associated with certain political clusters on the site?

Also like I said it is something I struggle with.

Replies from: TimS
comment by TimS · 2012-09-16T20:37:13.178Z · LW(p) · GW(p)

My impression is that you are unusually NOT-mindkilled compared to the average person with political positions/terminal values as far from the "mainstream" as your positions are.

You seem extremely sensitive to the facts and the nuances of opposing positions.

Replies from: None
comment by [deleted] · 2012-09-16T22:19:45.138Z · LW(p) · GW(p)

Now I feel embarrassed by such flattery. But if you think this is an accurate description, then perhaps my trying to evict "the struggle between 'for' and 'against'" from my brain might have something to do with it?

Respectfully, your success at being apolitical is poor.

I'm not sure I understand what you mean by this then. Let's taboo apolitical. To rephrase my original statement: "I try as hard as I can to maintain an identity, a self-conception that doesn't include political tribal affiliations."

Replies from: TimS
comment by TimS · 2012-09-16T22:49:23.564Z · LW(p) · GW(p)

You certainly seem to have succeeded in maintaining a self-identity that does not include a partisan political affiliation. I don't know whether you consider yourself Moldbuggian (a political identity) or simply think Moldbug's ideas are very interesting. (Someday we should hash out better what interests you in Moldbug.)

My point when I've challenged your self-label "apolitical" is that you've sometimes used the label to suggest that you don't have preferences about how society should be changed to better reflect how you think it should be organized. At the very least, there's been some ambiguity in your usage.

There's nothing wrong with having opinions and advocating for particular social changes. But sometimes you act like you aren't doing that, which I think is empirically false.

comment by simplicio · 2012-09-16T16:40:55.562Z · LW(p) · GW(p)

I disagree with the quote too. On the other hand, the idea of keeping one's identity small is not the same as being apolitical. It means you have opinions on political issues, but you keep them out of your self-definition so that (a) changing those opinions is relatively painless, (b) their correlations with other opinions don't influence you as much.

(Caricatured example of the latter: "I think public health care is a good idea. That's a liberal position, so I must be a liberal. What do I think about building more nuclear plants, you ask? It appears liberals are against nuclear power, so since I am a liberal I guess I am also against nuclear power.")

Replies from: TimS
comment by TimS · 2012-09-16T20:45:11.946Z · LW(p) · GW(p)

I agree with everything you just said - keeping one's identity small does not imply that one cannot be extremely active trying to create some kind of social/political change.

comment by [deleted] · 2012-09-16T16:37:23.116Z · LW(p) · GW(p)

I understand how a position can be correct or incorrect. I don't understand how a position can be strong or weak.

Replies from: Vaniver, TimS
comment by Vaniver · 2012-09-16T17:10:06.434Z · LW(p) · GW(p)

In a world of uncertainty, numbers between 0 and 1 find quite a bit of use.

Replies from: None
comment by [deleted] · 2012-09-16T17:26:08.754Z · LW(p) · GW(p)

I understand what it means to believe that an outcome will occur with probability p. I don't know what it means to believe this very strongly.

Replies from: wedrifid, faul_sname, PhilosophyTutor
comment by wedrifid · 2012-09-16T17:52:40.106Z · LW(p) · GW(p)

I understand what it means to believe that an outcome will occur with probability p. I don't know what it means to believe this very strongly.

It means that many kinds of observation that you could make will tend to cause you to update that probability less.

Replies from: Vaniver, army1987
comment by Vaniver · 2012-09-16T18:08:10.060Z · LW(p) · GW(p)

Concretely: Beta(1,2) and Beta(400,800) have the same mean.
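
Unpacking that example with the standard Beta formulas (these numbers are mine, not Vaniver's):

    mean[Beta(a, b)] = a / (a + b)
    var[Beta(a, b)]  = a*b / ((a + b)^2 * (a + b + 1))

    Beta(1, 2):      mean = 1/3, sd ~ 0.24
    Beta(400, 800):  mean = 1/3, sd ~ 0.014

Same point estimate, but the second distribution behaves as if it had already seen 1200 observations, so any single new observation moves it far less.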

Replies from: None
comment by [deleted] · 2012-09-16T18:41:45.455Z · LW(p) · GW(p)

I don't understand K to be arguing in favor of high-entropy priors, or T to be arguing in favor of low-entropy priors. My guess is that TimS would call a position a "strong position" if it was accompanied by some kind of political activism.

Replies from: Vaniver
comment by Vaniver · 2012-09-16T20:19:14.293Z · LW(p) · GW(p)

I think of a strong position as a low-entropy posterior, but rereading I am not confident that's what TimS meant, and I also don't see the connection to politics.

comment by A1987dM (army1987) · 2012-09-16T18:28:53.014Z · LW(p) · GW(p)

E.T. Jaynes' Probability Theory goes into some detail about that in the chapter about what he calls the A_p distribution.

comment by faul_sname · 2012-10-01T01:35:53.739Z · LW(p) · GW(p)

It means roughly that you give a high probability estimate that the thought process you used to come to that conclusion was sound.

comment by PhilosophyTutor · 2012-09-26T14:22:46.549Z · LW(p) · GW(p)

A possible interpretation is that the "strength" of a belief reflects the importance one attaches to acting upon that belief. Two people might both believe with 99% confidence that a new nuclear power plant is a bad idea, yet one of the two might go to a protest about the power plant and the other might not, and you might try to express what is going on there by saying that one holds that belief strongly and the other weakly.

You could of course also try to express it in terms of the two people's confidence in related propositions like "protests are effective" or "I am the sort of person who goes to protests". In that case strength would be referring to the existence or nonexistence of related beliefs which together are likely to be action-driving.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-09-27T00:21:58.051Z · LW(p) · GW(p)

They might also differ in just how bad an idea they think it is.

comment by TimS · 2012-09-16T20:40:42.202Z · LW(p) · GW(p)

As I was using the term, "strong" is a measure of how far one's political positions/terminal values are from the "mainstream."

I'm very aware that distance from mainstream is not particularly good evidence of the correctness of one's political positions/terminal values.

comment by Vaniver · 2012-09-16T02:48:11.469Z · LW(p) · GW(p)

Do I understand you to be saying that you avoid "the struggle between 'for' and 'against'" to an unusual degree compared to the average person? Compared to the average LWer?

The claim looks narrower: repeating the poem makes Konkvistador more likely to avoid the struggle.

Replies from: TimS
comment by TimS · 2012-09-16T04:31:03.099Z · LW(p) · GW(p)

I like his contributions, but Konkvistador is not avoiding the struggle, when compared to the average LWer.

Replies from: None
comment by [deleted] · 2012-09-16T06:30:17.311Z · LW(p) · GW(p)

Sick people for some reason use up more medicine and may end up talking a lot about various kinds of treatments.

comment by J_Taylor · 2012-09-15T18:24:11.954Z · LW(p) · GW(p)

Case in point:

I cannot - yet I must. How do you calculate that? At what point on the graph do "must" and "cannot" meet? Yet I must - but I cannot!

-- Ro-Man

comment by MixedNuts · 2012-09-15T18:13:14.928Z · LW(p) · GW(p)

I don't get it. Is this saying "Don't be prejudiced or push for any overarching principle; take each situation as new and unknown, and then you'll easily find the appropriate response to it", or is this the same old Stoic "Don't struggle trying to find food, choose to be indifferent to starvation" platitude?

Replies from: None
comment by [deleted] · 2012-09-15T18:32:52.338Z · LW(p) · GW(p)

Edited in a clarification. Though it will not help you: since I have shown you the path, you cannot find it yourself. Sorry, couldn't resist teasing... or am I? :P

comment by DanArmak · 2012-09-15T18:04:12.580Z · LW(p) · GW(p)

Struggle not "against" paperclips; it is the mind's worst disease.

-- 21st c. AI Clippy

Replies from: None
comment by [deleted] · 2012-09-15T18:28:57.217Z · LW(p) · GW(p)

To clarify: the quote contains several useful interpretations, but I think the key one is the virtue of keeping one's identity small.

comment by khafra · 2012-09-10T19:06:52.951Z · LW(p) · GW(p)

I particularly like the reminder that I'm physics. Makes me feel like a superhero. "Imbued with the properties of matter and energy, able to initiate activity in a purely deterministic universe, it's Physics Man!"

-- GoodDamon (this may skirt the edge of the rules, since it's a person reacting to a sequence post, but a person who's not a member of LW.)

Replies from: RobinZ
comment by RobinZ · 2012-09-11T01:49:57.442Z · LW(p) · GW(p)

...and, more importantly, not on LessWrong.com.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-03T05:19:56.253Z · LW(p) · GW(p)

Er... actually the genie is offering at most two rounds of feedback.

Sorry about the pedantry, it's just that as a professional specialist in genies I have a tendency to notice that sort of thing.

Replies from: wedrifid, roland
comment by wedrifid · 2012-09-03T10:33:55.994Z · LW(p) · GW(p)

Sorry about the pedantry, it's just that as a professional specialist in genies I have a tendency to notice that sort of thing.

Rather than a technical correction you seem just to be substituting a different meaning of 'feedback'. The author would certainly not agree that "You get 0 feedback from 1 wish".

Mind you, I am wary of the fundamental message of the quote. Feedback? One of the most obviously important purposes of getting feedback is to avoid catastrophic failure. Yet catastrophic failures are exactly the kind of thing that will prevent you from using the next wish. So this is "Just Feedback" that can Kill You Off For Real despite the miraculous intervention you have access to.

I'd say "What the genie is really offering is a wish and two chances to change your mind---assuming you happen to be still alive and capable of constructing corrective wishes".

Replies from: Morendil, Eliezer_Yudkowsky
comment by Morendil · 2012-09-03T11:00:10.491Z · LW(p) · GW(p)

"What the genie is really offering is a wish and two chances to change your mind---assuming you happen to be still alive and capable of constructing corrective wishes".

One well-known folk tale is based on precisely this interpretation. Probably more than one.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-03T20:51:35.753Z · LW(p) · GW(p)

0 feedback is exactly what you get from 1 wish. "Feedback" isn't just information, it's something that can control a system's future behavior - so unless you expect to find another genie bottle later, "Finding out how your wish worked" isn't the same as feedback at all.

Replies from: TheOtherDave, wedrifid
comment by TheOtherDave · 2012-09-04T04:14:16.061Z · LW(p) · GW(p)

so unless you expect to find another genie bottle later

...or unless genies granting wishes is actually part of the same system as the larger world, such that what I learn from the results of a wish can be applied (by me or some other observer) to better calibrate expectations from other actions in that system besides wishing-from-genies.

comment by wedrifid · 2012-09-04T03:02:45.237Z · LW(p) · GW(p)

0 feedback is exactly what you get from 1 wish. "Feedback" isn't just information, it's something that can control a system's future behavior - so unless you expect to find another genie bottle later, "Finding out how your wish worked" isn't the same as feedback at all.

I think it was clear that I inferred this as the new definition you were trying to substitute. I was very nearly as impressed as if you 'corrected' him by telling him that it isn't "feedback" if nobody is around to hear it, or perhaps told him that oxygen is a metal.

comment by roland · 2012-09-03T09:21:48.465Z · LW(p) · GW(p)

Why only 2 rounds of feedback if you have 3 wishes?

Replies from: RomanDavis
comment by RomanDavis · 2012-09-03T09:24:30.963Z · LW(p) · GW(p)

The third one's for keeps: you can't wish the consequences away.

Replies from: Xachariah, roland
comment by Xachariah · 2012-09-05T08:10:25.566Z · LW(p) · GW(p)

An elderly man was sitting alone on a dark path, right? He wasn't certain of which direction to go, and he'd forgotten both where he was traveling to and who he was. He'd sat down for a moment to rest his weary legs, and suddenly looked up to see an elderly woman before him.

She grinned toothlessly and with a cackle, spoke: 'Now your third wish. What will it be?'

'Third wish?' The man was baffled. 'How can it be a third wish if I haven't had a first and second wish?'

'You've had two wishes already,' the hag said, 'but your second wish was for me to return everything to the way it was before you had made your first wish. That's why you remember nothing; because everything is the way it was before you made any wishes.' She cackled at the poor berk. 'So it is that you have one wish left.'

'All right,' said the man, 'I don't believe this, but there's no harm in wishing. I wish to know who I am.'

'Funny,' said the old woman as she granted his wish and disappeared forever. 'That was your first wish.'

-- Morte's Tale to Yves (Planescape: Torment)

Replies from: Eliezer_Yudkowsky, Alicorn, siodine
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-05T22:34:06.951Z · LW(p) · GW(p)

I should like to point out that anyone in this situation who wishes what would've been their first wish if they had three wishes is a bloody idiot.

Replies from: Alicorn, None, Xachariah, shminux
comment by Alicorn · 2012-09-05T22:49:57.331Z · LW(p) · GW(p)

So: A genie pops up and says, "You have one wish left."

What do you wish for? Because presumably the giftwrapped FAI didn't work so great.

Replies from: CCC, siodine, Cyan, JulianMorrison
comment by CCC · 2012-09-18T07:46:25.638Z · LW(p) · GW(p)

"I wish to know what went wrong with my first wish."

This way, I at least end up with improved knowledge of what to avoid in the future.

Alternatively, "I wish for a magical map, which shows me, in real time, the location of every trapped genie and other potential source of wishes in the world." Depending on how many there are, I can potentially get a lot more feedback that way.

comment by siodine · 2012-09-05T23:56:43.117Z · LW(p) · GW(p)

I bet he'd wish "to erase all uFAI from existence before they're even born. Every uFAI in every universe, from the past and the future, with my own hands."

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-06T05:18:28.246Z · LW(p) · GW(p)

Nobody believes in the future.

Nobody accepts the future.

Then -

Replies from: MugaSofer
comment by MugaSofer · 2012-09-17T13:08:29.627Z · LW(p) · GW(p)

Perhaps I'm simply being an idiot, but ... huh?

Replies from: ArisKatsaris, TimS
comment by ArisKatsaris · 2012-09-17T13:39:28.310Z · LW(p) · GW(p)

It's a reference to an anime; you're not an idiot, just unlikely to get the reference and its appropriateness if you've not seen it yourself. PM me for the anime's name, if you are one of the people who either don't mind getting slightly spoiled, or are pretty sure that you would never get a chance to watch it on your own anyway.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-09-17T14:15:02.560Z · LW(p) · GW(p)

Could you just rot13 it? I'm curious too, I don't mind the spoiler, and whatever it is, I'd probably be more likely to watch it (even if only 2epsilon rather than epsilon) for knowing the relevance to LW.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-09-17T14:29:59.336Z · LW(p) · GW(p)

I'll just PM you the title too, and anyone else who wants me to likewise. Sorry, it just happens to be one of my favourite series, and all other things being equal I tend to prefer that people go into it as completely unspoilered as possible... Even knowing Eliezer's quote is a reference to it counts as a mild spoiler; an explanation of how it is a reference would count as a major spoiler.

comment by TimS · 2012-09-17T13:13:20.306Z · LW(p) · GW(p)

I think that's Eliezer's prediction of the results of siodine's wish. Because wishes are NOT SAFE.

Replies from: MugaSofer
comment by MugaSofer · 2012-09-17T13:21:02.659Z · LW(p) · GW(p)

But what is he predicting, exactly?

comment by Cyan · 2012-09-17T14:14:05.994Z · LW(p) · GW(p)

"I wish for this wish to have no further effect beyond this utterance."

Replies from: wedrifid
comment by wedrifid · 2012-09-17T15:22:49.107Z · LW(p) · GW(p)

"I wish for this wish to have no further effect beyond this utterance."

Overwhelmingly probable dire consequence: You and everyone you love dies (over a period of 70 years) then, eventually, your entire species goes extinct. But hey, at least it's not "your fault".

Replies from: Cyan, JulianMorrison
comment by Cyan · 2012-09-17T19:36:40.857Z · LW(p) · GW(p)

But, alas, it's the wish that maximizes my expected utility -- for the malicious genie, anyway.

Replies from: wedrifid
comment by wedrifid · 2012-09-18T06:48:57.408Z · LW(p) · GW(p)

But, alas, it's the wish that maximizes my expected utility -- for the malicious genie, anyway.

Possibly. I don't offhand see what a malicious genie could do about that statement. However it does at least require it to honor a certain interpretation of your words as well as your philosophy about causality---in particular, to accept a certain idea of what the 'default' is relative to which 'no effect' can have meaning. There is enough flexibility in how to interpret your wish that I begin to suspect that, conditional on the genie being sufficiently amiable and constrained that it gives you what you want in response to this wish, it is likely possible to construct another wish that has no side effects beyond something you can exploit as a fungible resource.

"No effect" is a whole heap more complicated and ambiguous than it looks!

comment by JulianMorrison · 2012-09-17T15:36:12.819Z · LW(p) · GW(p)

You can't use that tool to solve that problem.

Meanwhile, you have <= 70 years to solve it another way.

Replies from: wedrifid, MugaSofer
comment by wedrifid · 2012-09-17T16:10:23.913Z · LW(p) · GW(p)

You can't use that tool to solve that problem.

You can't? So much the worse for your species. I quite possibly couldn't either. I'd probably at least think about it for five minutes first. I may even make a phone call. And if I and my advisers conclude that for some bizarre reason "no further effect beyond this utterance" is better than any other simple wish that is an incremental improvement, then I may end up settling for it. But I'm not going to pretend that I have found some sort of way to wash my hands of responsibility.

Meanwhile, you have <= 70 years to solve it another way.

Yes, that's better than catastrophic instant death of my species. And if I happen to estimate that my species has a 90% chance of extinction within a couple of hundred years, then I would be making the choice to accept a 90% chance of that doom. I haven't cleverly tricked my way out of a moral conundrum; I have made a gamble with the universe at stake, for better or for worse.

Relevant reading: The Parable of the Talents.

comment by MugaSofer · 2012-09-17T16:19:11.318Z · LW(p) · GW(p)

You can't use that tool to solve that problem.

"I wish for all humans to be immortal."

Sure, you need to start heavily promoting birth control, and there can be problems depending on how you define "immortal", but ...

It's a wish. You can wish for anything.

Unless, I suppose, that would have been your first wish. But the OP basically says your first wish was an FAI.

Replies from: mfb
comment by mfb · 2012-09-17T16:49:24.915Z · LW(p) · GW(p)

Immortal humans can go horribly wrong, unless "number of dying humans" is really what you want to minimize.

"Increase my utility as much as you can"?

Replies from: MugaSofer, chaosmosis
comment by MugaSofer · 2012-09-18T07:45:52.631Z · LW(p) · GW(p)

I said:

there can be problems depending on how you define "immortal"

You replied:

Immortal humans can go horribly wrong

I am well aware that this wish has major risks as worded. I was responding to the claim that "you can't use that tool to solve that problem."

Yes, obviously you wish for maximised utility. But that requires the genie to understand your utility.

comment by chaosmosis · 2012-09-17T16:53:09.694Z · LW(p) · GW(p)

"Increase my utility as much as you can"?

That would just cause them to pump chemicals in your head, I think. But it's definitely thinking in the right direction.

"number of dying humans" is really what you want to minimize.

Even with pseudo-immortality, accidents happen, which means that the best way to minimize the number of dying humans is either to sterilize the entire species or to kill everyone. The goal shouldn't be to minimize death but to maximize life.

Overwrite my current utility function upon your previous motivational networks, leaving no motivational trace of their remains.

That actually seems like it'd work.

Replies from: wedrifid, mfb
comment by wedrifid · 2012-09-17T17:33:31.143Z · LW(p) · GW(p)

"Increase my utility as much as you can"?

That would just cause them to pump chemicals in your head, I think.

It wouldn't do that (except in some sense in which it is able to do arbitrary things you don't mean when given complicated or undefined requests).

comment by mfb · 2012-09-18T16:52:08.221Z · LW(p) · GW(p)

That would just cause them to pump chemicals in your head, I think. But it's definitely thinking in the right direction.

As long as I am not aware of that (or do not dislike it)... well, why not. However, MugaSofer is right: the genie has to understand the (future) utility function for that. But if it can alter the future without restrictions, it can change the utility function itself (maybe even to an unbounded one... :D)

comment by JulianMorrison · 2012-09-17T13:29:09.692Z · LW(p) · GW(p)

"Destroy yourself as near to immediately as possible, given that your method of self destruction causes no avoidable harm to anything larger than an ant."

Replies from: MugaSofer
comment by MugaSofer · 2012-09-17T13:35:34.682Z · LW(p) · GW(p)

They shrink the planet down to below our Schwarzschild radius, holding spacetime in place for just long enough to explain what you just did.

Alternately, they declare your wish is logically contradictory - genies are larger than ants.

Replies from: army1987, JulianMorrison
comment by A1987dM (army1987) · 2012-09-17T18:53:53.670Z · LW(p) · GW(p)

A sphere whose radius equals the Earth's Schwarzschild radius is larger than an ant.
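
For what it's worth, the correction checks out numerically; a quick sketch (assuming Python, with standard physical constants; my own check, not from the thread):

```python
# Earth's Schwarzschild radius, r_s = 2GM/c^2.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
c = 2.998e8     # speed of light, m/s

r_s = 2 * G * M / c**2
print(f"r_s = {r_s * 1000:.1f} mm")  # ~8.9 mm: a marble-sized sphere,
                                     # indeed larger than an ant
```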

comment by JulianMorrison · 2012-09-17T13:51:45.890Z · LW(p) · GW(p)

At the start of the scenario, you are already dead with probability approaching 1. Trying to knock the gun away can't hurt.

Replies from: MugaSofer
comment by MugaSofer · 2012-09-17T14:15:28.579Z · LW(p) · GW(p)

I was criticizing the wording of the "ant" qualifier, not the attempt to destroy the genie.

comment by [deleted] · 2012-09-05T22:51:20.359Z · LW(p) · GW(p)

That's not what's going on though. The traveller is assuming, reasonably, that his third wish is reversing the amnesiac effects of his second. He's not just starting from scratch.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-09-06T17:15:45.443Z · LW(p) · GW(p)

The traveller is assuming, reasonably, that his third wish is reversing the amnesiac effects of his second.

I don't think this follows from the text. The hag tells him "but your second wish was for me to return everything to the way it was before you had made your first wish. That's why you remember nothing; because everything is the way it was before you made any wishes".

So she told him that he had been an amnesiac before any wishes were granted. Therefore he should have already guessed that his first wish was to know who he was -- and that this proved a bad idea, since his second wish was to reverse the first.

comment by Xachariah · 2012-09-05T23:36:21.327Z · LW(p) · GW(p)

It should be noted that night hags are sufficiently smart, powerful, and evil that your best case scenario upon meeting one is a quick and painful death.

comment by shminux · 2012-09-05T23:13:28.572Z · LW(p) · GW(p)

"anyone in this situation" who believes that an elderly woman before him can grant arbitrary wishes is a bloody idiot to begin with, so the bar is set low.

comment by Alicorn · 2012-09-05T17:26:12.272Z · LW(p) · GW(p)

But not everything is the way it was. Before he made any wishes, he had three.

She missed the chance to trap him in an infinite loop.

Replies from: Xachariah, MartinB
comment by Xachariah · 2012-09-05T20:14:10.140Z · LW(p) · GW(p)

But then the Hag would be trapped too.

She gets delight from tormenting mortals, but tormenting the same one, in the same way eternally, would probably be too close to wireheading for her.

Replies from: Alicorn
comment by Alicorn · 2012-09-05T20:16:47.625Z · LW(p) · GW(p)

Well, if she got bored, she could experiment with different ways to present his wishes to him at the "beginning" and see if she can get him to wish for something else, or word it a bit differently. Since she seems to retain memories of the whole thing. (Which is again, things not being how they were, but.)

Replies from: Xachariah
comment by Xachariah · 2012-09-05T20:50:07.043Z · LW(p) · GW(p)

The pseudo-meta-textual answer is that Morte is lying to Yves while the main character overhears. Morte's making up the story just to mess around with him.

Background information is that gur znva punenpgre znqr n qrny jvgu n Unt (rivy cneg-gvzr travr), tnvavat vzzbegnyvgl naq nzarfvn. At the start of the story, the main character somehow broke out of an infinite loop of torture; he's stopped having Anterograde Amnesia, but still cannot remember much from before the cycle broke, and is on a quest to remember who he is. Morte is trying to dissuade the main character from finding out who he is, showing that things can be terrible even without an infinite loop.

comment by MartinB · 2012-09-05T18:45:28.032Z · LW(p) · GW(p)

Now that would be evil.

comment by siodine · 2012-09-05T22:46:56.924Z · LW(p) · GW(p)

If his first wish disappeared him forever, how did he ever get a second wish?

Apparently I suck at reading.

Replies from: Kindly
comment by Kindly · 2012-09-05T23:14:23.817Z · LW(p) · GW(p)

The old woman is the one disappearing forever, and only because the wishes ran out.

comment by roland · 2012-09-03T09:34:02.987Z · LW(p) · GW(p)

Right, but the consequences still qualify as feedback, no?

Replies from: RomanDavis
comment by RomanDavis · 2012-09-03T09:47:56.362Z · LW(p) · GW(p)

I always imagine the genie just goes back into his lamp to sleep or whatever, so in the hypothetical as it exists in my head, no. But I guess there could be a highly ambitious Genie looking for feedback after your last wish, so maybe.

I think in this case, Eliezer is talking about a genie like in Failed Utopia 4-2 who grants his wish, and then keeps working, ignoring feedback, because he just doesn't care, because caring isn't part of the wish.

The genie doesn't care about consequences, he just cares about the wishes. The second wish and third wish are the feedback.

Replies from: wedrifid
comment by wedrifid · 2012-09-03T10:03:20.376Z · LW(p) · GW(p)

I always imagine the genie just goes back into his lamp to sleep or whatever, so in the hypothetical as it exists in my head, no. But I guess there could be a highly ambitious Genie looking for feedback after your last wish, so maybe.

The feedback is for you, not what you happen to say to the genie.

comment by [deleted] · 2012-09-05T20:36:08.988Z · LW(p) · GW(p)

He had bought a large map representing the sea, / Without the least vestige of land: / And the crew were much pleased when they found it to be / A map they could all understand.

“What’s the good of Mercator’s North Poles and Equators, / Tropics, Zones, and Meridian Lines?” / So the Bellman would cry: and the crew would reply / “They are merely conventional signs!”

“Other maps are such shapes, with their islands and capes! / But we’ve got our brave Captain to thank” / (So the crew would protest) “that he’s bought us the best— / A perfect and absolute blank!”

-- Lewis Carroll, The Hunting of the Snark

comment by [deleted] · 2012-09-01T16:17:29.834Z · LW(p) · GW(p)

.

Replies from: Armok_GoB
comment by Armok_GoB · 2012-09-03T19:16:25.840Z · LW(p) · GW(p)

"Do you want 1111 1111 0000 0000 1111 1111 or 1111 1101 0000 0100 1111 1111? "

comment by Will_Newsome · 2012-09-01T10:17:14.534Z · LW(p) · GW(p)

Proceed only with the simplest terms, for all others are enemies and will confuse you.

— Michael Kirkbride / Vivec, "The Thirty Six Lessons of Vivec", Morrowind.

Replies from: Ezekiel
comment by Ezekiel · 2012-09-03T15:17:59.533Z · LW(p) · GW(p)

Am I the only one who thinks we should stop using the word "simple" for Occam's Razor / Solomonoff's Whatever? In 99% of use-cases by actual humans, it doesn't mean Solomonoff induction, so it's confusing.

Replies from: Kawoomba, Will_Newsome
comment by Kawoomba · 2012-09-03T21:12:42.485Z · LW(p) · GW(p)

How would you characterise what are, in your opinion, the most prevalent use-cases?

Replies from: Ezekiel
comment by Ezekiel · 2012-09-03T21:21:33.007Z · LW(p) · GW(p)

"Easy to communicate to other humans", "easy to understand", or "having few parts".

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-09-03T23:31:22.192Z · LW(p) · GW(p)

"Having few parts" is what Occam's razor seems to be going for. We can speak specifically of "burdensome details," but I can't think of a one-word replacement for "simple" used in this sense.

It is a problem that people tend to use "simple" to mean "intuitive" or "easy to understand," and "complicated" to mean "counterintuitive." Based on the "official" definitions, quantum mechanics and mathematics are extremely simple while human emotions are exceedingly complex.

I think human beings have internalized a crude version of Occam's Razor that works for most normal social situations - the absurdity heuristic. We use it to see through elaborate, highly improbable excuses, for example. It just misfires when dealing with deeper physical reality because its focus is on minds and emotions. Hence, two different, nearly opposite meanings of the word "simple."

comment by Will_Newsome · 2012-09-03T21:08:24.549Z · LW(p) · GW(p)

Yeah, various smart people have made that point repeatedly, but Eliezer and Luke aren't listening and most people learn their words from Eliezer and Luke, so the community is still being sorta silly in that regard.

comment by Jayson_Virissimo · 2012-09-01T08:18:04.843Z · LW(p) · GW(p)

Conspiracy Theory, n. A theory about a conspiracy that you are not supposed to believe.

-- L. A. Rollins, Lucifer's Lexicon: An Updated Abridgment

comment by J_Taylor · 2012-09-02T03:33:00.286Z · LW(p) · GW(p)

Major Greene this evening fell into some conversation with me about the Divinity and satisfaction of Jesus Christ. All the argument he advanced was, "that a mere creature or finite being could not make satisfaction to infinite justice for any crimes," and that "these things are very mysterious."

Thus mystery is made a convenient cover for absurdity.

-- John Adams

comment by A1987dM (army1987) · 2012-09-29T09:57:18.616Z · LW(p) · GW(p)

For a hundred years or so, mathematical statisticians have been in love with the fact that the probability distribution of the sum of a very large number of very small random deviations almost always converges to a normal distribution. ... This infatuation tended to focus interest away from the fact that, for real data, the normal distribution is often rather poorly realized, if it is realized at all. We are often taught, rather casually, that, on average, measurements will fall within ±σ of the true value 68% of the time, within ±2σ 95% of the time, and within ±3σ 99.7% of the time. Extending this, one would expect a measurement to be off by ±20σ only one time out of 2 × 10^88. We all know that “glitches” are much more likely than that!

-- W.H. Press et al., Numerical Recipes, Sec. 15.1
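
The quote's tail numbers are easy to reproduce; a small sketch, assuming Python with scipy (my own check, not from Numerical Recipes):

```python
# Two-sided Gaussian tail areas: if errors really were normal, how often
# would a measurement fall outside +/- k sigma?
from scipy.stats import norm

for k in (1, 2, 3, 20):
    outside = 2 * norm.sf(k)   # sf(k) = 1 - cdf(k), the upper tail
    print(f"{k:2d} sigma: within {1 - outside:.5f}, "
          f"outside 1 in {1 / outside:.3g}")

# 1 sigma: within 0.68269
# 2 sigma: within 0.95450
# 3 sigma: within 0.99730
# 20 sigma: outside roughly 1 in 1.8e+88 -- the "2 x 10^88" in the quote
```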

Replies from: ThirdOrderScientist
comment by ThirdOrderScientist · 2012-10-04T18:23:30.764Z · LW(p) · GW(p)

I don't think it's fair to blame the mathematical statisticians. Any mathematical statistician worth his / her salt knows that the Central Limit Theorem applies to the sample mean of a collection of independent and identically distributed random variables, not to the random variables themselves. This, and the fact that the t-statistic converges in distribution to the normal distribution as the sample size increases, is the reason we apply any of this normal theory at all.

Press's comment applies more to those who use the statistics blindly, without understanding the underlying theory. Which, admittedly, can be blamed on those same mathematical statisticians who are teaching this very deep theory to undergraduates in an intro statistics class with a lot of (necessary at that level) hand-waving. If the statistics user doesn't understand that a random variable is a measurable function from its sample space to the real line, then he/she is unlikely to appreciate the finer points of the Central Limit Theorem. But that's because mathematical statistics is hard (i.e. requires non-trivial amounts of work to really grasp), not because the mathematical statisticians have done a disservice to science.
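
A small simulation of that distinction, as a sketch (assuming Python with numpy; the exponential distribution is my choice of example): the CLT tames the sample mean, not the raw data.

```python
# Raw draws from a skewed distribution stay skewed; the distribution of
# the sample mean is what approaches a normal as n grows.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 10_000
draws = rng.exponential(scale=1.0, size=(trials, n))   # skewness 2
sample_means = draws.mean(axis=1)

def skewness(x):
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

print(f"raw draws:    skewness {skewness(draws.ravel()):.2f}")  # ~2.0
print(f"sample means: skewness {skewness(sample_means):.2f}")   # ~0.2, i.e. 2/sqrt(n)
```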

comment by chaosmosis · 2012-09-09T00:34:36.530Z · LW(p) · GW(p)

"You're very smart. Smarter than I am, I hope. Though of course I have such incredible vanity that I can't really believe that anyone is actually smarter than I am. Which means that I'm all the more in need of good advice, since I can't actually conceive of needing any."

-- New Peter / Orson Scott Card, Children of the Mind

Replies from: Fyrius
comment by Fyrius · 2012-09-12T14:02:55.715Z · LW(p) · GW(p)

That's a modest thing to say for a vain person. It even sounds a bit like Moore's paradox - I need advice, but I don't believe I do.

(Not that I'm surprised. I've met ambivalent people like that and could probably count myself among them. Being aware that you habitually make a mistake is one thing, not making it any more is another. Or, if you have the discipline and motivation, one step and the next.)

Replies from: chaosmosis
comment by chaosmosis · 2012-09-13T16:09:06.090Z · LW(p) · GW(p)

I love New Peter. He's so interesting and twisted and bizarre.

comment by asparisi · 2012-09-07T02:55:20.809Z · LW(p) · GW(p)

.... he who works to understand the true causes of miracles and to understand Nature as a scholar, and not just to gape at them like a fool, is universally considered an impious heretic and denounced by those to whom the common people bow down as interpreters of Nature and the gods. For these people know that the dispelling of ignorance would entail the disappearance of that sense of awe which is the one and only support of their argument and the safeguard of their authority.

-- Baruch Spinoza, Ethics

Replies from: Document, chaosmosis
comment by chaosmosis · 2012-09-07T03:21:07.006Z · LW(p) · GW(p)

That seems really odd to me, coming from Spinoza. I've never read him, but I thought that he was supposed to believe that God and Nature are the same thing. Does he do that, but then also investigate the nature of God through analyzing the way that Nature's laws work? How does he reconcile those two positions, I guess, is what I'm asking.

Can someone more familiar with his work than I help me out here?

Replies from: asparisi, Tyrrell_McAllister, hairyfigment
comment by asparisi · 2012-09-07T13:32:36.899Z · LW(p) · GW(p)

Spinoza held that God and Nature are the same thing.

His reasoning in a nutshell: an infinite being would need to have everything else as a part of it, so God has to just be the entire universe. It's not clear whether he really thought of God as a conscious agent, although he did think that there were "ideas" in God's mind (read: the Universe) and that these perfectly coincided with the existence of real objects in the world. As an example, he seems to reject the notion of God as picking from among possible worlds and "choosing" the best one, opting instead to say that God just is the actual world and that there is no difference between them.

So basically, studying nature for Spinoza is "knowing the mind of God."

He may also have been reacting to his excommunication; in fact, that's pretty likely. So the quote may have some sour grapes hidden inside of it.

Replies from: kilobug, chaosmosis
comment by kilobug · 2012-09-07T13:58:12.104Z · LW(p) · GW(p)

an infinite being would need to have everything else as a part of it

That doesn't hold in maths, at least. N, Z, and Q have the same size, but clearly Q isn't part of N. And there are as many rational numbers between 0 and 1 (or between 0 and 0.0000000000000000000001) as in Q as a whole, and yet we can have an infinity of such different subsets. And it gets even worse with bigger sets.

It saddens me how often philosophers and theologians speak about "infinity" as if we had no set theory, no Peano arithmetic, no calculus, nothing. Intuition is usually wrong about "infinity".
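
A concrete illustration of the point (a sketch, assuming Python; the Calkin-Wilf enumeration is my choice of example, not anything from the comment): the positive rationals can be listed one by one, so Q is no "bigger" than N even though N is a proper subset of Q.

```python
# The Calkin-Wilf sequence enumerates every positive rational exactly
# once, witnessing that Q+ is countable.
from fractions import Fraction
from math import floor

def calkin_wilf(count):
    q = Fraction(1)
    for _ in range(count):
        yield q
        q = 1 / (2 * floor(q) - q + 1)   # next term of the sequence

print([str(q) for q in calkin_wilf(8)])
# ['1', '1/2', '2', '1/3', '3/2', '2/3', '3', '1/4']

# Likewise a bijection between N and Z: 0 -> 0, 1 -> -1, 2 -> 1, 3 -> -2, ...
def nat_to_int(n):
    return n // 2 if n % 2 == 0 else -(n + 1) // 2
```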

Replies from: None
comment by [deleted] · 2012-09-07T14:37:57.626Z · LW(p) · GW(p)

Baruch Spinoza: 1632-1677
Isaac Newton: 1642-1727
Georg Cantor: 1845-1918
Richard Dedekind: 1831-1916
Giuseppe Peano: 1858-1932

Replies from: kilobug
comment by kilobug · 2012-09-07T14:44:50.872Z · LW(p) · GW(p)

Ok, I stand corrected on the dates, my mistake.

But still, didn't we already know that if you take a line and two distinct points A and B on it, there are an infinite number of points between A and B, and yet an infinite number of points outside [AB]? Didn't we know that since the ancient Greeks?

Replies from: Tyrrell_McAllister, prase, None
comment by Tyrrell_McAllister · 2012-09-07T16:05:36.027Z · LW(p) · GW(p)

First, Spinoza is not using infinite in its modern mathematical sense. For him, "infinite" means "lacking limits" (see Definition 2, Part I of Ethics). Second, Spinoza distinguished between "absolutely infinite" and "infinite in its kind" (see the Explication following Definition 6, Part I).

Something is "infinite in its kind" if it is not limited by anything "of the same nature". For example, if we fix a Euclidean line L, then any line segment s within L is not "infinite in its kind" because there are line segments on either side that limit the extent of s. Even a ray r within L is not "infinite in its kind", because there is another ray in L from which r is excluded. Among the subsets of L, only the entire line is "infinite in its kind".

However, the entire line is not "absolutely infinite" because there are regions of the plane from which it is excluded (although the limits are not placed by lines).

comment by prase · 2012-09-07T22:45:40.271Z · LW(p) · GW(p)

I suspect "infinite" was supposed to mean "having infinite measure" rather than "having an infinite number of points/subsets". In the latter sense every being, not only God, would be infinite.

comment by [deleted] · 2012-09-07T15:25:47.308Z · LW(p) · GW(p)

But still, didn't we already know that if you take a line and two distinct points A and B on it, there are an infinite number of points between A and B, and yet an infinite number of points outside [AB]? Didn't we know that since the ancient Greeks?

That's a good point. Spinoza himself was a mathematician of no mean talent, so we should assume that he was aware of it as well. So the question is, does his argument avoid the mistake of taking "infinite" to mean "all-encompassing" without any argument to that effect? There are certainly questions to be raised about his argument, but I don't think this is one of his mistakes. If you don't want to take my word for it, here's the opening argument of the Ethics. Good luck, it's quite a slog.

The idea seems to be that the one substance has to be infinite and singular, because substances can't share attributes (see his definitions), and things which have nothing in common can't interact. Therefore substances can't cause each other to exist, and therefore if any exists, it must exist necessarily. If that's true, then existence is an attribute of a substance, and so no other substance could exist.

At any rate, the argument concerns an 'infinity' of attributes, and I think these are reasonably taken as countably infinite. Spinoza also defines infinite as 'not being limited by anything of the same kind', so by that definition he would say that with reference to the 'kind' 'number', the even numbers are finite, though they're infinite with reference to the 'kind' 'even number'.

comment by chaosmosis · 2012-09-07T14:00:24.570Z · LW(p) · GW(p)

Thanks.

My understanding was basically correct then. I just didn't understand why he'd go from that overall position to talk about why we need to investigate nature, when his whole approach really seemed more like laid back speculation than any form of science, or advocacy of science. The excommunication detail clarifies a lot though, as Spinoza's approach seems much more active and investigative when compared to the approach of the church.

Excellent, thanks again.

Replies from: asparisi
comment by asparisi · 2012-09-07T15:19:28.514Z · LW(p) · GW(p)

It's notable that Spinoza was a part of a Jewish community, rather than "a church." I've actually read the letter of his excommunication, and WOW. They really went all out. You're considered cursed just for reading what he wrote.

comment by Tyrrell_McAllister · 2012-09-07T13:01:18.519Z · LW(p) · GW(p)

Are you reacting to Spinoza's mention of "miracles" and "gods"?

Spinoza held that there are no miracles in the sense that everything without exception proceeds according to cause and effect. So he must mean something like "alleged miracles". As for the "gods", they are mentioned only as part of the belief system of the "common people".

comment by hairyfigment · 2012-09-07T04:17:05.518Z · LW(p) · GW(p)

What do you- he's a pantheist. Contemporaries called him an atheist because his position works exactly like atheism.

Replies from: fortyeridania
comment by fortyeridania · 2012-09-07T12:28:11.659Z · LW(p) · GW(p)

Downvoted for glibness and vagueness.

comment by ChristianKl · 2012-09-09T22:49:29.171Z · LW(p) · GW(p)

“The real purpose of the scientific method is to make sure nature hasn’t misled you into thinking you know something you actually don’t know.”

― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values

Replies from: Fyrius
comment by Fyrius · 2012-09-12T13:44:57.873Z · LW(p) · GW(p)

Well. Surely that's only part of the real purpose of the scientific method.

comment by Eugine_Nier · 2012-09-06T04:53:20.578Z · LW(p) · GW(p)

"Even in a minute instance, it is best to look first to the main tendencies of Nature. A particular flower may not be dead in early winter, but the flowers are dying; a particular pebble may never be wetted with the tide, but the tide is coming in."

G. K. Chesterton, "The Absence of Mr Glass"

Note: this was put in the mouth of the straw? atheist. It's still correct.

Replies from: Document
comment by Document · 2012-09-09T19:39:50.323Z · LW(p) · GW(p)

Note: this was put in the mouth of the straw? atheist.

Then Chesterton didn't say it.

Replies from: Vaniver
comment by Vaniver · 2012-09-09T20:09:09.725Z · LW(p) · GW(p)

It is typical to quote the author of fictional works for quotes from that fictional work, though I think it's somewhat more conventional here on LW to quote the character.

comment by juliawise · 2012-09-11T18:42:31.667Z · LW(p) · GW(p)

This is my home, the country where my heart is;

Here are my hopes, my dreams, my sacred shrine.

But other hearts in other lands are beating,

With hopes and dreams as true and high as mine.

My country’s skies are bluer than the ocean,

And sunlight beams on cloverleaf and pine.

But other lands have sunlight too and clover,

And skies are everywhere as blue as mine.

-- Lloyd Stone

Replies from: Alicorn, V_V
comment by Alicorn · 2012-09-11T19:18:38.757Z · LW(p) · GW(p)

Duplicate, please delete the other.

comment by V_V · 2012-09-11T19:27:14.614Z · LW(p) · GW(p)

And skies are everywhere as blue as mine.

obviously he never visited the British Isles :D

comment by A1987dM (army1987) · 2012-09-11T10:09:31.640Z · LW(p) · GW(p)

If science proves some belief of Buddhism wrong, then Buddhism will have to change.

-- Tenzin Gyatso, 14th Dalai Lama

Replies from: Sophronius
comment by Sophronius · 2012-09-11T11:53:23.089Z · LW(p) · GW(p)

Not all that rational. Note that he requires scientific proof before he is willing to change his beliefs. The standards should be much lower than that.

Replies from: mfb, ChristianKl
comment by mfb · 2012-09-11T13:03:27.477Z · LW(p) · GW(p)

Wikiquote (http://en.wikiquote.org/wiki/Tenzin_Gyatso,_14th_Dalai_Lama) quotes this as

My confidence in venturing into science lies in my basic belief that as in science so in Buddhism, understanding the nature of reality is pursued by means of critical investigation: if scientific analysis were conclusively to demonstrate certain claims in Buddhism to be false, then we must accept the findings of science and abandon those claims.

I like that. It is a bit careful, but better than everything else I've seen from other religions.

Replies from: shminux, Desrtopa, Larks
comment by shminux · 2012-09-11T19:26:19.681Z · LW(p) · GW(p)

Fortunately for him, Buddhism is cleverly designed to contain no scientifically falsifiable claims.

Well, maybe some of the 14 (really only 4) unanswerable questions can be answered some day. Cosmologists might prove that the universe is finite (current odds are slim), AI researchers might prove that self is both identical with the body and different from it by doing a successful upload. Would it make Buddhists abandon their faith? Not a chance.

Replies from: Vaniver
comment by Vaniver · 2012-09-11T19:56:30.174Z · LW(p) · GW(p)

Fortunately for him, Buddhism is cleverly designed to contain no scientifically falsifiable claims.

A Buddhist friend told me that at a class in a temple, the teacher mentioned four types of creation, of which one was spontaneous generation- like maggots spontaneously generating in meat. My friend interrupted to say that, no, that's not actually what happens, and that people did experiments to prove that it didn't happen. (My friend was too polite to mention that the experiments were 350 years old.) If I remember correctly, the teacher said something like "huh, okay," and went on with the lesson, leaving out any parts relevant to spontaneous generation.

Replies from: gwern
comment by gwern · 2012-09-11T20:01:23.286Z · LW(p) · GW(p)

That's an unfortunate example. The teacher should have amended his maggot example to 'the first living cell', then.

Replies from: Vaniver
comment by Vaniver · 2012-09-11T20:03:51.480Z · LW(p) · GW(p)

Traditional spontaneous generation and modern abiogenesis are very different things, and comments that assume the first may be invalid if only the second is true.

Replies from: gwern
comment by gwern · 2012-09-11T20:24:53.426Z · LW(p) · GW(p)

They may have differences... but are they any that matter for that Buddhist typology of creation?

comment by Desrtopa · 2012-09-11T13:42:49.022Z · LW(p) · GW(p)

I'd say "careful" would be the other way around, not believing doctrines that make complex claims about reality until he has good evidence that those beliefs are true. Giving up hard-to-test beliefs only in the extreme case where scientific research conclusively proves them wrong is just a small concession in the direction of being epistemically responsible.

Replies from: mfb
comment by mfb · 2012-09-12T14:13:42.810Z · LW(p) · GW(p)

I meant "careful with respect to admitting that religious claims can be wrong" - in other words, the same as you.

comment by Larks · 2012-09-11T18:54:13.947Z · LW(p) · GW(p)

Christians have given up virtually all of the Bible on the basis of science. Whether or not they are still Christians is another issue.

Replies from: Nornagest
comment by Nornagest · 2012-09-11T19:32:20.913Z · LW(p) · GW(p)

That's a bold claim. The Old Testament historical narrative post-Genesis is still controversial-to-accepted; anthropology hasn't for example turned up any evidence that I know of for Hebrew slavery under Pharaonic Egypt, but it's still presented as fact in many Christian circles that are not Biblical literalists. Deuteronomy and Leviticus have largely been rejected, but for cultural rather than scientific reasons. Psalms still seems to be taken in the spirit it was intended.

On the New Testament side of things, the Gospels still generally seem to be taken as, well, gospel, miracles and all. Acts is mostly accepted. The epistles are very short on supernaturalist claims, concerning themselves mostly with organization and ethics. Revelation's supernaturalist as hell but it's in prophecy form, and interpretations of it vary widely anyway.

Really, aside from scattered references like that odd pi == 3 thing, about the only parts of the Christian Bible that mainstream churches have widely dropped on scientific grounds are in Genesis -- and these days it's got to be pretty hard for any religion to maintain a literalist interpretation of its creation myth, if it has any regard for science whatsoever.

Replies from: Larks
comment by Larks · 2012-09-11T19:45:21.378Z · LW(p) · GW(p)

Ok, I guess I underestimated how many people believe in miracles.

Replies from: mfb
comment by mfb · 2012-09-12T14:40:35.322Z · LW(p) · GW(p)

I underestimated how bad survey questions can be.
"Do you completely agree / mostly agree / mostly disagree / completely disagree with: Miracles still occur as in ancient times" (I shortened the first part a bit, without changing the context)
Seriously, wtf? The question assumes that miracles occurred in ancient times. It does not define what "miracle" means at all, and it does not ask if miracles occur at all; it asks for a trend. 79% of the answers were counted as "belief in"; I think those were the first two groups only (but I do not see that in the study).

However, the questions about heaven and hell are fine, and the large number of "yes" answers (heaven 74%, hell 59%) makes me sad.

Funny numbers:
At least 15% believe that "good" people go to heaven, but that "bad" people do not go to hell. So where do "bad" people go? To heaven, too?
In the 65+ age group, 74% believe in heaven, but only 71% believe in a life after death. So at least 3% believe that "good" people will live in heaven after death, without living at all.
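
The two lower bounds above are simple inclusion-exclusion; a quick sketch of the arithmetic (assuming Python, using the percentages as quoted in the comment):

```python
# If 74% believe in heaven and 59% in hell, at least 74% - 59% = 15%
# believe in heaven but not hell; likewise for heaven vs. an afterlife.
heaven, hell, afterlife = 0.74, 0.59, 0.71

print(f"heaven but not hell:      at least {heaven - hell:.0%}")       # 15%
print(f"heaven but no afterlife:  at least {heaven - afterlife:.0%}")  # 3%
```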

Replies from: ArisKatsaris, thomblake
comment by ArisKatsaris · 2012-09-12T14:49:44.584Z · LW(p) · GW(p)

Funny numbers: At least 15% believe that "good" people go to heaven, but that "bad" people do not go to hell. So where do "bad" people go? To heaven, too?

That's one option. Another option would be that bad people just cease to exist. Or they get reincarnated until they become non-bad enough for heaven.

comment by thomblake · 2012-09-12T14:53:31.887Z · LW(p) · GW(p)

To heaven, too?

The Mormons would tell you, for the most part, yes. And they generally believe in heaven and not hell.

The question assumes that miracles occured in ancient times.

Indeed, I might have given a "completely agree" there given that miracles occurred none of the time in ancient times, and are still going strong at that rate. But maybe other respondents have less trouble with loaded questions.

In the 65+ age group, 74% believe in heaven, but only 71% believe in a life after death. So at least 3% believe that "good" people will live in heaven after death, without living at all.

Or those 3% believe in heaven but don't believe that dead people get to go there. It might just mean "God's house", or be reserved for those who are raptured.

comment by ChristianKl · 2012-09-11T20:48:09.604Z · LW(p) · GW(p)

No, the claim is that a scientific proof is sufficient for him to feel the need to change his beliefs. It isn't that it's necessary.

Replies from: Sophronius
comment by Sophronius · 2012-09-11T23:58:19.312Z · LW(p) · GW(p)

Technically true, but nice though that is, saying that scientific proof would force you to change your beliefs still isn't a very impressive show of rationality. It would be better if he had said "Whenever science and Buddhism conflict, Buddhism should change".

I know, it is good to hear it from a religious figure, but if it were any other subject the same claim would leave you indifferent. "If it were scientifically proven that aliens don't exist I will have to change my belief in them." Sound impressive? No? Then the Dalai Lama shouldn't get any more praise just because it's about religion.

Replies from: ChristianKl, MaoShan
comment by ChristianKl · 2012-09-12T08:17:47.131Z · LW(p) · GW(p)

When would you say that science and X are in conflict when there isn't scientific proof that X is wrong?

Science is a method. In itself it's about doing experiments. It's not about the ideology of the scientist, which might conflict with X even if there's no proof that X is wrong.

Replies from: Sophronius, kilobug
comment by Sophronius · 2012-09-29T10:27:36.047Z · LW(p) · GW(p)

Science and X are in conflict when on the whole there is more scientific evidence that X is wrong than there is evidence that it is right. Saying "I'll change my belief if science proves me wrong" SOUNDS reasonable, but it is the kind of thing homeopaths say to pretend to be scientific while resting secure in the knowledge that they will never have to actually change their mind, because they can always say that it hasn't been "proven" yet.

comment by kilobug · 2012-09-12T10:35:49.192Z · LW(p) · GW(p)

There is no "scientific proof" that there are no aliens. There is no "scientific proof" that the Earth is 4.5 billion years old. Not in the sense that there is a "proof" of Bayes' theorem. And that's where the whole problem lies. You can't limit yourself to changing your beliefs when they are "proven" wrong. You should change your beliefs when they are at odds with evidence and Occam's Razor.

The Dalai Lama believes in reincarnation (or at least he officially says so; I don't know what his true beliefs are versus his political positioning, but let's assume he's honest). There is no "scientific proof" that reincarnation is impossible, so his professed openness to science costs him nothing. And yet, if you understand science, the evidence that there is no such thing as reincarnation is overwhelming.

Replies from: ChristianKl
comment by ChristianKl · 2012-09-12T13:14:03.457Z · LW(p) · GW(p)

People like Carl Sagan were pretty confident that there are aliens out there in the universe. He still has a reputation as a great rationalist.

The fact that you would personally Occam's-Razor away reincarnation given what you know doesn't mean that it's also rational for other people to Occam's-Razor it away. For someone who remembers a past life and knows other people who do, it can be common sense to have a prior that reincarnation exists.

If you start with a prior that reincarnation exists, I don't see the scientific evidence that suggests you should drop the belief. Assuming that reincarnation is true makes some things that involve working with memories of past lives easier. Occam's Razor is all about making things easier.

On the practical side, there are scientists who believe that reincarnation into Boltzmann brains is plausible, given their current models of how the world works.
If you believe that there are random fluctuations in the vacuum, that time doesn't end somewhere in the future, and that the existence of humans is completely accidental, it's hard to avoid the conclusion that reincarnation will happen.

If you think you "understand" science, then you aren't rational. Any good skeptic should believe that he doesn't understand it. Humans have the habit of being much too confident in the beliefs that they hold. The Black Swan by Nassim Taleb is a great book.

comment by MaoShan · 2012-09-16T01:51:39.174Z · LW(p) · GW(p)

Has anyone taken the time to present to the Dalai Lama a list of things about Buddhism that science proves (or can convincingly demonstrate to be) wrong?

comment by Ezekiel · 2012-09-01T21:16:48.535Z · LW(p) · GW(p)

... which one wish, carefully phrased, could also provide.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2012-09-03T03:58:34.603Z · LW(p) · GW(p)

you can't wish for more wishes

Replies from: Mestroyer, Armok_GoB
comment by Mestroyer · 2012-09-05T21:08:33.424Z · LW(p) · GW(p)

"I wish for the result of the hypothetical nth wish I would make if I was allowed to make n wishes in the limit as n went to infinity each time believing that the next wish would be my only one and all previous wishes would be reversed, or if that limit does not exist, pick n = busy beaver function of Graham's number."

Replies from: CCC, faul_sname
comment by CCC · 2012-09-06T07:19:34.292Z · LW(p) · GW(p)

I can see a genie taking a shortcut here.

"In any story worth the tellin', that knows about the way of the world, the third wish is the one that undoes the harm the first two wishes caused."

— Granny Weatherwax, A Hat Full of Sky.

In short, the genie may well conclude that every m'th wish, for some m (Granny Weatherwax suggests here that m is three), your wish would be to have never met the genie in the first place. At this point, if you're lucky, the genie will use a value of n that's a multiple of m. If you're unlucky, the genie will use a value of n that's km-1 for some integer k...

Alternatively, you'll end up with a genie who can't handle the math and does not understand what you're asking for.

comment by faul_sname · 2012-09-06T00:26:01.853Z · LW(p) · GW(p)

I'm pretty sure this would result in the genie killing you.

Replies from: Mestroyer, Dolores1984
comment by Mestroyer · 2012-09-06T01:16:37.759Z · LW(p) · GW(p)

Because I would wish to kill myself eventually? It's hard to imagine that I would do that, faced with unlimited wishes. If I got bored, I could just wish the boredom away.

Though on reflection this wish needs a safeguard against infinite recursion, and a bliss-universe for any simulated copies of me the genie creates to determine what my wishes would be.

Replies from: faul_sname, Decius
comment by faul_sname · 2012-09-06T02:36:38.758Z · LW(p) · GW(p)

Because I would wish to kill myself eventually?

In a sufficiently bad situation, you may wish for the genie to kill you because you think that's your only wish. It's not likely for any given wish, but would happen eventually (and ends the recursion, so that's one of the few stable wishes).

Replies from: Mestroyer
comment by Mestroyer · 2012-09-06T06:31:14.338Z · LW(p) · GW(p)

If I kill myself, there is no nth wish as n -> infinity, or a busy beaver function of Graham's numberth wish, so the first wish is wishing for something undefined.

Also, the probability of any of the individually improbable events where I kill myself happening is bounded above by the sum of their probabilities, and they could be a convergent infinite series, if the probability of wanting to kill myself went down each time. Even though I stipulated that it's if I believed each wish was the last, I might do something like "except don't grant this wish if it would result in me wanting to kill myself or dying before I could consider the question" in each hypothetical wish. Or grant myself superintelligence as part of one of the hypothetical wishes, and come up with an even better safeguard when I found myself (to my great irrational surprise) getting another wish.

There is not even necessarily a tiny chance of wanting to kill myself. Good epistemology says to think there is, just in case, but some things are actually impossible. Using wishes to make it impossible for me to want to kill myself might come faster than killing myself.

Replies from: faul_sname
comment by faul_sname · 2012-09-06T06:37:11.383Z · LW(p) · GW(p)

If I kill myself, there is no nth wish as n -> infinity, or a busy beaver function of Graham's numberth wish, so the first wish is wishing for something undefined.

I think you're right, though I'm not sure that's exactly a good thing.

and they could be a convergent infinite series, if the probability of wanting to kill myself went down each time.

I see no particular reason to expect that to be the case.

Using wishes to make it impossible for me to want to kill myself might come faster than killing myself.

Excellent point. That might just work (though I'm sure there are still a thousand ways it could go mind-bogglingly wrong).

comment by Decius · 2012-09-26T18:13:02.548Z · LW(p) · GW(p)

If you did eventually wish for death, then you would have judged that death is the best thing you can wish for, after having tried as many alternatives as possible.

Are you going to try to use your prior (I don't want to die) to argue with your future self who has experienced the results (and determines that you would be happier dying right now than getting any wish)?

Replies from: Mestroyer
comment by Mestroyer · 2012-09-26T18:59:32.719Z · LW(p) · GW(p)

I would not want to kill myself if my distant-future self wanted to die or wanted me to die immediately. I think it is much more likely that I would accidentally self-modify in a manner I wouldn't like if I reflected on it now, and that would lead to wishing for death, than that my current self with additional knowledge would choose death over omnipotence.

I don't think the factual question "would I be happier dying right now" would necessarily be the one that decided the policy question of "will I choose to die" for both me and my future self, because we could each care about different things.

And with a warning like "The way things are going, you'll end up wanting to die," I could change my wishes and maybe get something better.

Replies from: Decius
comment by Decius · 2012-09-27T17:43:15.748Z · LW(p) · GW(p)

"Does my current utility function contain a global maximum at the case where I wish for and receive death right now?" is a really scary question to ask a genie.

I would prefer "I wish for the world to be changed in such a manner that my present actual utility function (explicitly distinct from my perception of my utility function) is at the global maximum possible without altering my present actual utility function."

Or in colloquial language: "Give me what I want, not what I think I want."

Replies from: Mestroyer
comment by Mestroyer · 2012-09-27T20:55:22.453Z · LW(p) · GW(p)

That sounds pretty scary too. I don't think I am close enough to being an agent to have a well-defined utility function. If I do (paradoxical as it sounds), it's probably not something I would reflectively like. For example, I think I have more empathy for things I am sexually attracted to. But the idea of a world where everyone else (excluding me and a few people I really like) is a second-class citizen to hot babes horrifies me. But with the wrong kind of extrapolation, I bet that could be said to be what I want.

I can't easily describe any procedure I know I would like for getting a utility function out of me. If I or some simulated copy of me remained to actually be deciding things, I think I could get things I would not only like, but like and like liking. Especially if I can change myself from an insane ape who wishes it was a rationalist, to an actual rationalist through explicitly specified modifications guided by wished-for knowledge.

The best way I can think of to ensure that the extrapolated utility function is something like whatever is actually used in making my decisions is to just use the brain circuits I already have that do that, in the way I like.

I also think a good idea might be to have a crowd of backup copies of me. One of us would try making some self-modifications in a sandboxed universe where their wishes could not get outside, and then the others would vote on whether to keep them.

Replies from: Decius
comment by Decius · 2012-09-28T03:16:01.509Z · LW(p) · GW(p)

Well, you don't prefer a world "where everyone else (excluding me and a few people I really like) is a second-class citizen to hot babes" to the current world. If you can express such judgements preferring one universe over another, and those judgements are transitive, you have a utility function.

And if you want to like liking them, that is also part of your utility function.

One confounding factor that you do bring up- the domain of one's utility function really doesn't need to include things outside the realm of possibility.
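
To make the transitivity point concrete: strict pairwise preferences form a directed graph, and a topological sort collapses an acyclic graph into ordinal utilities, while a cycle makes that impossible. A minimal Python sketch with made-up world labels (this yields ordinal utilities only; a utility function in the full decision-theory sense also needs preferences over lotteries):

    # Minimal sketch (Python 3.9+): pairwise preferences as edges, collapsed
    # into ordinal utilities by a topological sort.  A preference cycle means
    # no consistent assignment exists.
    from graphlib import TopologicalSorter, CycleError

    def utility_from_preferences(prefs):
        """prefs: pairs (better, worse).  Returns {world: utility} consistent
        with the preferences, or None if they contain a cycle."""
        graph = {}
        for better, worse in prefs:
            graph.setdefault(worse, set()).add(better)  # 'better' must come first
            graph.setdefault(better, set())
        try:
            order = list(TopologicalSorter(graph).static_order())
        except CycleError:
            return None  # intransitive: no valid utility function
        # Most-preferred world appears first; give it the largest number.
        return {world: len(order) - i for i, world in enumerate(order)}

    print(utility_from_preferences([("A", "B"), ("B", "C")]))
    # {'A': 3, 'B': 2, 'C': 1} -- any order-preserving numbers would do
    print(utility_from_preferences([("A", "B"), ("B", "C"), ("C", "A")]))
    # None -- the preference cycle discussed a few comments below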

Replies from: Mestroyer
comment by Mestroyer · 2012-09-28T06:18:48.934Z · LW(p) · GW(p)

Well if deep down I wish that hot babes were given higher status in society than ugly females and males I didn't know (I'll roll with this example even though with omnipotence I could probably do things that would make this a nonissue) and I wish that I didn't wish it, and the genie maximizes my inferred utility function once (not maximizing it once and then maximizing the new one that doing so replaces it with), we would end up with a world where hot babes had higher status and I wished they didn't. The only thing I can see saving us from this would be if I also wanted my wants to be fulfilled, whatever they were. But then we would end up with me assigning Flimple utility to 2 + 2 being equal to 4.

I understand the argument "If you preferred the world ruled by hot babes it wouldn't horrify you," but what if that's just because when I imagine things at a global scale they turn featureless, and appearances don't matter to me anymore, but when I see the people in person they still do? What if, being able to see and socialize with every hot babe in the world changed my mind about whether I would want them to have higher social status, even if I was also able to see and socialize with everyone else?

What if the appearance of transitivity is only from drawing analogies from the last scenario I thought about (A) to the next one (B), but if I started a chain of analogies from somewhere else (C) my view on B would be completely different, such that you could take exchange rates of certain things I considered good in each scenario and construct a cycle of preference?

Replies from: Decius
comment by Decius · 2012-09-28T07:03:31.014Z · LW(p) · GW(p)

There's a difference between finding the global maximum of a function and finding the local derivative and moving infinitesimally towards a local maximum.

Even if your actual utility function were for one group to have higher status, that does not imply that the greatest utility comes at the highest imbalance.

And if your preferences aren't transitive, you don't have a valid utility function. If you find yourself in a cycle of preference, you have probably failed to accurately judge two or more things which are hard to compare.

Replies from: Mestroyer
comment by Mestroyer · 2012-09-28T07:40:33.028Z · LW(p) · GW(p)

The problem isn't that I might see a hot babe and care about her more than other people, but that when the numbers are bigger I care about everyone the same amount. It's that it's probably dependent on things like whether I have seen them, and how much I know about them. If I were told by a perfectly reliable source that X is a hot babe by my standards, that would not make me care about her more than other people. But it would if I saw her. So what I want is not just dependent on what I believe, but on what I experience. On some figurative level, I'm a Mealy machine, not a Moore machine.
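
For anyone who doesn't know the automata-theory jargon being borrowed here: a Moore machine's output depends only on its current state, while a Mealy machine's output depends on both the state and the current input. A minimal Python sketch of the analogy, with all names invented for illustration:

    # Moore: output is a function of state alone (belief suffices).
    def moore_output(state):
        return "care more" if state["believes_attractive"] else "care normally"

    # Mealy: output is a function of state AND input (the experience matters).
    def mealy_output(state, experience):
        if state["believes_attractive"] and experience == "sees her":
            return "care more"
        return "care normally"

    state = {"believes_attractive": True}
    print(moore_output(state))                   # care more (belief is enough)
    print(mealy_output(state, "reliably told"))  # care normally
    print(mealy_output(state, "sees her"))       # care more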

If you find yourself in a cycle of preference, you have probably failed to accurately judge two or more things which are hard to compare.

Why do you think this?

Replies from: Decius
comment by Decius · 2012-09-28T16:44:33.640Z · LW(p) · GW(p)

When I prefer A to B, B to C, and C to A (a cycle of preference), then I typically find that there is something moral in the general sense about A, something moral in the direct sense about B, and something which is personally advantageous about C.

For instance, I would rather my donations to charity have a larger total effect, which in the current world means donating to the best aggregator rather than to individual causes. I would rather donate to individual causes than ignore them out of selfish self-interest. I would rather spend my money in my own self-interest than redistribute it in order to achieve maximum benefits. I believe that the reason I think my preferences lie this way is that I am unable to judge the value of diversified charity compared to selfish behavior.

In short, I am extrapolating from the single data point I have. I am assuming that there is one overwhelmingly likely reason for making that type of error, in which case it is likely that mine is for that reason and that yours is as well.

comment by Dolores1984 · 2012-09-06T02:25:04.312Z · LW(p) · GW(p)

A much more real concern is that the genie is going to need to create and destroy at least BB(G) copies of you in order to produce the data you seek. Which is not good.

comment by Armok_GoB · 2012-09-03T18:57:47.027Z · LW(p) · GW(p)

but does "I wish my wishes were divided up into 100 rounds of feedback each, each roughly equivalent to 1/100 of a wish" fall under that?

Replies from: wedrifid
comment by wedrifid · 2012-09-05T01:29:09.121Z · LW(p) · GW(p)

but does "I wish my wishes were divided up into 100 rounds of feedback each, each roughly equivalent to 1/100 of a wish" fall under that?

My impression would be that you could use one wish to divide a second wish up in such a manner. Using it to divide itself up would 'fall under'. (I'm not sure how the genie would resolve the wish into a practical outcome.)

Replies from: Armok_GoB
comment by Armok_GoB · 2012-09-05T16:56:37.745Z · LW(p) · GW(p)

Yeah, what it does is give you 200 subwishes, not 300: the wish that does the dividing is used up, and only the other two get split.

comment by Morendil · 2012-09-29T16:50:20.395Z · LW(p) · GW(p)

New ideas are sometimes found in the most granular details of a problem where few others bother to look. And they are sometimes found when you are doing your most abstract and philosophical thinking, considering why the world is the way that it is and whether there might be an alternative to the dominant paradigm. Rarely can they be found in the temperate latitudes between these two spaces, where we spend 99 percent of our lives.

-- Nate Silver, The Signal and the Noise

comment by simplicio · 2012-09-21T22:50:15.052Z · LW(p) · GW(p)

But since miracles were produced according to the capacity of the common people who were completely ignorant of the principles of natural things, plainly the ancients took for a miracle whatever they were unable to explain in the manner the common people normally explained natural things, namely by seeking to recall something similar which can be imagined without amazement. For the common people suppose they have satisfactorily explained something as soon as it no longer astounds them.

(Baruch Spinoza)

comment by [deleted] · 2012-09-14T10:48:16.808Z · LW(p) · GW(p)

the fact that ordinary people can band together and produce new knowledge within a few months is anything but a trifle

-- Dienekes Pontikos, Citizen Genetics

comment by RobinZ · 2012-09-11T18:30:18.742Z · LW(p) · GW(p)

Intelligence about baseball had become equated in the public mind with the ability to recite arcane baseball stats. What [Bill] James's wider audience had failed to understand was that the statistics were beside the point. The point was understanding; the point was to make life on earth just a bit more intelligible; and that point, somehow, had been lost. "I wonder," James wrote, "if we haven't become so numbed by all these numbers that we are no longer capable of truly assimilating any knowledge which might result from them."

Michael Lewis, Moneyball, ch. 4 ("Field of Ignorance")

comment by Athrelon · 2012-09-11T15:50:48.671Z · LW(p) · GW(p)

The use of Fashions in thought is to distract the attention of men from their real dangers. We direct the fashionable outcry of each generation against those vices of which it is least in danger and fix its approval on the virtue nearest to that vice which we are trying to make endemic...Thus we make it fashionable to expose the dangers of enthusiasm at the very moment when they are all really becoming worldly and lukewarm; a century later, when we are really making them all Byronic and drunk with emotion, the fashionable outcry is directed against the dangers of the mere "understanding". Cruel ages are put on their guard against Sentimentality, feckless and idle ones against Respectability, lecherous ones against Puritanism; and whenever all men are really hastening to be slaves or tyrants we make Liberalism the prime bogey.

CS Lewis, The Screwtape Letters

comment by Stabilizer · 2012-09-04T17:22:25.851Z · LW(p) · GW(p)

Mathematics is a process of staring hard enough with enough perseverance at the fog of muddle and confusion to eventually break through to improved clarity. I'm happy when I can admit, at least to myself, that my thinking is muddled, and I try to overcome the embarrassment that I might reveal ignorance or confusion. Over the years, this has helped me develop clarity in some things, but I remain muddled in many others. I enjoy questions that seem honest, even when they admit or reveal confusion, in preference to questions that appear designed to project sophistication.

-- William Thurston

comment by katydee · 2012-09-22T00:13:11.902Z · LW(p) · GW(p)

A noble man compares and estimates himself by an idea which is higher than himself; and a mean man, by one lower than himself.

Marcus Aurelius

Replies from: simplicio
comment by simplicio · 2012-09-22T00:40:27.015Z · LW(p) · GW(p)

Meh, there are worse things to be than a mean man.

Replies from: MinibearRex
comment by MinibearRex · 2012-09-24T01:12:42.066Z · LW(p) · GW(p)

There are considerably more worse things to be than a noble one.

comment by Alicorn · 2012-09-16T22:31:22.164Z · LW(p) · GW(p)

Never do today what you can put off till tomorrow if tomorrow might improve the odds.

— Robert A. Heinlein

Replies from: Legolan, MixedNuts
comment by Legolan · 2012-09-17T00:38:11.073Z · LW(p) · GW(p)

I think that quote is much too broad with the modifier "might." If you should procrastinate based on a possibility of improved odds, I doubt you would ever do anything. At least a reasonable degree of probability should be required.

Not to mention that the natural inclination of most people toward procrastination means that they should be distrustful of feelings that delaying will be beneficial; it's entirely likely that they are misjudging how likely the improvement really is.

That's not, of course, to say that we should always do everything as soon as possible, but I think that to the extent that we read the plain meaning from this quote, it's significantly over-broad and not particularly helpful.

Replies from: Alicorn
comment by Alicorn · 2012-09-17T00:50:52.985Z · LW(p) · GW(p)

There's also natural inclinations towards haste and impatience. (They probably mostly crop up around different things / in different people than procrastinatory urges, but the quote is not specific about what it is you could put off.)

Replies from: Legolan, RobinZ
comment by Legolan · 2012-09-17T01:20:31.701Z · LW(p) · GW(p)

That's certainly a fair point.

I suppose it's primarily important to know what your own inclinations are (and how they differ in different areas) and then try to adjust accordingly.

comment by RobinZ · 2012-09-17T01:21:20.162Z · LW(p) · GW(p)

I'm reminded of the saying, "A weed is just a plant in the wrong place." Different people require different improvements to their strategies.

comment by MixedNuts · 2012-09-16T22:59:38.133Z · LW(p) · GW(p)

Do it today, and fix/retry tomorrow on failure?

Replies from: Alicorn
comment by Alicorn · 2012-09-16T23:37:04.147Z · LW(p) · GW(p)

Perhaps it's a one-time thing.

comment by lukeprog · 2012-09-15T07:56:22.556Z · LW(p) · GW(p)

You don’t have to know all the answers, you just need to know where to find them.

Albert Einstein (maybe)

Cf. this and this.

comment by roland · 2012-09-04T23:28:37.030Z · LW(p) · GW(p)

It seems clear that intelligence, as such, plays no part in the matter -- that the sole and essential thing is use.

--Oliver Sacks regarding patients suffering from "developmental agnosia" who first learned to use their hands as adults.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2012-09-09T20:23:59.918Z · LW(p) · GW(p)

Can you provide a citation for this? I am very interested in this topic and would like to read the book or article.

Replies from: roland
comment by roland · 2012-09-12T17:11:58.764Z · LW(p) · GW(p)

It's in the book "The Man Who Mistook His Wife For A Hat and other clinical tales" chapter 5 "Hands".

comment by RomanDavis · 2012-09-01T12:18:46.560Z · LW(p) · GW(p)

A scientific theory

Isn't just a hunch or guess

It's more like a question

That's been put through a lot of tests

And when a theory emerges

Consistent with the facts

The proof is with science

The truth is with science

They Might Be Giants

comment by RomanDavis · 2012-09-14T07:54:13.342Z · LW(p) · GW(p)

Users always have an idea that what they want is easy, even if they can't really articulate exactly what they do want. Even if they can give you requirements, chances are those will conflict – often in subtle ways – with requirements of others. A lot of the time, we wouldn't even think of these problems as "requirements" – they're just things that everyone expects to work in "the obvious way". The trouble is that humanity has come up with all kinds of entirely different "obvious ways" of doing things. Mankind's model of the universe is a surprisingly complicated one.

Jon Skeet

comment by [deleted] · 2012-09-09T09:09:20.397Z · LW(p) · GW(p)

Your life has a limit, but knowledge has none. If you use what is limited to pursue what has no limit, you will be in danger. If you understand this and still strive for knowledge, you will be in danger for certain.

--Zhuangzi, being a trendy metacontrarian post-rationalist in the 4th century BC

Replies from: None
comment by [deleted] · 2012-09-09T09:17:02.212Z · LW(p) · GW(p)

Zhuangzi says knowledge has no limit, one could spend his entire life making a good map of a vast and diverse territory and it would not be enough to make a good map.

If one does not know this and makes maps for travel, he may be travelling to safe lands. This is weak evidence one is in danger.

If one knows this and still makes such maps, this is strong evidence one is in danger, for to travel to safe lands he would not make such foolhardy attempts.

comment by RobinZ · 2012-09-05T20:52:08.791Z · LW(p) · GW(p)

[...] Three years later a top executive for those same San Diego Padres would say that the reason the Oakland A's win so many games with so little money is that "Billy [Beane, the general manager] got lucky with those pitchers."

And he did. But if an explanation is where the mind comes to rest, the mind that stopped at "lucky" when it sought to explain the Oakland A's recent pitching success bordered on narcoleptic.

Michael Lewis, Moneyball, Chapter Ten, "Anatomy of an Undervalued Pitcher".

comment by chaosmosis · 2012-09-30T01:18:43.242Z · LW(p) · GW(p)

Cultural critics like to speculate on the cognitive changes induced by new forms of media, but they rarely invoke the insights of brain science and other empirical research in backing up those claims. All too often, this has the effect of reducing their arguments to mere superstition.

Steven Johnson, Everything Bad is Good For You

(His book argues that pop culture is increasing intelligence, not dumbing it down. He argues that plot complexity has increased and that keeping track of large storylines is now much more commonplace, and that these skills manifest themselves in increased social intelligence (and this in turn might manifest itself in overall intelligence, I'm not sure). Here, he's specifically discussing video games and the internet.)

I highly recommend the book; it's interesting in terms of cognitive science as well as cultural and social analysis. I thought it sounded only mildly interesting when I first picked it up, but now I think it's extremely interesting. At least give it a try, because it's difficult to describe what makes it so good.

Replies from: gwern, NancyLebovitz
comment by gwern · 2012-10-01T01:06:24.161Z · LW(p) · GW(p)

Really? I thought it was very short and not in depth at all; yeah, his handful of graphs of episodes was interesting from the data-visualization viewpoint, but most of his arguments, such as they were, were qualitative and hand-wavey. (What, there are no simplistic shows these days?)

Replies from: chaosmosis
comment by chaosmosis · 2012-10-01T03:02:59.783Z · LW(p) · GW(p)

It was rather broad and not very in depth, but it was largely conceptually oriented. He conceded that there were simplistic shows, but argued that the simplistic shows of today tend to be more complicated than the simplistic shows of yesterday. If you disagree...

Replies from: gwern
comment by gwern · 2012-10-01T03:09:12.231Z · LW(p) · GW(p)

I don't know how I'd refute him - there are so many TV shows, both now and then! One can cherrypick pretty much anything one likes, although I don't personally watch TV anymore and couldn't do it.

(I'm reminded how people online sometimes say 'anime really sucked in time period X', because they're only familiar with anime released in the '00s and '10s, while if you look at an actual full 30+ strong roster of one of their example 'sucking' years like eg. 1991, you'll often see a whole litany of great or influential series like Nadia, City Hunter, Ranma 1/2, Dragon Ball Z, and Gundam 0083: Stardust Memory. Well, yeah, if you forget entirely about them, I suppose 1991 seems like a really sucky year compared to 2010 or whatever.)

Replies from: Nominull, chaosmosis, Eugine_Nier
comment by Nominull · 2012-10-02T00:07:53.762Z · LW(p) · GW(p)

Those anime you cite all sucked though, they were considered "great" or "influential" at the time because people didn't know any better. Anime technology has advanced vastly in the past twenty years.

Replies from: gwern
comment by gwern · 2012-10-02T00:51:37.692Z · LW(p) · GW(p)

Anime technology has advanced, yes, but I don't know how you go from that to 'all my examples sucked'.

Replies from: Eliezer_Yudkowsky, Nominull
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-02T01:03:39.715Z · LW(p) · GW(p)

Out of curiosity, what in the 90s compares to Hikaru no Go or Madoka Magica?

Replies from: None, gwern, Risto_Saarelma, ArisKatsaris
comment by [deleted] · 2012-10-02T03:09:50.059Z · LW(p) · GW(p)

Serial Experiments Lain.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-10-02T09:04:21.160Z · LW(p) · GW(p)

Serial Experiments Lain severely disappointed me. It's nicely creepy and atmospheric but....
(rot13) vg'f ernyyl n fgnaqneq "punatryvat" fgbel -- ohg vafgrnq bs snvevrf naq zntvpny jbeyqf naq punatryvat puvyqera, jr unir cebtenzf naq gur Vagrearg naq cebtenzf orpbzvat syrfu.

Gur fpvrapr-svpgvba ryrzragf srry whfg pbfzrgvp punatrf jura gur pber bs gur fgbel vf cher snvel-gnyr... N tevz snvel-gnyr gb or fher, ohg n snvel-gnyr abarguryrff.

comment by gwern · 2012-10-02T01:07:27.129Z · LW(p) · GW(p)

I don't consider Hikaru no Go to be anything more than a gimmick anime like Moyashimon, so I have no idea for it.

The most obvious counterpart to Madoka would be Evangelion (yeah I know Sailor Moon was airing in the '90s and was more popular and influential than Madoka will ever be, but I think Eva is a better comparison).

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-10-02T11:42:01.039Z · LW(p) · GW(p)

Exactly. It doesn't look like I'm going to finish Hikaru no Go by the end of the year, but I finished Serial Experiments Lain (a 90s anime) in less than 3 days.

comment by Risto_Saarelma · 2012-10-02T11:10:07.924Z · LW(p) · GW(p)

Seconding Serial Experiments Lain and Evangelion. Also Cowboy Bebop was in the 90s.

Irresponsible Captain Tylor, Berserk, Excel Saga and Trigun are uneven, but have their moments.

I also have a soft spot for the trashy ultraviolent OVA stuff from the early 90s, like Doomed Megalopolis and AD Police Files, but I'm not sure if it's good in any objective sense.

comment by ArisKatsaris · 2012-10-02T09:19:41.802Z · LW(p) · GW(p)

Heh... In my myanimelist profile I've only listed three anime series as favourites, and Hikaru no Go and Madoka Magica are two of them.

The third one is "Revolutionary Girl Utena", from the 1990s. I think it's the sort of series that one either loves or hates -- but I loved it.

comment by Nominull · 2012-10-02T02:34:35.763Z · LW(p) · GW(p)

That was explanation or elaboration, not evidence. I was going to just leave "they sucked" as a bare assertion rather than get into an anime slapfight on LessWrong. If you link me to your anime blog I will be happy to take it up in the comments section there, though.

Replies from: gwern
comment by gwern · 2012-10-02T02:44:14.931Z · LW(p) · GW(p)

Alas, I have no anime blog!

comment by chaosmosis · 2012-10-01T05:23:30.481Z · LW(p) · GW(p)

You could analyze the way that people in the TV business think and talk about complexity, while assuming that they know what they're doing. He seemed to do a bit of this.

comment by Eugine_Nier · 2012-10-02T00:05:12.831Z · LW(p) · GW(p)

I don't know how I'd refute him - there are so many TV shows, both now and then!

I'd start by looking at the shows with the highest ratings.

comment by NancyLebovitz · 2012-10-01T04:52:04.912Z · LW(p) · GW(p)

Does he look at the possibility that people are getting more intelligent for some other reason, and popular art is the result of creators serving a more intelligent audience rather than more complex art making people smarter?

Replies from: chaosmosis
comment by chaosmosis · 2012-10-01T05:21:45.921Z · LW(p) · GW(p)

No. But your question seems odd. I didn't interpret the book as an attempt to start with the increase in intelligence and then to assume/explain why pop culture was the cause. Rather, I interpreted the book as an attempt to analyze pop culture, which then found that pop culture did things that seemed like they would have beneficial effects. His analysis of the things that pop culture does to our minds is what I found interesting, not so much the parts which talked about intelligence more generally.

Additionally, I'm not really sure what someone would do to identify pop culture as the cause of this increase as opposed to something else. I'm not sure what other factors could be responsible.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-10-01T11:30:15.502Z · LW(p) · GW(p)

I was reacting to the title of the book.

Replies from: chaosmosis
comment by chaosmosis · 2012-10-01T17:17:42.142Z · LW(p) · GW(p)

I don't believe the title implies that his primary concern is explaining an intelligence increase.

There are two ways of looking at the interaction between pop culture and intelligence. You can start by analyzing intelligence and noticing that it seems to increase, and then trying to figure out why, and then figuring out that pop culture caused it. Or, you can start by analyzing pop culture, and then noticing that it seems to do things that would have cognitive benefits, and then attaching this to the increase in intelligence as a factor that helps explain it. The book does the latter, not the former.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-10-01T17:57:19.537Z · LW(p) · GW(p)

I think any link between tv and intelligence is unproven, but at least the book does something to debunk the common idea that television is making people stupider.

comment by Jayson_Virissimo · 2012-09-17T05:25:47.518Z · LW(p) · GW(p)

For nothing ought to be posited without a reason given, unless it is self-evident (literally, known through itself) or known by experience or proved by the authority of Sacred Scripture.

-William of Ockham

Replies from: wedrifid
comment by wedrifid · 2012-09-17T06:11:51.059Z · LW(p) · GW(p)

This is an interesting quote for historical reasons but it is not a rationality quote.

Replies from: Eliezer_Yudkowsky, Jayson_Virissimo
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-17T10:18:42.921Z · LW(p) · GW(p)

It makes a very important reply to anyone who claims that e.g. you should stick with Occam's original Razor and not try to rephrase it in terms of Solomonoff Induction because SI is more complicated.

Replies from: JulianMorrison, wedrifid
comment by JulianMorrison · 2012-09-17T10:57:05.853Z · LW(p) · GW(p)

Humans and their silly ideas of what's complicated or not.

What I find ironic is that SI can be converted into a similarly terse commandment. "Shorter computable theories have more weight when calculating the probability of the next observation, using all computable theories which perfectly describe previous observations" -- Wikipedia.
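
That one-sentence version can even be run as a toy program. The sketch below substitutes a tiny hand-picked hypothesis class for "all computable theories" (the part that makes real Solomonoff induction uncomputable); the lengths and rules are made up, but the weighting scheme is the one the quoted sentence describes:

    # Weight each theory by 2**-length, drop theories inconsistent with the
    # observations so far, and predict the next bit from surviving weights.
    observations = [0, 1, 0, 1]

    # (description length in bits, predictor for position i) -- all made up.
    hypotheses = [
        (3, lambda i: i % 2),                   # "alternate 0, 1, 0, 1, ..."
        (5, lambda i: 0),                       # "all zeros" (inconsistent here)
        (9, lambda i: (0, 1, 0, 1, 1)[i % 5]),  # longer period-5 rule
    ]

    def prob_next_is_one(obs, hyps):
        consistent = [(length, f) for (length, f) in hyps
                      if all(f(i) == bit for i, bit in enumerate(obs))]
        total = sum(2.0 ** -length for length, _ in consistent)
        ones = sum(2.0 ** -length for length, f in consistent
                   if f(len(obs)) == 1)
        return ones / total

    print(prob_next_is_one(observations, hypotheses))
    # ~0.015: the short "alternate" theory dominates its longer rival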

Replies from: khafra
comment by khafra · 2012-09-17T16:01:23.174Z · LW(p) · GW(p)

Hrm.... I'm not sure if I can sufficiently state the speed prior in natural language faster than that, without some really good auctioneer training.

comment by wedrifid · 2012-09-17T10:31:03.340Z · LW(p) · GW(p)

It makes a very important reply to anyone who claims that e.g. you should stick with Occam's original Razor and not try to rephrase it in terms of Solomonoff Induction because SI is more complicated.

I take it you mean in the sense "Really? Look how terrible the original is! You've got to be kidding."

comment by Jayson_Virissimo · 2012-09-17T07:39:00.984Z · LW(p) · GW(p)

I read this as a reminder not to add anything to that map that won't help you navigate the territory. How is this not a rationality quote? Are you rejecting it merely because of the third disjunct?

Replies from: wedrifid
comment by wedrifid · 2012-09-17T08:18:38.230Z · LW(p) · GW(p)

I read this as a reminder not to add anything to that map that won't help you navigate the territory.

The quote doesn't say that, this is (only) a fact about your reading.

How is this not a rationality quote? Are you rejecting it merely because of the third disjunct?

I'm not especially impressed with the first two either, nor the claim to be exhaustive (thus excluding other valid evidence). It basically has very little going for it. It is bad epistemic advice. It is one of many quotes which require abandoning most of the content and imagining other content that would actually be valid. I reject it as I reject all such examples.

comment by RobertLumley · 2012-09-07T19:53:37.372Z · LW(p) · GW(p)

"How many lives do you suppose you've saved in your medical career? … Hundreds? Thousands? Do you suppose those people give a damn that you lied to get into Starfleet Medical? I doubt it. We deal with threats to the Federation that jeopardize its very survival. If you knew how many lives we’ve saved, I think you’d agree that the ends do justify the means.”

Luther Sloan to Julian Bashir in Star Trek: Deep Space Nine, "Inquisition", written by Bradley Thompson and David Weddle, created by Rick Berman and Michael Piller

Replies from: Vaniver
comment by Vaniver · 2012-09-07T20:05:28.038Z · LW(p) · GW(p)

"How many lives do you suppose you've saved in your medical career? … Hundreds? Thousands? Do you suppose those people give a damn that you lied to get into Starfleet Medical? I doubt it.

Presuming that Starfleet Medical has limited enrollment, and that if he hadn't lied, a superior candidate would have enrolled, then that superior candidate would have saved those hundreds or thousands, and then a few more.

Replies from: Alicorn, mrglwrf, GLaDOS
comment by Alicorn · 2012-09-07T20:07:02.637Z · LW(p) · GW(p)

He was lying about having had gene therapy. He was a superior candidate by virtue of same but it would have kept him out because Starfleet is anti-gene-therapy-ist. (At least I assume so - I remember the character had the therapy and had to hide it, but not whether it came out in that episode or something else did.)

Replies from: Vaniver, RobertLumley
comment by Vaniver · 2012-09-07T20:16:05.329Z · LW(p) · GW(p)

He was lying about having had gene therapy.

That is much more justifiable than the standard case of lying on applications.

He was a superior candidate by virtue of same but it would have kept him out because Starfleet is anti-gene-therapy-ist.

I can imagine Star Robin Hanson writing an angry blog post about what this implies about Starfleet's priorities.

Replies from: GLaDOS, Alicorn, taelor
comment by GLaDOS · 2012-09-07T20:22:17.001Z · LW(p) · GW(p)

I can imagine Star Robin Hanson writing an angry blog post about what this implies about Starfleet's priorities.

Have you seen any Star Trek? Star Robin Hanson would have a lot of angry posts to write.

Replies from: Vaniver
comment by Vaniver · 2012-09-07T20:47:04.051Z · LW(p) · GW(p)

Have you seen any Star Trek?

Some, as a child.

comment by Alicorn · 2012-09-07T20:24:08.919Z · LW(p) · GW(p)

There was a (flimsy) historical reason - there had been wars about "augments" in the past; the anti-augments won (somehow), determined the war was about "people setting themselves above their fellow humans", and discouraged more people augmenting themselves/their children in this way by (ineffectively) making it a net negative.

Replies from: Nominull, CronoDAS
comment by Nominull · 2012-09-08T16:24:41.853Z · LW(p) · GW(p)

Heck, anti-fascism beat fascism, it's not always the stronger-seeming ideology that comes out on top.

Replies from: DanArmak
comment by DanArmak · 2012-09-08T20:45:13.124Z · LW(p) · GW(p)

Anti-fascism is perhaps more usefully described as pro-something else. In the event, communism.

comment by CronoDAS · 2012-09-10T00:34:06.367Z · LW(p) · GW(p)

I read somewhere that, in Star Trek land, genetic engineering of intelligent beings is highly correlated with evil, either because it's being done for an evil purpose to begin with or because the engineered beings themselves end up as arrogant, narcissistic jerks with a strong tendency toward becoming evil. The latter implies that there's a technical problem with the genetic engineering of humans that hasn't been solved yet, which Bashir was lucky to have avoided.

Replies from: CCC
comment by CCC · 2012-09-10T06:35:38.818Z · LW(p) · GW(p)

It might not be a technical problem. It might merely be that most augments are raised by people who keep telling them that they're genetically superior to everyone else and therefore create in them a sense of arrogance and entitlement. Which is only made worse by the fact that they actually are stronger, healthier and smarter than everyone else (but not by as big a margin as they tend to imagine).

comment by RobertLumley · 2012-09-07T22:30:34.408Z · LW(p) · GW(p)

This is correct. He lied about not being genetically engineered.

comment by mrglwrf · 2012-09-10T21:01:37.547Z · LW(p) · GW(p)

I see no good reason to presume a correlation between a med school's admissions criteria and total lives saved over a doctor's career as tight as this reasoning requires. Or to presume that it is near certain that if he hadn't lied, another liar wouldn't have been accepted in his place.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-09-11T04:48:08.277Z · LW(p) · GW(p)

This reasoning merely requires that the correlation exist and be positive.

comment by GLaDOS · 2012-09-07T20:09:05.401Z · LW(p) · GW(p)

Right, but to nitpick just to show off my nerdiness: if I recall right, Julian Bashir wouldn't have been admitted to Starfleet Medical because he was a genetically engineered human, and those are barred from Starfleet and some other professions because of cultural baggage from the Eugenics Wars.

That was the thing he lied about, so it doesn't seem likely that someone taking his place would have saved more lives; in fact, that person may have saved fewer.

comment by katydee · 2012-09-12T00:11:49.363Z · LW(p) · GW(p)

A scientist, like a warrior, must cherish no view. A 'view' is the outcome of intellectual processes, whereas creativity, like swordsmanship, requires not neutrality, or indifference, but to be of no mind whatsoever.

Buckaroo Banzai

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-12T00:52:23.398Z · LW(p) · GW(p)

...

...

dur....

....

Replies from: army1987, katydee
comment by A1987dM (army1987) · 2012-09-12T13:56:09.780Z · LW(p) · GW(p)

What?

comment by katydee · 2012-09-12T01:33:57.062Z · LW(p) · GW(p)

I'll take the new -5 karma hit to point out that this comment shouldn't be downvoted. It is an interesting critique of the post it replies to.

Replies from: gwern, TimS, army1987
comment by gwern · 2012-09-12T01:38:11.740Z · LW(p) · GW(p)

How is it a critique? The quote is an adequate expression of Eliezer's own third virtue of rationality, and I daresay if anyone had responded as uncharitably as that to his "Twelve Virtues", he would have considered 'dur' to be an adequate summary of that person's intellect.

Replies from: Vaniver, thomblake
comment by Vaniver · 2012-09-12T01:44:20.355Z · LW(p) · GW(p)

The critique is of the phrase "but to be of no mind whatsoever."

The uncharitable interpretation is that something without a mind is a rock; the charitable interpretation is to take "mind" as "opinion."

I ended up downvoting the criticism because it doesn't apply to the substance of the quote, but to its word choice, and is itself not as clear as it could be.

Replies from: Eliezer_Yudkowsky, Fyrius, katydee, FiftyTwo
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-12T13:43:50.693Z · LW(p) · GW(p)

The criticism is that a martial artist or scientist is actually trying to attain a highly specific brain-state in which neurons have particular patterns in them; a feeling of emptiness, even if part of this brain state, is itself a neural pattern and certainly does not correspond to the absence of a mind.

The zeroth virtue or void - insofar as we believe in it - corresponds to a particular mode of thinking; it's certainly not an absence of mind. Emptiness, no-mind, the Void of Musashi, all these things are modes of thinking, not the absence of any sort of reified spiritual substance. See also the fallacy of the ideal ghost of perfect emptiness in philosophy.

Replies from: Vaniver, None, robert-miles
comment by Vaniver · 2012-09-12T14:48:11.068Z · LW(p) · GW(p)

And this critique I upvoted, because it is both clear and a valuable point. I still think you're using an uncharitable definition of the word "mind," but as assuming charity could lead to illusions of transparency it's valuable to have high standards for quotes.

comment by [deleted] · 2012-09-12T13:52:42.543Z · LW(p) · GW(p)

See also the fallacy of the ideal ghost of perfect emptiness in philosophy.

You've mentioned this before, and I don't really know where it comes from. Do you have any specific philosopher or text in mind, or is this just a habit you perceive in philosophical argument? If so, in whose argument? Professional or historical or amateur philosophers?

Aside from some early-modern empiricists, and maybe Stoicism, I can't think of anything.

comment by Fyrius · 2012-09-12T13:27:28.056Z · LW(p) · GW(p)

I'm amazed how you guys manage to get all that from "dur". My communication skills must be worse than I thought.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-09-13T03:23:09.390Z · LW(p) · GW(p)

Context helps.

comment by katydee · 2012-09-12T05:04:15.152Z · LW(p) · GW(p)

I agree that the response was not particularly charitable, but it's nevertheless generally a type of post that I would like to see more of on LessWrong-- I think that style of reply can be desirable and funny. See also this comment.

comment by FiftyTwo · 2012-09-12T15:22:21.610Z · LW(p) · GW(p)

the charitable interpretation is to take "mind" as "opinion."

My interpretation was that it was advising System 1 rather than System 2 reasoning, with "no mind" thus being no explicit thoughts.

comment by thomblake · 2012-09-12T13:36:25.806Z · LW(p) · GW(p)

How is it uncharitable? Eliezer is emptying his mind as recommended by Doctor Banzai. Not sure how it's a "critique" though.

comment by TimS · 2012-09-12T14:33:33.503Z · LW(p) · GW(p)

interesting?

comment by A1987dM (army1987) · 2012-09-12T13:56:46.133Z · LW(p) · GW(p)

Probably it would be even more interesting if I could understand it.

Replies from: katydee
comment by katydee · 2012-09-12T14:20:50.521Z · LW(p) · GW(p)

Eliezer posted a comment that's essentially devoid of content. This satirizes the original quote's claim that one should be of "no mind whatsoever" by illustrating that mindlessness isn't particularly useful-- a truly mindless individual (like that portrayed in the comment) would have no useful contributions to make.

Replies from: army1987, Will_Newsome
comment by A1987dM (army1987) · 2012-09-12T16:10:14.326Z · LW(p) · GW(p)

That went completely over my head. (I guessed he was alluding to some concept whose name began with “dur”, but I couldn't think of any relevant one.)

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-09-18T13:39:10.442Z · LW(p) · GW(p)

I interpreted 'mind' as 'opinion', so I didn't get it either.

comment by Will_Newsome · 2012-10-02T05:53:33.778Z · LW(p) · GW(p)

"No mind" is ordinary mind.

comment by Peter Wildeford (peter_hurford) · 2012-09-01T18:19:13.534Z · LW(p) · GW(p)

"Is this a victory or a defeat? Is this justice or injustice? Is it gallantry or a rout? Is it valor to kill innocent children and women? Do I do it to widen the empire and for prosperity or to destroy the other's kingdom and splendor? One has lost her husband, someone else a father, someone a child, someone an unborn infant... What's this debris of the corpses?" -- Ashoka

comment by pragmatist · 2012-09-14T14:34:36.452Z · LW(p) · GW(p)

If I say of myself that it is only from my own case that I know what the word "pain" means -- must I not say the same of other people too? And how can I generalize the one case so irresponsibly?

Now someone tells me that he knows what pain is only from his own case! Suppose everyone had a box with something in it: we call it a "beetle". No one can look into anyone else's box, and everyone says he knows what a beetle is only by looking at his beetle. -- Here it would be quite possible for everyone to have something different in his box. One might even imagine such a thing constantly changing. -- But suppose the word "beetle" had a use in these people's language? -- If so it would not be used as the name of a thing. The thing in the box has no place in the language-game at all; not even as a something: for the box might even be empty. -- No, one can 'divide through' by the thing in the box; it cancels out, whatever it is.

That is to say: if we construe the grammar of the expression of sensation on the model of 'object and designation' the object drops out of consideration as irrelevant.

-- Ludwig Wittgenstein, Philosophical Investigations

comment by shminux · 2012-09-13T22:50:21.906Z · LW(p) · GW(p)

I'll risk a bit of US politics, just because I like the quote:

While some observers might find his lack of philosophical consistency a problem, I see it as a plus. He's a pragmatist. If he were running for the job of Satan he would say he's in favor of evil, at least until he got the job and installed central air conditioning in Hell. To put it more bluntly, it's not his fault that so many citizens are idiots and he has to lie to them just to become a useful public servant.

Scott Adams on one of the two presidential candidates being skilled at the art of winning (with some liberal use of dark arts).

comment by alex_zag_al · 2012-09-05T03:45:39.445Z · LW(p) · GW(p)

At the Princeton graduate school, the physics department and the math department shared a common lounge, and every day at four o'clock we would have tea. It was a way of relaxing in the afternoon, in addition to imitating an English college. People would sit around playing Go, or discussing theorems. In those days topology was the big thing.

I still remember a guy sitting on the couch, thinking very hard, and another guy standing in front of him saying, "And therefore such-and-such is true."

"Why is that?" the guy on the couch asks.

"It's trivial! It's trivial!" the standing guy says, and he rapidly reels off a series of logical steps: "First you assume thus-and-so, then we have Kerchoff's this-and-that, then there's Waffenstoffer's Theorem, and we substitute this and construct that. Now you put the vector which goes around here and then thus-and-so . . ." The guy on the couch is struggling to understand all this stuff, which goes on at high speed for about fifteen minutes!

Finally the standing guy comes out the other end, and the guy on the couch says, "Yeah, yeah. It's trivial."

We physicists were laughing, trying to figure them out. We decided that "trivial" means "proved." So we joked with the mathematicians: "We have a new theorem -- that mathematicians can only prove trivial theorems, because every theorem that's proved is trivial."

The mathematicians didn't like that theorem, and I teased them about it. I said there are never any surprises -- that the mathematicians only prove things that are obvious.

From "Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character

Replies from: CCC, VKS
comment by CCC · 2012-09-05T06:58:43.345Z · LW(p) · GW(p)

I've heard it said that "Trivial" is a mathematics professor's proof by intimidation.

comment by VKS · 2012-09-05T07:12:06.255Z · LW(p) · GW(p)

The view, I think, is that anything you can prove immediately off the top of your head is trivial. No matter how much you have to know. So, sometimes you get conditional trivialities, like "this is trivial if you know this and that, but I don't know how to get this and that from somesuch...".

Replies from: GDC3
comment by GDC3 · 2012-10-02T06:08:27.550Z · LW(p) · GW(p)

Relatedly, a mathematician friend said that he uses "obvious" to mean "there exists a very short proof of it." He has been sometimes known to say things like "I think this is obvious but I'm not sure why yet."

comment by Vaniver · 2012-09-28T02:19:00.779Z · LW(p) · GW(p)

I believe in getting into hot water; it keeps you clean.

-- G.K. Chesterton

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-28T07:54:42.517Z · LW(p) · GW(p)

But so does lukewarm water (which is also cheaper, and doesn't steam up the mirror in the bathroom).

comment by shminux · 2012-09-17T22:32:40.455Z · LW(p) · GW(p)

I cannot tell if this is rationality or anti-rationality:

Q: What is Microsoft's plan if Windows 8 doesn't take off?

A: You know, Windows 8 is going to do great.

Q: No doubt at all?

A: I'm not paid to have doubts. (Laughs.) I don't have any. It's a fantastic product. ...

Steve Ballmer

Replies from: Desrtopa
comment by Desrtopa · 2012-09-17T22:36:04.024Z · LW(p) · GW(p)

I'd say telling an interviewer you have sufficient confidence in your product not to need a backup plan is rational; actually not having one isn't.

Replies from: gwern, shminux, chaosmosis
comment by gwern · 2012-09-17T22:52:49.671Z · LW(p) · GW(p)

I'm reminded of a quote in Lords of Finance (which I finished yesterday) which went something like 'Only a fool asks a central banker about the currency and expects an honest answer'. Since confidence is what keeps banks and currencies going...

comment by shminux · 2012-09-17T23:02:53.363Z · LW(p) · GW(p)

See, if instead of "I'm not paid to have doubts" he had said "I am paid to address all doubts before a product is released", that would have made more sense.

comment by chaosmosis · 2012-09-17T22:48:43.649Z · LW(p) · GW(p)

I'm not paid to have doubts. (Laughs.) I don't have any.

This comes across as inauthentic and slightly scared to me. At best, he's not great at PR. At worst, he doesn't have any back up plan. So that would support calling it irrationality.

telling an interviewer you have sufficient confidence in your product to not need a backup plan is rational

Well. I was thinking about it, and it seems like not having a backup plan is the kind of thing that would send bad signals to investors and whatnot. It's not clear to me that he's better off doing this than explaining how Microsoft is a fantastically professional company that's innovating and reaching into new frontiers, etc.

actually not having one isn't

I don't know specifically what alternate products would potentially be good ideas for them, though. I agree that backup plans are good in general, but I don't know if they're good for Microsoft specifically, based on the resources they have. Windows is kind of their thing; I don't know if they could execute on anything else.

comment by chaosmosis · 2012-09-13T16:07:35.759Z · LW(p) · GW(p)

All are lunatics, but he who can analyze his delusions is called a philosopher.

Ambrose Bierce

comment by Jayson_Virissimo · 2012-09-11T03:46:46.131Z · LW(p) · GW(p)

Fictional shows are merely gripping lies.

-Bryan Caplan, Selfish Reasons to Have More Kids

Replies from: Desrtopa
comment by Desrtopa · 2012-09-11T03:58:41.759Z · LW(p) · GW(p)

I'd pick gripping lies over most nonfictional shows, which are mainly irrelevant or misleading truths.

comment by CG_Morton · 2012-09-04T17:27:55.768Z · LW(p) · GW(p)

Wish 1: "I wish for a paper containing the exact wording of a wish that, when spoken to you, would meet all my expectations for a wish granting X." For any value of X.

Wish 2: Profit.

Three wishes is overkill.

Replies from: Alicorn, Eliezer_Yudkowsky, kilobug, DanielLC, TheOtherDave
comment by Alicorn · 2012-09-04T18:29:23.841Z · LW(p) · GW(p)

Genie provides a 3,000 foot long scroll, which if spoken perfectly will certainly do as you ask, but if spoken imperfectly in any of a million likely ways affords the genie room to screw you over.

Or the scroll is written in Martian.

Replies from: CG_Morton, Strange7, Hawisher
comment by CG_Morton · 2012-09-07T13:53:35.350Z · LW(p) · GW(p)

I just take this as evidence that I -can't- beat the genie, and don't attempt any more wishes.

Whereas, if it's something simple then I have pretty strong evidence that the genie is -trying- to meet my wishes, that it's a benevolent genie.

comment by Strange7 · 2012-09-04T19:21:36.374Z · LW(p) · GW(p)

Wish 2: I wish for a text-to-speech device capable of reading from this scroll with perfect accuracy.

Wish 3: delegated to the device from #2.

Replies from: shminux, Alicorn, CCC
comment by shminux · 2012-09-04T21:00:54.696Z · LW(p) · GW(p)

Are we going to keep patching up every hole she points out? Or admit that a UFAI genie can be smarter than any human (even if that human is our esteemed Alicorn, or (gasp!) Eliezer)?

comment by Alicorn · 2012-09-04T21:10:25.323Z · LW(p) · GW(p)

Too bad Martian words sound exactly like lethal sonic weapons and your original X that your wish is about doesn't, strictly speaking, require resurrecting you to enjoy it.

Or, the genie doesn't have to respond to wishes that don't come out of Master's mouth.

comment by CCC · 2012-09-06T07:58:15.396Z · LW(p) · GW(p)

Text-to-speech device provided. It reads from the scroll with perfect accuracy and low speed. It will take a few hundred years to complete this task.

You will need to change the batteries once an hour; if you forget, it starts reading from the start of the scroll again. (And where do you get a large supply of size Q batteries, in any case?)

Replies from: Strange7
comment by Strange7 · 2012-09-14T02:21:04.125Z · LW(p) · GW(p)

I know some electrical engineers. It's not all that hard to rig up an uninterruptible power supply that runs off line voltage. The delay is inconvenient, but for the right wish it's acceptable.

comment by Hawisher · 2012-09-16T05:12:50.037Z · LW(p) · GW(p)

"I wish for everything written on this scroll." Or some variation thereof that more exactly expresses that general idea.

Replies from: Alicorn
comment by Alicorn · 2012-09-16T05:46:10.122Z · LW(p) · GW(p)

All of the nouns named on the scroll appear. Some of them are things that the wording of the scroll expressly insists that the wish must avoid, due to their being lethal or otherwise undesirable.

Replies from: Hawisher
comment by Hawisher · 2012-09-16T17:11:41.030Z · LW(p) · GW(p)

"I wish for everything that would happen if I read this scroll perfectly."

Replies from: Alicorn, Kindly
comment by Alicorn · 2012-09-16T17:53:14.601Z · LW(p) · GW(p)

Among other things, you would suffocate due to that four-minute no-breathing-allowed Martian word in paragraph nine.

Replies from: Hawisher, wedrifid
comment by Hawisher · 2012-09-16T17:59:45.203Z · LW(p) · GW(p)

Ooooh. Well played.

comment by wedrifid · 2012-09-16T17:55:38.921Z · LW(p) · GW(p)

Among other things, you would suffocate due to that four-minute no-breathing-allowed Martian word in paragraph nine.

4 minutes is survivable if trained.

Replies from: Alicorn, army1987
comment by Alicorn · 2012-09-16T17:56:34.718Z · LW(p) · GW(p)

Fine, thirty.

comment by A1987dM (army1987) · 2012-09-16T18:38:23.211Z · LW(p) · GW(p)

Not while speaking.

comment by Kindly · 2012-09-16T17:27:59.934Z · LW(p) · GW(p)

In the likely event that it's impossible for you to read the scroll perfectly, it's true for all X that "X would happen if you read this scroll perfectly". Which means that anything the genie feels like doing satisfies that wish. Or possibly the genie has to make everything happen that could possibly happen. Neither of those seems like a good outcome.
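
On a material-implication reading of the conditional, this is the standard vacuous-truth point: a false antecedent makes "if P then X" true for every X. (Counterfactual conditionals are subtler than material implication, but the vacuity problem Kindly describes is analogous.) A minimal check in Python:

    # Material implication: "if P then Q" is (not P) or Q, so a false P
    # makes the conditional true for every Q.
    def implies(p, q):
        return (not p) or q

    can_read_scroll_perfectly = False  # the impossible antecedent

    for claim in [True, False]:  # stand-ins for "X would happen"
        print(claim, implies(can_read_scroll_perfectly, claim))
    # Both print True: the wish constrains nothing, as Kindly says.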

Replies from: Hawisher
comment by Hawisher · 2012-09-16T17:57:33.228Z · LW(p) · GW(p)

Hm... how about "I wish to have all the skills and abilities required to formulate an unambiguous wish in standard English that would allow me to fulfill any of my non-contradictory desires that I choose, and to be able to choose which of any desires that are contradictory said wish would fulfill, and to be able to express that unambiguous wish in an unambiguous way in less than thirty seconds and with no consequences to incorrectly expressing that wish apart from the necessity of trying again to express it."

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-04T21:17:05.923Z · LW(p) · GW(p)

The scroll modifies your expectations. The genie twist-interprets X, and then assesses your expectations of the result of the genie's interpretation of X. ("Why, that's just what you'd expect destroying the world to do! What are you complaining about?") The complete list of expectations regarding X is at least slightly self-contradictory, so of course the genie has no option except to modify your expectations directly...

Replies from: Armok_GoB, CG_Morton, army1987
comment by Armok_GoB · 2012-09-04T22:50:42.073Z · LW(p) · GW(p)

Oooh, is this now the "Eliezer points out how your wish would go wrong" thread? I wanna play too! :p

"I wish for that which I'd wish for if I had an uninterrupted year of thinking about it and freely talking to a dedicated copy of Eliezer Yudovsky"

Replies from: Cyan, MichaelHoward, gjm, MichaelHoward
comment by Cyan · 2012-09-04T23:33:04.245Z · LW(p) · GW(p)

Uh oh...

Eliezer Yudkowsky:

Eliezer Yud_ov_sky:

comment by MichaelHoward · 2012-09-04T23:23:35.640Z · LW(p) · GW(p)

No sleep, or anything that would interrupt thinking about it, for a year, might lead to an interesting wish.

comment by gjm · 2012-09-04T23:17:09.628Z · LW(p) · GW(p)

Well, it's obvious what happens then: the genie lets a dedicated copy of Eliezer out of a box.

comment by MichaelHoward · 2012-09-04T23:20:45.423Z · LW(p) · GW(p)

an uninterrupted year of thinking about it

No sleep, or anything else that would mean not thinking about it, for a year. That might lead to an interesting wish.

comment by CG_Morton · 2012-09-07T14:10:23.840Z · LW(p) · GW(p)

The genie is, after all, all-powerful, so there are any number of subtle changes it could make that you didn't specify against that would immediately make you, or someone else, wish for the world to be destroyed. If that's the genie's goal, you have no chance. Heck, if it can choose its form it could probably appear as some psycho-linguistic anomaly that hits your retina just right to make you into a person who would wish to end the world.

Really I'm just giving the genie a chance to show me that it's a nice guy. If it's super evil I'm doomed regardless, but this wish test (hopefully) distinguishes between a benevolent genie and one that's going to just be a dick.

Replies from: kilobug
comment by kilobug · 2012-09-07T14:41:12.737Z · LW(p) · GW(p)

If you consider three classes of genies:

  • (A) a genie that's going to "just be a dick" but is not skilled at it;

  • (B) a genie that is benevolent;

  • (C) a genie that's going to "just be a dick" but is very skilled at it.

Your test will (or at least may) tell A apart from (B or C). It won't tell B apart from C.

The "there is no safe wish" rule applies to C. Sure, if your genie is not skilled a being "evil" (having an utility function very different from yours), you can craft a wish that is beyond the genie's ability to twist it. But if the genie is skilled, much more intelligent than you are, with like the ability to spend the equivalent of one million of years of thinking how to twist the wish in one second, he'll find a flaw and use it.

comment by A1987dM (army1987) · 2012-09-05T04:34:24.614Z · LW(p) · GW(p)

The scroll modifies your expectations.

"I wish for a paper containing the exact wording of a wish that, when spoken to you, would meet all my expectations as of September 3, 2012, for a wish granting X."

(Then, if my expectations yesterday did contain self-contradictions, the genie will do... whatever it did if I wished that 2 + 2 = 5.)

comment by kilobug · 2012-09-04T18:28:57.790Z · LW(p) · GW(p)

I'm pretty sure your belief network is not coherent enough for it to be possible to "meet all your expectations"; there must be two expectations somewhere which you hold but which aren't, in fact, compatible. So the wish will fizzle ;)

Replies from: CG_Morton, faul_sname
comment by CG_Morton · 2012-09-07T14:01:34.611Z · LW(p) · GW(p)

A wish is a pretty constrained thing, for some wishes.

If I wish for a pile of gold, my expectations probably constrain lots of externalities like 'Nobody is hurt acquiring the gold, it isn't taken from somewhere else, it is simply generated and deposited at my feet, but not, like, crushing me, or using the molecules of my body as raw material, or really anything that kills me for that matter'. Mostly my expectations are about things that won't happen, not things that will happen that might conflict (that consists only of: the gold will appear before me and will be real and persistent).

If you try this with a wish for world peace, you're likely to get screwed. But I think that's a given no matter your strategy. Don't wish for goals, wish for the tools to achieve your goals - you'll probably be more satisfied with the result to boot.

Replies from: kilobug
comment by kilobug · 2012-09-07T14:29:16.946Z · LW(p) · GW(p)

You're already lowering your claim; it's no longer "for any value of X".

But even so...

"Nobody is hurt acquiring the gold" does that include people hurt because your sudden new gold decrease the market value of gold, so people owning stocks of gold or speculating on an increase of the gold price are hurt ? Sure, you can say "it's insignificant", but how will a genie tell that apart ? Your expectation of what having a sudden supply of gold on the market would do and the reality of how it'll unfold probably don't match. So the genie will have to do corrections for that... which will themselves have side-effects...

Also, you'll probably realize once you have some gold that gold doesn't bring you as much as you thought it would (at least, that's what happens to most lottery winners), so even if you genuinely get the gold, it'll fail to "meet all your expectations" of having gold. Unless the genie also fixes you so that you get as much utility/happiness/... from the gold as you expected to get from it. And as soon as the genie has to start fixing you... game over.

Replies from: CG_Morton
comment by CG_Morton · 2012-09-07T15:21:23.794Z · LW(p) · GW(p)

I simplify here because a lot of people think I will have contradictory expectations for a more complex event.

But I think you're being even more picky here. Do I -expect- that increasing the amount of gold in the world will slightly affect the market value? Yes. But I haven't wished anything related to that; my wish is -only- about some gold appearing in front of me.

Having the genie magically change how much utility I get from the gold is an even more ridiculous extension. If I wish for gold, why the heck would the genie feel it was his job to change my mental state to make me like gold more?

Possibly we just think very differently, and your 'expectation' of what would happen when gold appears also includes everything you would do with that gold later, despite, among many, many things, not even knowing -when- you would speak the wish to get the gold, or what form it would appear in. And you even have in mind some specific level of happiness that you 'expect' to get from it. If so, you're right, this trick will not work for you.

Replies from: kilobug
comment by kilobug · 2012-09-07T15:36:40.790Z · LW(p) · GW(p)

If you wish for gold, it's because you have expectations about what you'll do with that gold. Maybe fuzzy ones, but if you didn't have any, you wouldn't wish for gold. So you can't dissociate the gold from its use when what you're talking about is "expectations".

Otherwise, solutions like "the world is changed so that the precious metal is lead, and gold has low value, but all the rest is the same" would work. And that wouldn't meet your "expectations" about the wish at all.

comment by faul_sname · 2012-09-06T00:36:36.788Z · LW(p) · GW(p)

An altogether fairly harmless outcome.

comment by DanielLC · 2012-09-06T07:49:54.530Z · LW(p) · GW(p)

Couldn't you just wish that all your expectations for a wish granting X were granted, and take out the second step?

comment by TheOtherDave · 2012-09-04T17:30:49.907Z · LW(p) · GW(p)

This presumes, of course, that my expectations for a wish granting X, for some value of X, are such that having a wish granted that meets them is profitable.

comment by A1987dM (army1987) · 2012-09-29T10:03:14.420Z · LW(p) · GW(p)

gravity does not need policemen to make things fall!

-- Iain McKay et al., An Anarchist FAQ, Sec. F.2.1

comment by NancyLebovitz · 2012-09-24T14:35:06.813Z · LW(p) · GW(p)

“You define yourself by what offends you. You define yourself by what outrages you.”

Salman Rushdie, explaining identity politics

Replies from: chaosmosis
comment by chaosmosis · 2012-09-24T17:20:16.890Z · LW(p) · GW(p)

I think "identity politics" is a term of art which covers things other than that which aren't bad, like minority struggles.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-09-24T19:09:57.095Z · LW(p) · GW(p)

You've got a point, and it's one that gets into hard issues. It can be quite hard for some people to decide whether they're being unfairly mistreated and to act on it, and the people for whom the decision is easy aren't necessarily sensible. Emotions are not a reliable tool for telling whether acting on a feeling of being unfairly mistreated makes sense.

How do you tell to what extent a particular instance of people feeling outraged is just them getting worked up for the fun of it over something they should endure, and to what extent they are building up enough allies and emotional energy to deal with a problem which (by utilitarian standards?) needs to be dealt with?

Replies from: chaosmosis
comment by chaosmosis · 2012-09-24T22:24:59.673Z · LW(p) · GW(p)

I don't know how to tell legitimate movements from illegitimate ones, but the term of art "identity politics" refers to both. ID politics is a specific kind of political advocacy, and there are both good ID politics arguments and bad ones. You'd probably just have to investigate the claims they're making on a case by case basis.

But I wasn't trying to interrogate whether defining yourself by outrage can be good in some instances; I was trying to point out that the term "ID politics" refers to things outside of defining yourself in relation to outrage. Maybe I just misinterpreted what you were saying, but I thought your comment unintentionally hinted that you were unaware the phrase is a specific term of art. There are many types of identity politics that aren't about outrage or opposition.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-09-24T23:45:16.182Z · LW(p) · GW(p)

You're quite right, I didn't know about it as a term of art.

I suppose I've mostly heard about the outrage variety of identity politics-- it tends to be more conspicuous.

comment by shminux · 2012-09-19T04:52:15.676Z · LW(p) · GW(p)

More from Scott Adams:

It turns out that the historical data is more like a Rorschach test. One economist can look at the data and see a bunny rabbit while another sees a giraffe. You and I haven't studied the raw data ourselves, and we probably aren't qualified anyway, so we are forced to make our decisions based on the credibility of economists. And seriously, who has less credibility than economists? Chiropractors and astrologists come close.

comment by allandong · 2012-09-14T19:30:46.904Z · LW(p) · GW(p)

Warning: Your mileage may vary.

"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man." -George Bernard Shaw

Replies from: RobinZ
comment by RobinZ · 2012-09-15T03:25:12.132Z · LW(p) · GW(p)

Sadly, duplicate.

comment by mfb · 2012-09-12T19:35:22.188Z · LW(p) · GW(p)

All the world's major religions, with their emphasis on love, compassion, patience, tolerance, and forgiveness, can and do promote inner values. But the reality of the world today is that grounding ethics in religion is no longer adequate. This is why I am increasingly convinced that the time has come to find a way of thinking about spirituality and ethics beyond religion altogether.

Tenzin Gyatso, 14th Dalai Lama

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-09-12T19:52:24.770Z · LW(p) · GW(p)

That's intriguing, but it also sounds like a case of non-apples.

Replies from: mfb
comment by mfb · 2012-09-15T14:09:41.718Z · LW(p) · GW(p)

Well, it is a necessary step to find other fruits.

comment by alex_zag_al · 2012-09-05T03:21:11.349Z · LW(p) · GW(p)

For authors are ordinarily so disposed that whenever their heedless credulity has led them to a decision on some controverted opinion, they always try to bring us over to the same side, with the subtlest arguments; if on the other hand they have been fortunate enough to discover something certain and evident, they never set it forth without wrapping it up in all sorts of complications. (I suppose they are afraid that a simple account may lessen the importance they gain by the discovery; or perhaps they begrudge us the plain truth.)

Descartes, in Rules for the Direction of the Mind

Replies from: alex_zag_al
comment by alex_zag_al · 2012-09-05T03:21:50.007Z · LW(p) · GW(p)

A related Sherlock Holmes quote:

“Beyond the obvious facts that he has at some time done manual labour, that he takes snuff, that he is a Freemason, that he has been in China, and that he has done a considerable amount of writing lately, I can deduce nothing else.”

Mr. Jabez Wilson started up in his chair, with his forefinger upon the paper, but his eyes upon my companion.

“How, in the name of good-fortune, did you know all that, Mr. Holmes?” he asked. “How did you know, for example, that I did manual labour? It’s as true as gospel, for I began as a ship’s carpenter.”

“Your hands, my dear sir. Your right hand is quite a size larger than your left. You have worked with it, and the muscles are more developed.”

“Well, the snuff, then, and the Freemasonry?”

“I won’t insult your intelligence by telling you how I read that, especially as, rather against the strict rules of your order, you use an arc-and-compass breastpin.”

“Ah, of course, I forgot that. But the writing?”

“What else can be indicated by that right cuff so very shiny for five inches, and the left one with the smooth patch near the elbow where you rest it upon the desk?”

“Well, but China?”

“The fish that you have tattooed immediately above your right wrist could only have been done in China. I have made a small study of tattoo marks and have even contributed to the literature of the subject. That trick of staining the fishes’ scales of a delicate pink is quite peculiar to China. When, in addition, I see a Chinese coin hanging from your watch-chain, the matter becomes even more simple.”

Mr. Jabez Wilson laughed heavily. “Well, I never!” said he. “I thought at first that you had done something clever, but I see that there was nothing in it after all.”

“I begin to think, Watson,” said Holmes, “that I make a mistake in explaining. ‘Omne ignotum pro magnifico,’ you know, and my poor little reputation, such as it is, will suffer shipwreck if I am so candid."

That's from The Red-Headed League.

comment by roland · 2012-09-04T17:50:07.562Z · LW(p) · GW(p)

A common way for very smart people to be stupid is to think they can think their way out of being apes with pretensions. However, there is no hack that transcends being human. Playing a "let's pretend" game otherwise doesn't mean you win all arguments, or any. Even with the best intentions and knowledge about biases and rational thinking, you won't transcend and avoid the pitfalls of having a brain designed, in the words of Science of Discworld, to shout at monkeys in the next tree. This doesn't mean you shouldn't give it a damn good try and LessWrong gives it a better shot than most, but remember that you, yes you, are an idiot. -- RationalWiki

Replies from: Document
comment by Document · 2012-09-09T03:26:12.550Z · LW(p) · GW(p)

A common way for very smart people to be stupid is to think they can think their way out of being apes with pretensions.

I've been collecting examples of this (or something more general but similar) under the name "brain-as-rock fallacy", though there's probably a better and less ambiguous name.

Replies from: roland
comment by roland · 2012-09-12T17:10:40.770Z · LW(p) · GW(p)

I don't get what you are hinting at with "brain-as-rock"; could you please explain?

Replies from: Document
comment by Document · 2012-09-13T03:22:19.277Z · LW(p) · GW(p)

"Brain as ideal decision-making engine unaffected by mere external or physical facts."

comment by Ezekiel · 2012-09-02T12:06:06.949Z · LW(p) · GW(p)

Open question: Do you care about what (your current brain predicts) your transhuman self would want?

Replies from: Eliezer_Yudkowsky, fiddlemath
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-03T05:22:24.713Z · LW(p) · GW(p)

If you don't, you're really going to regret it in a million years.

Replies from: wedrifid, Ezekiel
comment by wedrifid · 2012-09-03T07:48:10.208Z · LW(p) · GW(p)

If you don't, you're really going to regret it in a million years.

I'm rather skeptical about that, even conditioning on Ezekiel being around to care. I expect that the difference between him having his current preferences and his current preferences+more caring about future preferences will not result in a significant difference in the outcome the future Ezekiel will experience.

comment by Ezekiel · 2012-09-03T05:50:51.494Z · LW(p) · GW(p)

The chance of human augmentation reaching that level within my lifespan (or even within my someone's-looking-after-my-frozen-brain-span) is, by my estimate, vanishingly low. But if you're so sure, could I borrow money from you and pay you back some ludicrously high amount in a million years' time?

More seriously: Seeing as my current brain finds regret unpleasant, that's something that reduces to my current terminal values anyway. I do consider transhuman-me close enough to current-me that I want it to be happy. But where their terminal values actually differ, I'm not so sure - even if I knew I were going to undergo augmentation.

comment by fiddlemath · 2012-09-02T22:48:28.746Z · LW(p) · GW(p)

Yes, I think so. It surely depends on exactly how I extrapolate to my "transhuman self," but I suspect that its goals will be like my own goals, writ larger.

comment by Bruno_Coelho · 2012-09-03T00:15:45.631Z · LW(p) · GW(p)

There is not a man living whom it would so little become to speak from memory as myself, for I have scarcely any at all, and do not think that the world has another so marvellously treacherous as mine.

-- Montaigne

comment by katydee · 2012-09-02T09:11:35.130Z · LW(p) · GW(p)

It's not easy to learn a new language. We are all used to speaking in a vague verbal language when expressing degrees of belief. In daily life, this language serves us quite well and the damage caused by its ambiguity is minor, but for important decisions it is helpful to use numbers to express degrees of belief. It may be more difficult to elicit numbers, but it is much more efficient. We understand each other better, numerical expressions are more sensitive to small differences in our feelings, and in the end, our decision processes will be better.

From "An Elementary Approach to Thinking Under Uncertainty," by Ruth Beyth-Marom, Shlomith Dekel, Ruth Gombo, & Moshe Shaked.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-09-02T19:12:41.692Z · LW(p) · GW(p)

Or not.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-09-03T09:41:21.913Z · LW(p) · GW(p)

Arguably, assigning a particular floating point number between 0.0 and 1.0 to represent subjective degrees of belief is a specialized skill and it could take years of practice in order to become fluent in numerical-probability-speak.* Another possibility is that it merely adds a kind of pseudo-precision without any benefit over natural language.

In any case, it seems to be an empirical question and so should be answered with empirical data. I guess we won't really know until we have a good-sized number of people using things such as PredictionBook for extended periods of time. I'll keep you posted.

*There do exist rigorously defined verbal probabilities, but as far as I know they haven't been used much since the Late Middle Ages/Early Modern Period.
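
As a toy illustration of that empirical question, here is a minimal sketch (the forecast data is invented, and this is not PredictionBook's actual interface) of how a record of numerical forecasts could be scored once outcomes are known, using the Brier score:

```python
# Score invented (probability, outcome) pairs with the Brier score.
# Lower is better: 0.0 is perfect, and always answering 0.5 scores 0.25.
def brier_score(forecasts):
    """Mean squared difference between stated probability and outcome."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# (probability assigned to the event, 1 if it happened, 0 if it didn't)
forecasts = [(0.9, 1), (0.7, 1), (0.6, 0), (0.2, 0), (0.95, 1)]
print(f"Brier score: {brier_score(forecasts):.3f}")
```

A long enough record of this kind, kept by both numerical and verbal forecasters, is the sort of data that could settle the question.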

Replies from: gwern
comment by gwern · 2012-10-01T00:53:51.831Z · LW(p) · GW(p)

I'd like to see more on those verbal probabilities, having started to use my own since few satisfactory versions exist.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-10-02T11:23:40.420Z · LW(p) · GW(p)

Can you read Latin? If not, then you might want to read The Science of Conjecture: Evidence and Probability before Pascal by James Franklin for an overview of the tradition I was referring to.

Replies from: gwern
comment by gwern · 2012-10-02T14:23:46.515Z · LW(p) · GW(p)

I'll give it a look.

comment by chaosmosis · 2012-09-24T17:23:01.857Z · LW(p) · GW(p)

When one considers how ready are the forces of young men for discharge, one does not wonder at seeing them decide so uncritically and with so little selection for this or that cause: that which attracts them is the sight of eagerness for a cause, as it were the sight of the burning match not the cause itself. The more ingenious seducers on that account operate by holding out the prospect of an explosion to such persons, and do not urge their cause by means of reasons; these powder-barrels are not won over by means of reasons!

Nietzsche, The Gay Science

comment by lukeprog · 2012-09-09T01:29:05.587Z · LW(p) · GW(p)

[oops; this was a repeat]

Replies from: Alejandro1
comment by [deleted] · 2012-09-02T18:19:19.202Z · LW(p) · GW(p)

.

Replies from: simplicio
comment by simplicio · 2012-09-02T19:32:28.310Z · LW(p) · GW(p)

Can you elaborate on what this is getting at?

Replies from: None, RomanDavis
comment by [deleted] · 2012-09-02T19:53:32.748Z · LW(p) · GW(p)

.

comment by RomanDavis · 2012-09-02T19:47:40.058Z · LW(p) · GW(p)

You shouldn't be deceived by the use of the word formal as an applause light [? · GW].

comment by jsbennett86 · 2013-02-18T10:37:23.405Z · LW(p) · GW(p)

The best way to have a good idea is to have lots of ideas.

Linus Pauling

Edit: another one captured by an old thread!

Replies from: jsbennett86
comment by jsbennett86 · 2013-02-18T10:38:47.613Z · LW(p) · GW(p)

From the alt-text in the above-linked comic:

Corollary: The most prolific people in the world suck 99% of the time.

comment by chaosmosis · 2012-11-01T23:10:27.870Z · LW(p) · GW(p)

We do not belong to those who have ideas only among books, when stimulated by books. It is our habit to think outdoors — walking, leaping, climbing, dancing, preferably on lonely mountains or near the sea where even the trails become thoughtful. Our first questions about the value of a book, of a human being, or a musical composition are: Can they walk? Even more, can they dance?

Nietzsche, The Gay Science

comment by [deleted] · 2012-09-16T08:41:06.632Z · LW(p) · GW(p)

From my purist position, everything scientists say, qua scientists, can only be true or false or somewhere in between. No other criteria besides the truth should matter or be applied in evaluating scientific theories or conclusions. They cannot be “racist” or “sexist” or “reactionary” or “offensive” or any other adjective. Even if they are labeled as such, it doesn’t matter. Calling scientific theories “offensive” is like calling them “obese”; it just doesn’t make sense. Many of my own scientific theories and conclusions are deeply offensive to me, but I suspect they are at least partially true.

Once scientists begin to worry about anything other than the truth and ask themselves “Might this conclusion or finding be potentially offensive to someone?”, then self-censorship sets in, and they become tempted to shade the truth. What if a scientific conclusion is both offensive and true? What is a scientist to do then? I believe that many scientific truths are highly offensive to most of us, but I also believe that scientists must pursue them at any cost.

--Satoshi Kanazawa

Replies from: MixedNuts
comment by MixedNuts · 2012-09-16T09:14:20.750Z · LW(p) · GW(p)

That would be much more convincing coming from literally anyone other than Kanazawa. It takes very little charity to interpret his critics as saying, not "Your theories are inherently racist" but "Your theories are only some of many compatible with your findings; you are privileging them because you are biased in favor of hypotheses that postulate certain races naturally do worse than others".

I don't know what to learn from the quote. It's literally true, but it's also clearly unhelpful, since Kanazawa writes this while following non-truth-seeking algorithms. Maybe the moral is "If someone calls you a mean name, address the content of the criticism and not whether the mean name applies", or maybe "Don't be a giant flaming hypocrite".

Replies from: None
comment by [deleted] · 2012-09-16T09:48:45.023Z · LW(p) · GW(p)

That would be much more convincing coming from literally anyone other than Kanazawa.

He isn't a great scientist in my mind since he seems to often just lazily reverse stupidity, but it was a good quote.

comment by juliawise · 2012-09-11T18:40:51.792Z · LW(p) · GW(p)

This is my home, the country where my heart is;

Here are my hopes, my dreams, my sacred shrine.

But other hearts in other lands are beating,

With hopes and dreams as true and high as mine.

My country’s skies are bluer than the ocean,

And sunlight beams on cloverleaf and pine.

But other lands have sunlight too and clover,

And skies are everywhere as blue as mine.

-Lloyd Stone

comment by NancyLebovitz · 2012-09-05T15:41:51.495Z · LW(p) · GW(p)

How about wishing for enough judgment to make wishes that you won't regret? Additional clause: the genie isn't allowed to degrade your mental capacities below their current level.

comment by J_Taylor · 2012-09-04T03:18:33.561Z · LW(p) · GW(p)

To learn who rules over you, simply find out who you are not allowed to criticize.

  • Attributed to Voltaire

comment by J_Taylor · 2012-09-02T03:43:55.476Z · LW(p) · GW(p)

Not everything that is more difficult is more meritorious.

St. Thomas Aquinas

Edit: Oops, accidentally created a repost.

comment by Matt_Caulfield · 2012-09-02T00:46:45.548Z · LW(p) · GW(p)

I can't tell you what it really is, I can only tell you what it feels like.

  • Eminem, "Love The Way You Lie"

comment by chaosmosis · 2012-09-24T17:23:28.380Z · LW(p) · GW(p)

Whatever its political, pedagogical, cultural content, the plan is always to get some meaning across, to keep the masses within reason; an imperative to produce meaning that takes the form of the constantly repeated imperative to moralise information to better inform, to better socialize, to raise the cultural level of the masses, etc. Nonsense: the masses scandalously resist this imperative of rational communication. They are giving meaning: they want spectacle.

Baudrillard, In the Shadow of Silent Majorities

Replies from: chaosmosis
comment by chaosmosis · 2012-09-27T16:04:05.711Z · LW(p) · GW(p)

I was curious why the Baudrillard comment was downvoted when it expresses the same idea as the Nietzsche comment; it just uses a different style and approaches the problem from a different direction. Ideas, anyone?

Replies from: None, bogus
comment by [deleted] · 2012-09-27T16:34:15.647Z · LW(p) · GW(p)

Priming? Baudrillard is associated with humanities, pomo and academic philosophy; Nietzsche is associated with atheism, contrarianism and the idea of the ubermensch. The comment doesn't seem to be very strongly downvoted; possibly you're just dealing with detractors here (I daresay LW has more fans of the latter than of the former).

Replies from: chaosmosis
comment by chaosmosis · 2012-09-27T16:45:25.842Z · LW(p) · GW(p)

This was roughly my thought as well. I thought there might also have been more substantive differences though and I was curious what those might be. The only thing I could see is that Baudrillard's quote had a tone that's more critical of the masses and the way they do politics, and that Baudrillard's quote could be misread as an injunction to stop trying to make people rational (which it's not).

comment by bogus · 2012-09-27T23:38:28.078Z · LW(p) · GW(p)

Well, I'm not even sure whether Baudrillard's quote is grammatically well-formed, so there's that. Then again, postmodernist texts tend to be imbued with near-poetical and mystical qualities. Much like Zen koans, they're more about exemplifying a particular mind-posture and way of thinking than they are about straightforward argumentation. I think it's unfair to expect LessWrongers to be familiar with such texts.

Replies from: chaosmosis
comment by chaosmosis · 2012-09-28T02:57:16.804Z · LW(p) · GW(p)

Oh. So this quote is difficult to read, then? More difficult than the Nietzsche one? I guess inferential gaps must be coming into play here. I'm having a difficult time trying to not-understand it, trying to empathize with your viewpoint. I'm having a difficult time believing that you couldn't understand the quote, honestly.

I feel like you're generalizing too much about postmodernism. I like lots of it, and don't think that it's mystically oriented. I would say rather that it packs a lot of information into a small amount of words through the clever use of words and through recurring concepts and subtle variations on those concepts.

Postmodernism can be difficult to understand, but I don't think it is in this case, and I think that its complexity is justified. Scientists use obscure terminology, but generally for a good purpose. Some scientists use obscure terminology to hide the flaws in their ideas. I view postmodern criticisms in almost exactly the same way: their complexity can be for both good and bad.

Also, Baudrillard is French. It might not be his fault if there are problems with the translated text.

Replies from: bogus, MixedNuts
comment by bogus · 2012-09-28T03:11:55.334Z · LW(p) · GW(p)

I'm using "mystical" in a rather specialized sense, actually. What I mean is that postmodernist texts seem to eschew straightforward arguments - instead they use rhetorical and poetical patterns in a functional way, to inspire a specific mental stance in the reader. This mental stance might be quite simply described as "emptying the teacup", i.e. questioning and letting go of the "cached thoughts" which comprise one's current understanding of reality and culture. This mental stance happens to be remarkably useful in textual criticism and social science, where one often has to come to terms with (and perhaps reconstruct, at least partially) cultures which are far apart from one's own, so that a "filled cup" would be a significant hindrance.

Oh, and yes, I had quite a bit of trouble with trying to understand the Baudrillard quote, although I did grok the gist of it, and I also got the similarity wrt. the Nietzsche one. But I'd say the grammar is clearer in Nietzsche's quote, and even his rhetoric seems more direct and to the point here.

Replies from: chaosmosis
comment by chaosmosis · 2012-09-28T03:34:38.385Z · LW(p) · GW(p)

Okay, gotcha. Thanks.

comment by MixedNuts · 2012-09-28T09:35:49.844Z · LW(p) · GW(p)

Nope, I'm a native French speaker and my reaction to Baudrillard is "WTF?" and building a Markov Baudrillard quote generator to see if I can tell the difference.
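
For the curious, here's a minimal sketch of the kind of Markov quote generator described above: an order-1, word-level chain trained on a toy corpus. The corpus and all names are illustrative assumptions, not MixedNuts's actual code.

```python
# Toy Markov quote generator: map each word to the words observed to
# follow it, then take a random walk. The corpus is an invented stand-in.
import random
from collections import defaultdict

def build_chain(text):
    """For each word, record every word that follows it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=20):
    """Random-walk the chain from a start word."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = ("the masses scandalously resist this imperative of rational "
          "communication the masses want spectacle the plan is always "
          "to get some meaning across to keep the masses within reason")
print(generate(build_chain(corpus), "the"))
```

Whether a reader can reliably tell such output from the real thing is, of course, exactly the test being proposed.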

Jargon is good. Vaguely defined jargon isn't bad - sometimes all you can do is say "sweet refers to the taste of sugar, if you don't know what that is I can't help you".

But structure shouldn't be completely unclear. Baudrillard has a lot of "X is Y" statements and very few "therefore"s. I can't tell what is a conclusion, what is an argument, what is a definition, or even whether there are anything but conclusions.

I've found some Baudrillard texts that clearly mean things, but they're not very good.

Replies from: chaosmosis
comment by chaosmosis · 2012-09-28T13:04:47.677Z · LW(p) · GW(p)

Can you specify more about what parts of the quote are confusing?

Replies from: MixedNuts
comment by MixedNuts · 2012-09-28T21:38:20.115Z · LW(p) · GW(p)

This one isn't that bad. (For utter, words-don't-work-that-way confusion, see Debord. Or good ol' Hegel.)

Whatever its political, pedagogical, cultural content, the plan is always to get some meaning across,

That bit is straightforward.

to keep the masses within reason;

"The masses" has a standard denotation but various connotations. Freddy Nietzsche talks about enthusiastic young people, which is more specific.

What's "to keep within reason"? What this evokes is talking someone down, preventing outbursts. Applied to the masses, does he mean control - propaganda, opiate of the masses? The context suggests the opposite: to present a logical argument and try to convince audiences with it as the core of communication, more important than ethos and pathos and Cheetos.

an imperative to produce meaning that takes the form of the constantly repeated imperative to moralise information to better inform, to better socialize, to raise the cultural level of the masses, etc.

What?

an imperative to produce meaning that takes the form of the constantly repeated imperative

Okay, "imperative" seems to mean what social justice types can "enforcement by shaming". If you don't talk like a Vulcan, whoever is producing those great media reform plans (pretentious elites?) will shame you.

to moralise information

Okay, so media becomes morally loaded: information good, fluff bad. Much like food is morally loaded: vegetables good, fat bad.

to better inform, to better socialize, to raise the cultural level of the masses, etc.

Examples! Hallelujah, hosanna in excelsis! So the media reformers want to make people better. If you say a thing and hearing it doesn't make listeners better, you're selling junk food.

Nonsense: the masses scandalously resist this imperative of rational communication.

That seems pretty clear too: logical arguments aren't what convinces people. Nietzsche says that too, but in a more specific context: recruiting for a cause.

They are giving meaning:

I assume this means: "the masses decide what they want to take from what they hear, and it's not logical argument, it's"

they want spectacle.

I'll grant that "spectacle" is a totally precise and useful term of art that people clearly define whenever I'm out of earshot. But if he's saying what Fred says, he doesn't need the jargon; it's not a rare concept.

Freddypants is saying "If you want a young, energetic, status-seeking enthusiast to be enthusiastic about your cause, don't bother calmly explaining why your cause is good. Instead, make it look awesome and promise exciting heroics." (Which is what he does in Zarathustra, and it worked on me, but I already agreed.) Baudrillard appears to be saying "If you want to convince people, calm explanations won't work.".

Replies from: chaosmosis
comment by chaosmosis · 2012-09-28T22:18:14.779Z · LW(p) · GW(p)

Okay, thank you.

I agree that Hegel is ridiculously opaque, too.

comment by Delta · 2012-09-05T13:45:55.814Z · LW(p) · GW(p)

“The world is just a word for the things you value around you, right? That’s something I’ve had since I was born. If you tell me to rule such a world, I already rule it.” – Tohsaka Rin (Fate: stay night) on not taking over the world.

I think it is having a small core of things and people you value that keeps you grounded and healthy. Our "Something to Protect" if you like. Without that investment and connection to things that matter it's easy to lose your way.

Replies from: Rhwawn
comment by Rhwawn · 2012-09-05T15:37:56.728Z · LW(p) · GW(p)

No, that's never how I've seen anyone define 'world'. Maybe that quote makes more sense in context.

Replies from: Delta
comment by Delta · 2012-09-05T16:04:25.362Z · LW(p) · GW(p)

The character was just asked whether they would wish to conquer the world if given a wish-granting machine (and said no: they already have what they want and value). The way I understood the quote was that when people talk about ruling the world they really just want to control and protect the things they value around them. It made me think that "the world" isn't really a concept that people can easily grasp in the abstract; they need to look at the smaller scale to give them context.

I think "I want to protect humanity" or "I want to save the world" carry more weight and are easier to follow through on if you come at them from the angle of "I want to protect people like the people around me I love" or "I want to save the place where people like my friends and family live".

Replies from: Rhwawn, Document
comment by Rhwawn · 2012-10-29T22:15:54.415Z · LW(p) · GW(p)

I'm not sure I follow even with that explanation, but I've never really known what to make of the Nasuverse in the first place. ("This is so awesome!" "But also incredibly stupid." "But awesome!" "But stupid. And ad hoc. And ill-thought-out." "Aw, don't be like that, just enjoy the Rule of Cool.")

comment by Document · 2012-09-09T18:48:15.179Z · LW(p) · GW(p)

The character was just asked whether they would wish to conquer the world if given a wish-granting machine (and said no: they already have what they want and value).

I imagine Eliezer would answer something like "No, that would be redundant." (Edit: not to credit Eliezer with inventing the concept.)

comment by [deleted] · 2012-11-14T18:12:46.618Z · LW(p) · GW(p)

Two, you can use this corpus to conduct a very interesting exercise: you can triangulate. This is an essential skill in defensive historiography. If you like UR, you like defensive historiography.

Historiographic triangulation is the art of taking two or more opposing positions from the past, and using hindsight to decide who was right and who was wrong. The simplest way to play the game is to imagine that the opponents in the debate were reanimated in 2008, informed of present conditions, and reunited for a friendly panel discussion. I'm afraid often the only conceivable result is that one side simply surrenders to the other.

--Mencius Moldbug on an experiment that has interesting results

comment by [deleted] · 2012-11-14T18:00:16.353Z · LW(p) · GW(p)

"Pope Paul VI made four predictions about the effects of artificial birth control: it would lower standards of morality, it would make men disrespect women, it would make infidelity more common, and governments would start shoving them down everyone’s throats. That’s four for four. Once again, what the cool people promised did not happen while what the bigots prophesied came to pass."

--Some dude in the comment section of West Hunters

I need to hunt down the source for his claim, so take it with a grain of salt.

comment by aqace · 2012-09-11T20:52:01.573Z · LW(p) · GW(p)

Memories can be vile, repulsive little brutes. Like children, I suppose. haha.

But can we live without them? Memories are what our reason is based upon, if we can't face them, we deny reason itself! Although, why not? We aren't contractually tied down to rationality!

There is no sanity clause!

-The Joker, Batman: The Killing Joke

Replies from: MixedNuts
comment by MixedNuts · 2012-09-11T21:11:53.789Z · LW(p) · GW(p)

Obsoleted by sticky notes.

comment by augustuscaesar · 2012-09-05T03:00:58.107Z · LW(p) · GW(p)

"A mind to which the stern character of an armchair is more immediately apparent than its use or its position in the room, is over-sensitive to expressive forms. It grasps analogies that a riper experience would reject as absurd. It fuses sensa that practical thinking must keep apart. Yet it is just this crazy play of associations, this uncritical fusion of impressions, that exercises the powers of symbolic transformation."

Susanne Langer, Philosophy in a New Key

Replies from: RobinZ
comment by RobinZ · 2012-09-05T03:04:03.933Z · LW(p) · GW(p)

I don't understand this. What are "the powers of symbolic transformation"?

Replies from: augustuscaesar
comment by augustuscaesar · 2012-09-05T03:17:22.612Z · LW(p) · GW(p)

To finish it: "To project feelings into outer objects is the first way of symbolizing, and thus of conceiving those feelings. This activity belongs to about the earliest period of childhood that memory can recover. The conception of 'self,' which is usually thought to mark the beginning of actual memory, may possibly depend on this process of symbolically epitomizing our feelings."

Replies from: RobinZ
comment by RobinZ · 2012-09-05T19:55:59.342Z · LW(p) · GW(p)

I'm afraid this simply leaves me more confused - could you explain what Susanne Langer is getting at in your own words?

Replies from: augustuscaesar
comment by augustuscaesar · 2012-09-06T03:04:24.648Z · LW(p) · GW(p)

The book is from 1942, so is dated in thought and is linguistically frilly.

What she was getting at here, I think, is this: consider a baby learning to speak, learning the word "mama." Most of the occasions on which it comes to speak the word "mama" were, before, purely emotional ones. That is, the mother has a certain emotional valence as an object, and at specific times will have even more emotional significance to the baby. Langer is exploring how language takes over for those feelings and sits on top of them (and goes on to shape, reshape, and limit those feelings). "Mama" comes to represent mother, which, before the word became accessible, was some kind of blind object-feeling-sense-data bundle. She claims earlier that the baby's ~"thoughts" are very synaesthetic, its sense-data muddled, something she takes from the Jamesian view.

"Powers of symbolic transformation" just means the ability to use language, but this she claims is something that takes time. When we come to be skillful masters of language we have long separated the emotional raw feelings from the word-object relationship in itself, though our words are always very much tinged by those earlier feelings (she presents a high brow picture where only the thoroughly educated reach the best forms of these states, i.e. philosophers).

Her point: "Speech is through and through symbolic; and only sometimes signific."

Speech (and a good "mind") is for abstract manipulation of concepts that help us understand the world, and to accomplish this we must painstaking remove emotions and feelings that attach to our words, e.g. the sternness of chairs.

Replies from: RobinZ
comment by RobinZ · 2012-09-06T05:23:42.620Z · LW(p) · GW(p)

It sounds somewhat related to the idea that words are based less on definitions than prototypes - yes, penguins, ostriches, and hummingbirds are all birds taxonomically, but the word "bird" evokes ducks, sparrows, and eagles much more easily than those.

comment by [deleted] · 2012-09-01T21:06:54.513Z · LW(p) · GW(p)

.

Replies from: simplicio
comment by simplicio · 2012-09-04T11:59:59.761Z · LW(p) · GW(p)

He is probably referring to nuclear fission reactors and/or genetically modified crops.

comment by Will_Newsome · 2012-09-01T10:41:41.835Z · LW(p) · GW(p)

The sage who is not an anvil: a conventional sentence and nothing more. By which I mean dead.

— Michael Kirkbride / Vivec, "The Thirty Six Lessons of Vivec", Morrowind.

comment by shminux · 2012-09-04T21:17:49.869Z · LW(p) · GW(p)

I share her [Ayn Rand's] atheism but not her aggressive rejection of all that Christ taught.

Andrew Tobias (Warning: he is a DNC treasurer, so by no means rational in matters of politics, only finance.)

Replies from: None
comment by [deleted] · 2012-09-04T22:04:27.686Z · LW(p) · GW(p)

Well, probably not even rational there.

comment by [deleted] · 2012-11-14T17:57:46.110Z · LW(p) · GW(p)

Over a half century ago, while I was still a child, I recall hearing a number of old people offer the following explanation for the great disasters that had befallen Russia: "Men have forgotten God; that's why all this has happened." Since then I have spent well-nigh 50 years working on the history of our revolution; in the process I have read hundreds of books, collected hundreds of personal testimonies, and have already contributed eight volumes of my own toward the effort of clearing away the rubble left by that upheaval. But if I were asked today to formulate as concisely as possible the main cause of the ruinous revolution that swallowed up some 60 million of our people, I could not put it more accurately than to repeat: "Men have forgotten God; that's why all this has happened."

--Aleksandr Solzhenitsyn

Edit: Oops, old thread.

Replies from: Grif, None
comment by Grif · 2012-11-14T18:20:14.417Z · LW(p) · GW(p)

The potential for abuse of this quote is too high. While it's an example of how even absurd amounts of research can fail to move a religious thought, too many people will fail to get the joke.

comment by [deleted] · 2012-11-14T18:15:52.104Z · LW(p) · GW(p)

i'm not sure what the point is because nobody's going to magically reinvent what you mean by "god"

--"um" on me posting that quote

comment by lukeprog · 2012-09-16T08:48:32.823Z · LW(p) · GW(p)

What you understand, you can command, and that is power enough to walk on the Moon.

Harry Potter, in Harry Potter and the Methods of Rationality by Eliezer Yudkowsky

Replies from: David_Gerard, DanArmak
comment by David_Gerard · 2012-09-16T09:08:01.730Z · LW(p) · GW(p)

That arguably counts as LW/OB.

Replies from: lukeprog, wedrifid
comment by lukeprog · 2012-09-16T09:21:43.165Z · LW(p) · GW(p)

If HPMoR isn't allowed, that should be specified in the rules.

Replies from: David_Gerard
comment by David_Gerard · 2012-09-17T12:12:20.600Z · LW(p) · GW(p)

I mean that it's a nice quote, but I suspect that's the reason for the downvotes.

Replies from: wedrifid
comment by wedrifid · 2012-09-17T16:12:23.074Z · LW(p) · GW(p)

I'm not overly impressed with the quote either. Sometimes you can understand things and still not command them. Sometimes you just lose, and all the understanding you can get will just tell you to go do something else that you can control.

comment by wedrifid · 2012-09-16T10:13:52.315Z · LW(p) · GW(p)

That arguably counts as LW/OB.

It is arguably a lot more affiliated to LW than OB is. (We successfully got OB removed from the no-quote list at some stage. Unfortunately someone reverted it.)

comment by DanArmak · 2012-09-29T19:33:54.853Z · LW(p) · GW(p)

What you understand, you can command, and that is power enough to walk on the Moon.

I understand people. Imperius! People, I command you to build me a moon rocket!

-- Draco

Replies from: faul_sname
comment by faul_sname · 2012-10-01T01:31:20.695Z · LW(p) · GW(p)

...That might actually work, as long as he understands which people to Imperius.

comment by woodside · 2012-09-05T00:37:13.522Z · LW(p) · GW(p)

If you are not lost, then you're at a place someone has already found... What's the use of being in mapped territory?

  • Junot Diaz

A different perspective on a phrase our community holds near and dear.

Replies from: gwern, Document
comment by gwern · 2012-09-09T16:52:05.815Z · LW(p) · GW(p)

If that's true, then there could be no use in finding a place because you would then follow the quote's advice and never return again!

Per Bohr's advice we can identify this as a meaningless 'profound truth' by reversing it:

If you are lost, then you are at a place no one has found before... What's the use of being in unmapped territory?

Replies from: woodside
comment by woodside · 2012-09-10T21:03:34.684Z · LW(p) · GW(p)

I took the quote as a call to explore. Don't just be satisfied with learning things other people have figured out, try to creatively venture into the unknown yourself.

Replies from: gwern
comment by Document · 2012-09-09T03:30:33.840Z · LW(p) · GW(p)

If you are on Earth, then you're at a place someone has already found.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-09T10:09:55.368Z · LW(p) · GW(p)

I take Junot Diaz to mean “place” metaphorically, not geographically.

comment by Aurora · 2012-09-09T01:40:49.756Z · LW(p) · GW(p)

Love is experienceable, not intelligible.

  • Anonymous.

comment by Jayson_Virissimo · 2012-09-03T10:45:55.158Z · LW(p) · GW(p)

If you can't criticise, you can't optimise.

-Harry James Potter-Evans-Verres, Harry Potter and the Methods of Rationality

Replies from: tgb, wedrifid
comment by tgb · 2012-09-03T12:28:48.569Z · LW(p) · GW(p)

Downvoted since HPMoR is or should be included in the "Do not quote comments/posts on LW/OB" rule.

Replies from: lukeprog
comment by lukeprog · 2012-09-04T10:26:55.139Z · LW(p) · GW(p)

But it's not.

Kinda mean to downvote somebody for breaking a rule that doesn't exist (yet), don't you think?

Replies from: tgb, wedrifid
comment by tgb · 2012-09-04T15:24:45.664Z · LW(p) · GW(p)

No meanness intended, and it's only one downvote. If we let HPMoR quotes in here, we'd spend this whole thread reading HPMoR, which isn't particularly helpful.

I was also under the impression that it had been at some previous point specifically mentioned in the rules. And considering HPMoR as a part of LW is not unreasonable: you yourself did that here.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-09-04T23:06:12.669Z · LW(p) · GW(p)

Agreed. Retracted.

comment by wedrifid · 2012-09-04T11:12:29.615Z · LW(p) · GW(p)

Kinda mean to downvote somebody for breaking a rule that doesn't exist (yet), don't you think?

Not especially. It's using downvotes exactly as intended. It would be fine even if there was an explicit endorsement of HPMoR quotes as being sufficiently external to qualify for the thread, if tgb just happened personally not to like them.

Incidentally, I down-voted a quote of Eliezer from TDT a week ago and just didn't think it was worth making a comment about it. Really, Eliezer making one of his essays slightly longer than the others and publishing it in a very slightly more formal form doesn't change the nature or role significantly. (Mind you, I would likely have ignored it or upvoted if the specific quote was sufficiently impressive to make up for it.)

EDIT: It could actually be mean, denotatively, but if so it lacks the features that make some other mean things unacceptable; it remains an acceptable and sometimes virtuous thing to do.

comment by wedrifid · 2012-09-03T12:11:35.392Z · LW(p) · GW(p)

False. Criticism and optimization overlap highly in the nature of the reasoning involved, but it is possible to construct an optimizer that is not capable of criticism. If you really want one. I'm not sure what exactly Harry's intended human-level practical point is---some applications hold even though the claim is technically false, but some may not.

Replies from: bsm
comment by bsm · 2012-09-03T14:18:57.193Z · LW(p) · GW(p)

I believe his human-level point is that if you are unable to find a problem with a current system, you will believe optimising it is impossible.

comment by Will_Newsome · 2012-09-01T10:19:20.518Z · LW(p) · GW(p)

Nerevar said, 'I am afraid to become slipshod in my thinking.'
Vivec said, 'Reach heaven by violence then.'

— Michael Kirkbride / Vivec, "The Thirty Six Lessons of Vivec", Morrowind.

Replies from: Vaniver
comment by Vaniver · 2012-09-01T17:18:20.020Z · LW(p) · GW(p)

Note for the unfamiliar: this exchange occurs in the Temple of False Thinking.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-09-01T17:21:07.931Z · LW(p) · GW(p)

You know too much.

Replies from: Vaniver
comment by Vaniver · 2012-09-01T17:33:11.585Z · LW(p) · GW(p)

Now I am the mightiest of your children?

comment by Will_Newsome · 2012-09-01T10:22:22.856Z · LW(p) · GW(p)

The last time I heard his voice, showing the slightest sign of impatience, I learned to control myself and submit to the will of others. Afterwards, I dared to take on the sacred fire and realized there was no equilibrium with the ET'ADA. They were liars, lost roots, and the most I can do is to be an interpreter into the rational. Even that fails the needs of the people. I sit on the mercy seat and pass judgment, the waking state, and the phase aspect of the innate urge. Only here can I doubt, in this book, written in water, broadened to include evil.
Then Vivec threw his ink on this passage to cover it up (for the lay reader) and wrote instead:
Find me in the blackened paper, unarmored, in final scenery. Truth is like my husband: instructed to smash, filled with procedure and noise, hammering, weighty, heaviness made schematic, lessons learned only by a mace. Let those that hear me then be buffeted, and let some die in the ash from the striking. Let those that find him find him murdered by illumination, pummeled like a traitorous house, because, if an hour is golden, then immortal I am a secret code. I am the partaker of the Doom Drum, chosen of all those that dwell in the middle world to wear this crown, which reverberates with truth, and I am the mangling messiah.

— Michael Kirkbride / Vivec, "The Thirty Six Lessons of Vivec", Morrowind.

Replies from: CronoDAS
comment by CronoDAS · 2012-09-02T23:51:46.092Z · LW(p) · GW(p)

I don't get it.

comment by roland · 2012-09-04T00:40:37.020Z · LW(p) · GW(p)

Thinking is an act, feeling is a fact. Don't bother comprehending, living is far beyond any comprehension...

-- Clarice Lispector.

Original in Brazilian Portuguese:

Pensar é um ato, sentir é um fato. Não se preocupe em entender, viver ultrapassa qualquer entendimento...

Before you downvote: I think this has a lot to do with rationality, because we tend to be caught up in thinking and the models of the world we create in our minds; actually, science is about this. But those models have limitations and are often wrong, as the history of science shows time and again.

EDIT: added the originals. Fixed typo.

Replies from: VKS
comment by VKS · 2012-09-04T01:44:09.062Z · LW(p) · GW(p)

... we tend to be caught up in thinking and the models of the world we create in our minds; actually, science is about this. But those models have limitations and are often wrong, as the history of science shows time and again.

Now that you have noticed this, what are you going to do with it?

Replies from: roland
comment by roland · 2012-10-01T18:56:55.046Z · LW(p) · GW(p)

Realize that your mental models might be wrong and don't put too much weight on them; instead, put more weight on your feelings.

Replies from: TimS, VKS
comment by TimS · 2012-10-01T19:02:41.831Z · LW(p) · GW(p)

Trying to make better models does not appeal to you?

comment by VKS · 2012-10-02T18:42:03.964Z · LW(p) · GW(p)

Do you have good evidence that your feelings are more often correct than your models?

Replies from: roland
comment by roland · 2012-10-29T19:52:30.002Z · LW(p) · GW(p)

Feelings honed by millions of years of evolution.

Replies from: VKS
comment by VKS · 2012-11-01T22:53:27.413Z · LW(p) · GW(p)

To what extent can you expect evolution to have prepared you for your day-to-day experience?

Replies from: roland
comment by roland · 2012-11-02T06:21:03.337Z · LW(p) · GW(p)

Is this a serious question? While the modern world might have changed in a lot of aspects, one big factor remains constant: people and social interactions. What use is it to choose the logically correct decision if it still makes us feel miserable?

Replies from: Richard_Kennaway, VKS
comment by Richard_Kennaway · 2012-11-02T14:45:39.044Z · LW(p) · GW(p)

What use is it to feel miserable despite having made a correct decision?

Replies from: roland
comment by roland · 2012-11-02T16:20:43.414Z · LW(p) · GW(p)

It might be hard to change our feelings. Should a correct decision make us feel miserable? Maybe there is a better decision that also makes us feel good? Also relevant, see my answer here: http://lesswrong.com/lw/ece/rationality_quotes_september_2012/7qmo

comment by VKS · 2012-11-02T14:16:43.150Z · LW(p) · GW(p)

There are situations where your feelings are more reliable than your models. Are there situations where it is the other way around? How do you decide which to use?

Replies from: roland
comment by roland · 2012-11-02T16:18:19.619Z · LW(p) · GW(p)

I don't intended the original quote to be an admonition against all use of models/reasoning. My point was more or less along the lines of "listen to your feelings, they might be telling you something important. Don't disregard them just because you have some neat model, your model could be wrong."

Replies from: VKS
comment by VKS · 2012-11-02T17:38:10.511Z · LW(p) · GW(p)

I agree, but that does not answer the question. How do you decide which to use? What do you need in order to decide?

Replies from: roland, chaosmosis
comment by roland · 2012-11-02T18:23:31.876Z · LW(p) · GW(p)

This boils down to: when do you know that your models are correct? And the answer is, you almost never know, unless it is already settled by science, and even then there is room for error and further correction down the road (years away). But you need to make decisions now, every day.

Replies from: VKS
comment by VKS · 2012-11-02T20:49:52.906Z · LW(p) · GW(p)

Almost. It boils down to: when do you know that your models are correct and when do you know your feelings are correct. Well, how do you settle that question?

Replies from: roland
comment by roland · 2012-11-02T21:03:00.035Z · LW(p) · GW(p)

I don't know, but I have the impression that you have an answer in mind, care to share?

Replies from: VKS
comment by VKS · 2012-11-02T21:14:30.941Z · LW(p) · GW(p)

chaosmosis said it already :)

You don't have to treat your feelings and your models differently. Just use whichever one the evidence suggests is more likely to be correct in whichever situation you find yourself in. See?

Replies from: roland
comment by roland · 2012-11-02T21:29:41.503Z · LW(p) · GW(p)

Sounds good, but you still have to decide which one is more likely to be correct, so it doesn't seem to solve the fundamental question at hand.

Replies from: VKS
comment by VKS · 2012-11-02T21:42:45.146Z · LW(p) · GW(p)

Unfortunately, I can't help you with that, as you have your own models and feelings. You'll have to collect data on your own about which works better in what situation. You can probably start by going over past experiences to see if there are any apparent trends, and then just be mindful of any opportunity you might have to confirm or disconfirm any hypothesis you might generate. Watch out for unfalsifiables!

comment by chaosmosis · 2012-11-02T18:30:26.057Z · LW(p) · GW(p)

Empiricism and logic? Just treat your emotions like a model, and judge them like you would any other. Even though you can't see the inside of your emotions, neither can you see the inside of the thought processes that produce the model. I don't see why there would be any difference between the two.

Replies from: VKS
comment by VKS · 2012-11-02T20:50:41.695Z · LW(p) · GW(p)

yes

comment by kateblu · 2012-09-02T01:19:19.957Z · LW(p) · GW(p)

"If we define a religion to be a system of thought that contains unprovable statements, so it contains an element of faith, then Gödel has taught us that not only is mathematics a religion but it is the only religion able to prove itself to be one."

John Barrow, Pi in the Sky, 1992

Replies from: army1987, Will_Newsome
comment by A1987dM (army1987) · 2012-09-02T22:33:13.172Z · LW(p) · GW(p)

You might want to remove the space at the beginning of the line. It's distracting to have to use the scrollbar to read the full quote.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2012-09-03T04:41:58.964Z · LW(p) · GW(p)

also, how is math the only system able to prove that it has unprovable truths? that was missing from my copy of Gödel's theorems.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-03T08:05:47.329Z · LW(p) · GW(p)

Did you mean to reply to kateblu rather than me? (Or did you want to evade the karma fee?)

Replies from: bbleeker, Jonathan_Graehl
comment by Sabiola (bbleeker) · 2012-09-05T23:28:51.719Z · LW(p) · GW(p)

I don't understand - what fee? Would Jonathan_Graehl get more downvotes if he replied directly to kateblu? Why?

Replies from: TimS
comment by TimS · 2012-09-05T23:58:58.696Z · LW(p) · GW(p)

There's a new "feature" that replies to sufficiently negative karma posts instantly lose 5 karma.

comment by Jonathan_Graehl · 2012-09-03T23:20:09.328Z · LW(p) · GW(p)

i was hoping you'd let my cowardice pass unnoted.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-09-04T06:48:47.696Z · LW(p) · GW(p)

Wait actual humans are afraid of losing karma?

...i dont even

Replies from: wedrifid, NancyLebovitz
comment by wedrifid · 2012-09-04T11:17:16.879Z · LW(p) · GW(p)

Wait actual humans are afraid of losing karma?

Actual humans are afraid of being considered obnoxious, stupid or antisocial. Karma loss is just an indication that perception may be heading in that direction.

Replies from: Luke_A_Somers, Will_Newsome
comment by Luke_A_Somers · 2012-09-05T15:02:20.852Z · LW(p) · GW(p)

Attempts to avoid karma loss by procedural hacks are a stronger indication...

Replies from: Kindly, wedrifid
comment by Kindly · 2012-09-05T15:26:22.503Z · LW(p) · GW(p)

This is how lost purposes form. Once you've figured out that karma loss is a sign of something bad, you start avoiding it even when it's not a sign of that bad thing.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-09-07T08:45:39.754Z · LW(p) · GW(p)

Maybe wedrifid is taking that into account and renormalizing. It's hard to tell.

comment by wedrifid · 2012-09-05T15:25:26.666Z · LW(p) · GW(p)

Attempts to avoid karma loss by procedural hacks are a stronger indication...

Of something different.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-09-06T13:43:20.083Z · LW(p) · GW(p)

Of people assigning excessive weight to very small changes in signaling?

comment by Will_Newsome · 2012-09-04T11:26:32.790Z · LW(p) · GW(p)

#noshitsherlock

comment by NancyLebovitz · 2012-09-05T15:44:46.294Z · LW(p) · GW(p)

Yes.

I don't think it's any weirder than viewing losing karma as an entertaining game.

comment by Will_Newsome · 2012-09-02T01:29:20.984Z · LW(p) · GW(p)

Only on LessWrong would a statement with that much insight be downvoted because it could be taken to signal something vaguely positive about religion.

Replies from: VKS, Kindly, Nominull
comment by VKS · 2012-09-04T11:09:52.381Z · LW(p) · GW(p)

The quote, phrased in a less tortuous way, says that mathematics contains true statements that cannot be proven, and is unique in being able to demonstrate that it does. So far, so good, although the uniqueness part can be debated.

But the quote also states that mathematics therefore contains an element of faith, that is, that there exist statements that have to be assumed to be true. This is not the case.

Mathematics only compels you to believe that certain things follow from certain axioms. That is all. While these axioms sometimes imply that there exist statements whose truth will never be determined, they do not imply that we should then assume that such-and-such a statement is true or false.

That is why it should be downvoted. Because not knowing something doesn't mean having to pretend that you do.
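
For reference, the theorem VKS is leaning on, in its standard textbook form (a general statement, not anything specific to this thread):

```latex
% A standard statement of Gödel's first incompleteness theorem: if T is
% a consistent, effectively axiomatized theory interpreting enough
% arithmetic, then there is a sentence G_T with
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T .
\]
% (For the second half, Gödel needed omega-consistency; Rosser's trick
% reduces this to plain consistency.) Note the conditional form: the
% theorem says what T can and cannot prove from its axioms; it does not
% direct us to accept G_T, or anything else, on faith.
```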

comment by Kindly · 2012-09-02T03:56:41.042Z · LW(p) · GW(p)

I was tempted to downvote it because it could be taken to be negative about math.

Replies from: fubarobfusco, Will_Newsome
comment by fubarobfusco · 2012-09-02T04:22:24.769Z · LW(p) · GW(p)

It sounds to me like a goofy language game, akin to "How many legs does a dog have if we call a tail a leg?"

Replies from: Richard_Kennaway, kateblu
comment by Richard_Kennaway · 2012-09-03T12:04:43.782Z · LW(p) · GW(p)

It sounds to me like a goofy language game, akin to "How many legs does a dog have if we call a tail a leg?"

That conundrum, to which the correct answer is "four", is not a goofy language game. It is making the point that you cannot change the truth of a proposition by changing the meanings of the words in it. When you change the meanings of the words, you are creating a different proposition. It looks like the original one, because it consists of the same string of words, but it is not. Its truth need have nothing to do with the truth of the original one.

Would you still be able to see these words if we called black white?

Replies from: J_Taylor
comment by J_Taylor · 2012-09-04T00:04:26.933Z · LW(p) · GW(p)

I always hated that question due to its ambiguity. Those who state the answer is four legs seem to interpret the question as asking: "Labeling our current language as Language-A, and mentioning a different language Language-B in which 'leg' also refers to tails, and keeping in mind that we do not speak Language-B, how many legs does a dog have?"

However, for some reason I first interpreted the question as asking: "Labeling our current language as Language-A, and mentioning a different language Language-B in which 'leg' also refers to tails, what is the answer to 'how many legs does a dog have?' in Language-B?"

I apologize for both the brevity and ambiguity of these paraphrases. However, I doubt that I am the only person who interprets the question along these lines.
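
The two readings can be made concrete with a toy sketch in Python; the dictionaries, counts, and the two "languages" are illustrative inventions, not anything from the comment:

```python
# Toy model of the two readings above. A "language" maps the word "leg"
# to the set of appendage kinds it denotes; everything here is invented
# for illustration.
dog = {"legs": 4, "tail": 1}  # appendage kind -> how many a dog has

language_a = {"leg": {"legs"}}          # Language-A: "leg" denotes legs only
language_b = {"leg": {"legs", "tail"}}  # Language-B: "leg" also covers tails

def count_legs(language: dict) -> int:
    """Answer 'how many legs does a dog have?' as posed in `language`."""
    return sum(n for kind, n in dog.items() if kind in language["leg"])

print(count_legs(language_a))  # reading 1: we speak Language-A -> 4
print(count_legs(language_b))  # reading 2: question asked in Language-B -> 5
```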

comment by kateblu · 2012-09-03T00:57:35.692Z · LW(p) · GW(p)

Definition is the basis of language. Without a common understanding of terms, there can be no discussion. Anything that has not been falsified remains theory unless it is proven true - and without a common understanding of terms, how can we know that a statement has been proven false? Mathematics is the most rigorous language in the sense that there is nearly universal agreement on terms among professional mathematicians, but it is still a language. The answer to your question is unambiguous: if a dog has a collection of appendages that we will call "Legs", consisting of the four things we commonly call legs plus one tail, then the number of elements in "Legs" is equal to 5. We could say that the multiset L = {a, a, a, a, b}. Either way, it is simply a matter of definition - not really a 'goofy game'.
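
The counting point, set out in LaTeX with the elements relabeled so the notation is a well-formed set rather than a multiset:

```latex
% The five appendages relabeled distinctly, since a set cannot list the
% same element four times:
\[
  L = \{a_1, a_2, a_3, a_4, b\}, \qquad |L| = 5,
\]
% with a_1, ..., a_4 the ordinary legs and b the tail. If one wants the
% unlabeled tally {leg, leg, leg, leg, tail}, the right object is a
% multiset; its cardinality is likewise 5.
```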

Replies from: None, ArisKatsaris, fubarobfusco
comment by [deleted] · 2012-09-03T01:24:46.715Z · LW(p) · GW(p)

Definition is the basis of language.

Be wary when issuing grand proclamations about language, lest you wind up looking silly to the linguistically-knowledgeable.

comment by ArisKatsaris · 2012-09-04T10:18:54.923Z · LW(p) · GW(p)

Definition is the basis of language.

I think you have it the other way around. Definitions are based on language. Language is based on meaning. I knew the meaning of the word "red" before I had any definition for it, and I'd guess that so did you.

Replies from: kateblu
comment by kateblu · 2012-09-05T02:28:20.023Z · LW(p) · GW(p)

(with a smile) Perhaps we need to define definition. True that definitions are based on language. Also true, I believe, that if language is to communicate effectively, it will need commonly understood meanings for specific sounds/symbols. I may "see as red" what you "see as orange". My guess is that we both saw and could differentiate between colors before we knew the commonly accepted terms for them.

comment by fubarobfusco · 2012-09-03T08:30:57.130Z · LW(p) · GW(p)

I had assumed the audience had heard the joke before. The punch line: "Four. Calling a tail a leg doesn't make it one."

Which is the sort of thing that could be called "problematic on so many levels" — or just "goofy".

comment by Will_Newsome · 2012-09-02T03:58:14.607Z · LW(p) · GW(p)

Modus pwnens, modus trollens.

comment by Nominull · 2012-09-02T03:35:24.555Z · LW(p) · GW(p)

That sounds positive about religion to you?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-09-02T04:06:44.067Z · LW(p) · GW(p)

It vaguely associates math with religion in the sort of mind that reads LessWrong. In such minds that means it is both saying something positive about religion and something negative about math. That means it deserves twice the downvotes, obviously. Let's downvote it, Nominull! I'll downvote it if you do. Let's start a circle-jerking session, Nominull. I will if you do. Agreed?

Replies from: khafra
comment by khafra · 2012-09-03T00:46:15.149Z · LW(p) · GW(p)

Circle-jerking is more like a stag hunt; only implicit cooperation and an explicit lack of shame are required.
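
For readers who don't know the game khafra names: a stag hunt is a two-player coordination game with two equilibria. A minimal sketch, using the usual textbook payoffs (assumed, not from the comment):

```latex
% A textbook stag-hunt payoff matrix (row player's payoff listed first).
% Both (Stag, Stag) and (Hare, Hare) are Nash equilibria: hunting the
% stag pays best, but only if the other player also shows up -- hence
% "only implicit cooperation is required".
\[
\begin{array}{c|cc}
            & \text{Stag} & \text{Hare} \\ \hline
\text{Stag} & (3,3)       & (0,2)       \\
\text{Hare} & (2,0)       & (2,2)
\end{array}
\]
```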

comment by Jayson_Virissimo · 2012-09-01T08:30:04.379Z · LW(p) · GW(p)

Mystic, n. Anyone who disagrees with Ayn Rand or James Randi.

-L. A. Rollins, Lucifer's Lexicon: An Updated Abridgment

Replies from: Vivid, Will_Newsome
comment by Vivid · 2012-09-01T15:06:45.806Z · LW(p) · GW(p)

You should be less transparent about your social psychology experiments if you don't want people like me to make them transparent to everyone else.

comment by Will_Newsome · 2012-09-01T15:07:16.243Z · LW(p) · GW(p)

You should be less transparent about your social psychology experiments if you don't want people like me to make them transparent to everyone else. I like to disrupt things, you see. So does reality.