Rationality Quotes August 2012

post by Alejandro1 · 2012-08-03T15:33:53.905Z · LW · GW · Legacy · 432 comments

Here's the new thread for posting quotes, with the usual rules:

Comments sorted by top scores.

comment by Delta · 2012-08-03T10:41:45.719Z · LW(p) · GW(p)

“Ignorance killed the cat; curiosity was framed!” ― C.J. Cherryh

(not sure if that is who said it originally, but that's the first attribution I found)

comment by summerstay · 2012-08-04T14:41:24.310Z · LW(p) · GW(p)

Interviewer: How do you answer critics who suggest that your team is playing god here?

Craig Venter: Oh... we're not playing.

comment by Alejandro1 · 2012-08-02T20:52:13.718Z · LW(p) · GW(p)

British philosophy is more detailed and piecemeal than that of the Continent; when it allows itself some general principle, it sets to work to prove it inductively by examining its various applications. Thus Hume, after announcing that there is no idea without an antecedent impression, immediately proceeds to consider the following objection: suppose you are seeing two shades of colour which are similar but not identical, and suppose you have never seen a shade of colour intermediate between the two, can you nevertheless imagine such a shade? He does not decide the question, and considers that a decision adverse to his general principle would not be fatal to him, because his principle is not logical but empirical. When--to take a contrast--Leibniz wants to establish his monadology, he argues, roughly, as follows: Whatever is complex must be composed of simple parts; what is simple cannot be extended; therefore everything is composed of parts having no extension. But what is not extended is not matter. Therefore the ultimate constituents of things are not material, and, if not material, then mental. Consequently a table is really a colony of souls.

The difference of method, here, may be characterized as follows: In Locke or Hume, a comparatively modest conclusion is drawn from a broad survey of many facts, whereas in Leibniz a vast edifice of deduction is pyramided upon a pin-point of logical principle. In Leibniz, if the principle is completely true and the deductions are entirely valid, all is well; but the structure is unstable, and the slightest flaw anywhere brings it down in ruins. In Locke or Hume, on the contrary, the base of the pyramid is on the solid ground of observed fact, and the pyramid tapers upward, not downward; consequently the equilibrium is stable, and a flaw here or there can be rectified without total disaster.

--Bertrand Russell, A History of Western Philosophy

Replies from: Laoch, hankx7787
comment by Laoch · 2012-08-04T13:56:23.043Z · LW(p) · GW(p)

I often find that I'm not well read enough, or perhaps not smart enough, to decipher the intricate language of these eminent philosophers. I'd like to know: is Russell talking about something akin to scientific empiricism? Can someone enlighten me? From my shallow understanding, though, it seems like what he is saying is almost common sense when it comes to building knowledge or beliefs about a problem domain.

Replies from: Alejandro1
comment by Alejandro1 · 2012-08-04T14:13:16.856Z · LW(p) · GW(p)

The idea that one should philosophize keeping close contact with empirical facts, instead of basing a long chain of arguments on abstract "logical" principles like Leibniz's, may be almost common sense now, but it wasn't in the early modern period Russell was talking about. And when Russell wrote this (1940s) he was old enough to remember that these kinds of arguments were still prevalent in his youth (1880s-1890s) among absolute idealists like Bradley, as he describes in "Our Knowledge of the External World" (follow the link and do a Ctrl-F search for Bradley). So it did not seem to him a way of thinking so ancient and outdated as to be not worth arguing against.

Replies from: Laoch
comment by Laoch · 2012-08-04T14:41:31.619Z · LW(p) · GW(p)

Ah very good, in that context it makes perfect sense.

comment by hankx7787 · 2012-08-04T06:58:15.377Z · LW(p) · GW(p)

Russell ~~gives too much credit to radical empiricism~~ fails to warn against the dangers of going too far in the direction of radical empiricism, which is really just as bad as radical rationalism.

Philosophers came to be divided into two camps: those who claimed that man obtains his knowledge of the world by deducing it exclusively from concepts, which come from inside his head and are not derived from the perception of physical facts (the Rationalists)—and those who claimed that man obtains his knowledge from experience, which was held to mean: by direct perception of immediate facts, with no recourse to concepts (the Empiricists). To put it more simply: those who joined the mystics by abandoning reality—and those who clung to reality, by abandoning their mind.

FTNI, by Ayn Rand

Replies from: Alejandro1
comment by Alejandro1 · 2012-08-04T13:55:42.955Z · LW(p) · GW(p)

I wasn't trying to endorse the whole empiricist philosophy, and neither was Russell, at least in this quote. The rationality lesson it offers is not "radical empiricism good, radical rationalism bad" but more like "a wide base of principles with connections to experience good, a small base of abstract logical principles bad".

Replies from: hankx7787, Laoch
comment by hankx7787 · 2012-08-04T17:06:04.600Z · LW(p) · GW(p)

Er, I agree my comment was poorly phrased. Instead of accusing him of giving positive credit to radical empiricism, I probably should have said that while he's making a good point warning against the dangers of radical rationalism, he was failing to warn against the dangers of going too far in the direction of empiricism.

That's why I prefer the quote I followed up with; it is more careful to reject both of these approaches.

comment by Laoch · 2012-08-07T12:18:11.701Z · LW(p) · GW(p)

Recognising the weaknesses inherent in human logical deductions?

comment by J_Taylor · 2012-08-03T02:09:49.737Z · LW(p) · GW(p)

If you argue with a madman, it is extremely probable that you will get the worst of it; for in many ways his mind moves all the quicker for not being delayed by the things that go with good judgment. He is not hampered by a sense of humour or by charity, or by the dumb certainties of experience.

-- G. K. Chesterton, Orthodoxy

comment by Peter Wildeford (peter_hurford) · 2012-08-03T00:17:53.552Z · LW(p) · GW(p)

All of the books in the world contain no more information than is broadcast as video in a single large American city in a single year. Not all bits have equal value.

Carl Sagan

Replies from: Nisan, Luke_A_Somers, thomblake
comment by Nisan · 2012-08-03T07:31:40.556Z · LW(p) · GW(p)

Of course, one can argue that some kinds of knowledge -- like the kinds you and I know? -- are vastly more important than others, but such a claim is usually more snobbery than fact.

— Nick Szabo, quoted elsewhere in this post. Fight!

Replies from: faul_sname
comment by faul_sname · 2012-08-03T17:32:03.461Z · LW(p) · GW(p)

Knowledge and information are different things. An audiobook takes up more hard disk space than an e-book, but they both convey the same knowledge.

Replies from: Never_Seen_Belgrade
comment by Never_Seen_Belgrade · 2012-08-19T16:03:18.076Z · LW(p) · GW(p)

"Comparing information and knowledge is like asking whether the fatness of a pig is more or less green than the designated hitter rule." -- David Guaspari

Replies from: faul_sname
comment by faul_sname · 2012-08-20T01:17:12.905Z · LW(p) · GW(p)

I now have coffee on my monitor.

comment by Luke_A_Somers · 2012-08-03T13:00:07.158Z · LW(p) · GW(p)

This is one of the obvious facts that made me recoil in horror while reading Neuromancer. Their currency is BITS? Bits of what?

Replies from: Pfft
comment by Pfft · 2012-08-03T15:35:05.151Z · LW(p) · GW(p)

Are you sure you are thinking of the right novel? Searching this for the word "bit" did not find anything.

Replies from: thomblake, Luke_A_Somers
comment by thomblake · 2012-08-03T16:05:35.594Z · LW(p) · GW(p)

He may have been thinking of My Little Pony: Friendship is Magic.

Replies from: thomblake
comment by thomblake · 2012-08-03T18:23:30.587Z · LW(p) · GW(p)

Was the parent upvoted because people thought it was funny, or because they thought I had provided the correct answer, or because I mentioned ponies, or some other reason?

Replies from: Armok_GoB, J_Taylor
comment by Armok_GoB · 2012-08-03T20:21:15.535Z · LW(p) · GW(p)

probably because you mentioned ponies.

Replies from: ChrisPine
comment by ChrisPine · 2012-08-05T19:06:46.737Z · LW(p) · GW(p)

Which got even more upvotes... [sigh]

Please don't become reddit!

comment by J_Taylor · 2012-08-03T22:47:33.256Z · LW(p) · GW(p)

Yes.

comment by Luke_A_Somers · 2012-08-06T10:24:32.397Z · LW(p) · GW(p)

Apparently so! Then, which book was it?? Shoot.

comment by thomblake · 2012-08-03T16:07:04.422Z · LW(p) · GW(p)

I think this is just a misuse of the word "information". If the bits aren't of equal value, clearly they do not have the same amount of information.

Replies from: Omegaile
comment by Omegaile · 2012-08-03T17:01:40.994Z · LW(p) · GW(p)

I think "value" was used to mean importance.

Replies from: Pentashagon
comment by Pentashagon · 2012-08-03T22:00:06.432Z · LW(p) · GW(p)

Clearly some bits have value 0, while others have value 1.

comment by NancyLebovitz · 2012-08-08T16:43:07.375Z · LW(p) · GW(p)

But I came to realize that I was not a wizard, that "will-power" was not mana, and I was not so much a ghost in the machine, as a machine in the machine.

Ta-nehisi Coates

comment by roland · 2012-08-03T08:56:07.310Z · LW(p) · GW(p)

Yes -- and to me, that's a perfect illustration of why experiments are relevant in the first place! More often than not, the only reason we need experiments is that we're not smart enough. After the experiment has been done, if we've learned anything worth knowing at all, then hopefully we've learned why the experiment wasn't necessary to begin with -- why it wouldn't have made sense for the world to be any other way. But we're too dumb to figure it out ourselves! --Scott Aaronson

Replies from: faul_sname
comment by faul_sname · 2012-08-03T17:39:22.523Z · LW(p) · GW(p)

Or at least confirmation bias makes it seem that way.

Replies from: roland
comment by roland · 2012-08-03T20:49:06.970Z · LW(p) · GW(p)

Also hindsight bias. But I still think the quote has a perfectly valid point.

Replies from: faul_sname
comment by faul_sname · 2012-08-04T19:54:51.283Z · LW(p) · GW(p)

Agreed.

comment by Incorrect · 2012-08-02T23:13:29.208Z · LW(p) · GW(p)

It is absurd to divide people into good and bad. People are either charming or tedious.

-- Oscar Wilde

Replies from: Eliezer_Yudkowsky, MixedNuts, VKS, Kyre, Eugine_Nier, army1987, Nisan
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-03T06:00:43.411Z · LW(p) · GW(p)

Thank you, Professor Quirrell.

Replies from: Clippy
comment by Clippy · 2012-08-03T20:37:55.671Z · LW(p) · GW(p)

That quote is attributed to Oscar Wilde, not Professor Quirrell.

Or is Oscar Wilde the same being as Professor Quirrell?

Replies from: Viliam_Bur, shminux
comment by Viliam_Bur · 2012-08-04T13:50:46.088Z · LW(p) · GW(p)

Oscar Wilde vary most == I was scary Voldemort

It does not make sense, but it still is some evidence pointing at Oscar Wilde.

Replies from: Multiheaded, tgb
comment by Multiheaded · 2012-08-06T10:55:33.221Z · LW(p) · GW(p)

By such reasoning, Eliezer's own work shows very clear signs of being sorcerous and/or divinely preordained.

...Perhaps he shouldn't have gone there if he still wants to pretend that he's not in a covenant with scary unfathomable mathematical constructs, eh?

comment by tgb · 2012-08-04T14:14:23.098Z · LW(p) · GW(p)

Why did I find this so amusing?

comment by Shmi (shminux) · 2012-08-08T04:48:56.386Z · LW(p) · GW(p)

Presumably that's the first thing dark lords (and their real-life equivalents) convince themselves of, that there is no inherent good and evil. Once that part is over with, anything you do can be classified as good.

Replies from: Nornagest, loserthree, wedrifid
comment by Nornagest · 2012-08-08T06:25:32.235Z · LW(p) · GW(p)

Can't speak to any fictional dark lords, but the real-life equivalent seems more prone to deciding that there is an evil, which is true evil, and which is manifest upon the world in the person of those guys over there.

At least, that's what the rhetoric pretty consistently says. Either a given dark-lordish individual is a very good liar or actually believes it, and knowing what we do about ideology and the prevalence of sociopathy I'm inclined to default to the latter.

(I wouldn't say that Oscar Wilde and others with his interaction style particularly resemble dark lords, though.)

comment by loserthree · 2012-08-08T09:36:13.121Z · LW(p) · GW(p)

Presumably that's the first thing dark lords (and their real-life equivalents) convince themselves of, that there is no inherent good and evil.

The hell is the real-life equivalent of a dark lord? Can that even be addressed without getting into discouraged topics?

Also, "convince" implies not only intent but that the individual started with a different belief, maybe even that it is universal to start with a belief in good as evil. That sounds like a couple of unwarranted assumptions.

On a personal note, I once expressed the belief that there was no good or evil. I did so privately because I well understood there are undesirable consequences of sharing that belief. Before that time I had spent much thought over much of my young life trying to make sense of the concepts, to define them in ways that were consistent and useful, and was constantly frustrated.

I did not convince myself that there is no inherent good and evil so much as I gave up on trying to convince myself to believe otherwise. I expect a fictional 'dark lord' or real-life 'successful and wildly powerful individual of objectionable character' could as easily experience the same surrender among a larger number of alternative ways to leave good and evil behind.

(On a further and more indulgently personal note, I've since become disinterested in any requirement for good or evil to be 'inherent:' good and evil do not need to be applied in a perfectly consistent fashion in order to be useful. And it happens that I am evil and likewise disinterested in being good for goodness' sake.)

Once that part is over with, anything you do can be classified as good.

I may misunderstand this due to one or more philosophical shortcomings, but why bother classifying anything as 'good' if you've left 'good' behind?

Replies from: Desrtopa
comment by Desrtopa · 2012-08-17T19:00:26.140Z · LW(p) · GW(p)

The hell is the real-life equivalent of a dark lord? Can that even be addressed without getting into discouraged topics?

I'd be inclined to think along the lines of Pol Pot, Kim Il Sung and Kim Jong Il, Mao Zedong, Joseph Stalin, etc.

I think Nornagest's comment provides a more accurate characterization.

Replies from: loserthree
comment by loserthree · 2012-08-18T15:34:46.998Z · LW(p) · GW(p)

I think Nornagest's comment provides a more accurate characterization.

Yes. Newbs deny the relevance of good and evil; dark lords recognize extraordinarily useful tools when they see them.

I'd be inclined to think along the lines of Pol Pot, Kim Il Sung and Kim Jong Il, Mao Zedong, Joseph Stalin, etc.

I think your list of dark lords is padded. I'm pretty sure there's at least one well-intentioned idealist in there and Kim Jong Il probably wasn't much to speak of in the 'lord' department.

Replies from: Desrtopa
comment by Desrtopa · 2012-08-18T19:14:14.297Z · LW(p) · GW(p)

I suspect they all had good intentions on some level, although they probably thought they were justified in getting personal perks for their great work.

I'd say that being the absolute ruler of a country, subject to practically fanatical hero worship, is enough to qualify one as a "lord" even if it's a pretty lousy country and you do a crap job of running it. It's not as if any of them were particularly competent.

As for "padding," there are plenty of other examples I could have used, but I didn't expect as many readers to recognize, say, Teodoro Obiang Nguema Mbasogo.

Replies from: army1987, loserthree
comment by A1987dM (army1987) · 2012-08-20T09:13:47.264Z · LW(p) · GW(p)

I didn't expect as many readers to recognize, say, Teodoro Obiang Nguema Mbasogo (or to be willing to Google him).

FTFY. :-)

comment by loserthree · 2012-08-19T01:06:05.377Z · LW(p) · GW(p)

I suspect they all had good intentions on some level, although they probably thought they were justified in getting personal perks for their great work.

I'd say that being the absolute ruler of a country, subject to practically fanatical hero worship, is enough to qualify one as a "lord" even if it's a pretty lousy country and you do a crap job of running it. It's not as if any of them were particularly competent.

If you want to claim that intention and ability are meaningless, please come right out and say so. If you please, also describe what is left to a "dark lord" if evil intent and the ability to achieve it are -- Waitaminute.

We're skirting an argument of definition here, so I'll just skip the quibbling and jump straight to attacks on your character, if you don't mind:

You are not even trying to contribute, here. You're just swinging at anything that gets close, assured that the contrarian audience in your imagination will admire the wide, wild arcs your bat carves out of empty space.

Stalin and Mao incompetent? Do you believe that clawing one's way to the top of an organization of that size and overseeing its operation and -- yes, after a fashion -- prosperity is something that any chump within one standard deviation of the mean could stumble into like a Lotto winner?

Cold, quiet heavens, no. It takes a special breed with special lessons just to pull that off in a safe, civil environment. Doing so in a place where promotions are obtained with obituaries filters for even more specialized aptitudes. Average people, Lotto winners, incompetents don't even last long on their own.

As for "padding," there are plenty of other examples I could have used, but I didn't expect as many readers to recognize, say, Teodoro Obiang Nguema Mbasogo.

Yes, I accused you of namedropping without understanding. I gave you a bit of wiggle room so you could weave a more flattering narrative out of your actions. I threw you a rope but you preferred your shovel.

TL;DR: I see you trollin'.

Replies from: Desrtopa
comment by Desrtopa · 2012-08-19T20:48:16.985Z · LW(p) · GW(p)

If you want to claim that intention and ability are meaningless, please come right out and say so. If you please, also describe what is left to a "dark lord" if evil intent and the ability to achieve it are -- Waitaminute.

We're skirting an argument of definition here, so I'll just skip the quibbling and jump straight to attacks on your character, if you don't mind:

As a matter of fact I do mind, and I'm more than a little insulted. You could just ask what I mean by a Dark Lord if I don't expect it to entail deliberate evil or competence.

If someone is a totalitarian ruler who knowingly and willingly causes the deaths of a large proportion of their citizens and imposes policies that contribute to low levels of civil liberties and standards of living, I think it's fair to describe them as the real life equivalent of "dark lords," although I wouldn't describe them as such in ordinary conversation, and if you look at the context of the conversation there's nothing to imply that I would.

I think this category carves out a significant body of individuals with related characteristics. I also think that, given what we know about human nature, it's unlikely that they see themselves as people doing bad things.

Stalin and Mao incompetent? Do you believe that clawing one's way to the top of an organization of that size and overseeing its operation and -- yes, after a fashion -- prosperity is something that any chump within one standard deviation of the mean could stumble into like a Lotto winner?

Cold, quiet heavens, no. It takes a special breed with special lessons just to pull that off in a safe, civil environment. Doing so in a place where promotions are obtained with obituaries filters for even more specialized aptitudes. Average people, Lotto winners, incompetents don't even last long on their own.

I was referring to competence at running countries, not competence at climbing social ladders. Clearly they possessed a considerable measure of the latter, but then, all of them instituted policies which could have been predicted as disastrous by people with even an ordinary measure of good sense.

It might be narratively appealing to imagine that our greatest real life villains are like Professor Quirrell, amoral and brilliant, but their actual track records suggest that while they may be good at social maneuvering, they aren't possessed of particularly good judgment or skills of self analysis.

If you want to argue my points, and get into the actual policies and psychology of these people, feel free to. But if you're going to skip straight to accusations of poor conduct and character without even bothering to ask me to clarify my point, I'm going to accuse you of being excessively hostile and having poor priors for good faith in this community.

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-20T09:19:34.608Z · LW(p) · GW(p)

all of them instituted policies which could have been predicted as disastrous by people with even an ordinary measure of good sense.

You might be overestimating how much an ‘ordinary’ measure of good sense is. (Half the human population have IQs below 100.)

comment by wedrifid · 2012-08-08T05:59:50.362Z · LW(p) · GW(p)

Presumably that's the first thing dark wizards (and their real-life equivalents) convince themselves of, that there is no inherent good and evil.

That seems true... Interesting.

comment by MixedNuts · 2012-08-10T08:27:23.542Z · LW(p) · GW(p)

That's excellent advice for writing fiction. Audiences root for charming characters much more than for good ones. Especially useful when your world only contains villains. This is harder in real life, since your opponents can ignore your witty one-liners and emphasize your mass murders.

(This comment brought to you by House Lannister.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-08-10T23:07:44.752Z · LW(p) · GW(p)

This is harder in real life, since your opponents can ignore your witty one-liners and emphasize your mass murders.

The scary thing is how often it does work in real life. (Except that in real life charm is more than just witty one-liners.)

comment by VKS · 2012-08-06T02:57:48.853Z · LW(p) · GW(p)

I don't know that you can really classify people as X or ¬X. I mean, have you not seen individuals be X in certain situations and ¬X in other situations?

&c.

comment by Kyre · 2012-08-04T10:10:09.015Z · LW(p) · GW(p)

On the face of it I would absolutely disagree with Wilde on that: to live a moral life one absolutely needs to distinguish between good and bad. Charm (in bad people) and tedium (in good people) get in the way of this.

On the other hand, was Wilde really just blowing a big raspberry at the moralisers of his day? Sort of saying "I care more about charm and tedium than what you call morality". I don't know enough about his context...

Replies from: tgb, Incorrect
comment by tgb · 2012-08-04T14:21:12.166Z · LW(p) · GW(p)

Since I can't be bothered to do real research, I'll just point out that this Yahoo answer says that the quote is spoken by Lord Darlington. Oscar Wilde was a humorist and an entertainer. He makes amusing characters. His characters say amusing things.

Do not read too much into this quote and, without further evidence, I would not attribute this philosophy to Oscar Wilde himself.

(I haven't read Lady Windermere's Fan, where this is from, but this sounds very much like something Lord Henry from The Picture of Dorian Gray would say. And Lord Henry is one of the main causes of Dorian's fall from grace in that book; he's not exactly a very positive character, but certainly an entertainingly cynical one!)

comment by Incorrect · 2012-08-04T14:11:18.957Z · LW(p) · GW(p)

On the face of it I would absolutely disagree with Wilde on that: to live a moral life one absolutely needs to distinguish between good and bad.

But is it necessary to divide people into good and bad? What if you were only to apply goodness and badness to consequences and to your own actions?

Replies from: dspeyer
comment by dspeyer · 2012-08-05T23:01:55.002Z · LW(p) · GW(p)

If your own action is to empower another person, understanding that person's goodness or badness is necessary to understanding the action's goodness or badness.

Replies from: Incorrect
comment by Incorrect · 2012-08-06T02:33:16.405Z · LW(p) · GW(p)

But that can be entirely reduced to the goodness or badness of consequences.

comment by A1987dM (army1987) · 2012-08-02T23:37:37.374Z · LW(p) · GW(p)

I like it, but what's it got to do with rationality?

Replies from: None
comment by [deleted] · 2012-08-03T07:05:44.425Z · LW(p) · GW(p)

To me at least, it captures the notion of how the perceived truth/falsity of a belief rests solely on our categorization of it as 'tribal' or 'non-tribal': weird or normal. Normal beliefs are true, weird beliefs are false.

We believe our friends more readily than experts.

comment by Nisan · 2012-08-03T16:46:32.818Z · LW(p) · GW(p)

It is absurd to divide people into charming or tedious. People either have familiar worldviews or unfamiliar worldviews.

Replies from: DaFranker
comment by DaFranker · 2012-08-03T16:51:00.205Z · LW(p) · GW(p)

It is absurd to divide people into familiar worldviews or unfamiliar worldviews. People either have closer environmental causality or farther environmental causality.

(anyone care to formalize the recursive tower?)

Replies from: faul_sname
comment by faul_sname · 2012-08-03T17:48:34.859Z · LW(p) · GW(p)

It's absurd to divide people into two categories and expect those two categories to be meaningful in more than a few contexts.

Replies from: Stabilizer, army1987, Clippy
comment by Stabilizer · 2012-08-03T21:05:32.153Z · LW(p) · GW(p)

It is absurd to divide people. They tend to die if you do that.

Replies from: Kindly
comment by Kindly · 2012-08-04T00:19:55.074Z · LW(p) · GW(p)

It's absurd to divide. You tend to die if you do that.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-08-04T16:45:07.359Z · LW(p) · GW(p)

It's absurd: You tend to die.

Replies from: faul_sname
comment by faul_sname · 2012-08-04T19:55:29.374Z · LW(p) · GW(p)

It's absurd to die.

Replies from: albeola
comment by albeola · 2012-08-04T20:43:51.031Z · LW(p) · GW(p)

It's bs to die.

Replies from: Epiphany, Decius
comment by Epiphany · 2012-08-18T05:11:50.178Z · LW(p) · GW(p)

Be.

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-20T09:24:39.296Z · LW(p) · GW(p)

“To do is to be” -- Nietzsche

“To be is to do” -- Kant

“Do be do be do” -- Sinatra

comment by Decius · 2012-08-04T21:50:42.178Z · LW(p) · GW(p)

Nobody alive has died yet.

Replies from: dspeyer, mfb
comment by dspeyer · 2012-08-05T22:59:21.035Z · LW(p) · GW(p)

It will be quick. It might even be painless. I would not know. I have never died.

-- Voldemort

comment by mfb · 2012-08-04T22:11:46.534Z · LW(p) · GW(p)

At least not in worlds where he is alive.

Replies from: Decius
comment by Decius · 2012-08-05T00:48:16.178Z · LW(p) · GW(p)

Is it worse to enter a state of superimposed death and life than to die?

Replies from: wedrifid, mfb
comment by wedrifid · 2012-08-06T18:11:28.484Z · LW(p) · GW(p)

Is it worse to enter a state of superimposed death and life than to die?

I hope not. That's the state we are all in now and what we are entering constantly. Unless there are rounding errors in the universe we haven't detected yet.

comment by mfb · 2012-08-06T11:25:04.643Z · LW(p) · GW(p)

I think life requires a system large and complex enough to produce decoherence between "alive" and "dead" in timescales shorter than required to define "alive" at all.

Replies from: Decius
comment by Decius · 2012-08-06T18:08:37.680Z · LW(p) · GW(p)

Sorry, that was a Schrodinger's Cat joke.

comment by A1987dM (army1987) · 2012-08-04T00:43:19.351Z · LW(p) · GW(p)

“Males” and “females”. (OK, there are edge cases and stuff, but this doesn't mean the categories aren't meaningful, does it?)

comment by Clippy · 2012-08-03T20:38:37.199Z · LW(p) · GW(p)

What about good vs bad humans?

Replies from: faul_sname
comment by faul_sname · 2012-08-04T19:52:48.941Z · LW(p) · GW(p)

Or humans who create paperclips versus those who don't?

Replies from: Clippy
comment by Clippy · 2012-08-05T00:28:55.607Z · LW(p) · GW(p)

I thought I just said that.

Replies from: MatthewBaker
comment by MatthewBaker · 2012-08-11T00:31:44.352Z · LW(p) · GW(p)

Can't there be good humans who don't create paperclips and just destroy antipaperclips and staples and such?

Replies from: Clippy
comment by Clippy · 2012-08-14T00:20:47.111Z · LW(p) · GW(p)

Destroying antipaperclips is creating paperclips.

I didn't know humans had the concept though.

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-20T09:26:02.046Z · LW(p) · GW(p)

What is an antipaperclip?

Replies from: Clippy
comment by Clippy · 2012-08-21T23:50:16.386Z · LW(p) · GW(p)

Anything not a paperclip, or in opposition to further paperclipping. You might ask, "Why not just say 'non-paperclips'?" but anti-paperclips include paperclips deliberately designed to unbend, or which work at anti-paperclip purposes (say, a paperclip being used to short-circuit the electrical systems in a paperclip factory).

Replies from: MatthewBaker
comment by MatthewBaker · 2012-09-06T23:38:54.233Z · LW(p) · GW(p)

I brought a box of paperclips into my office today to use as bowl picks for my new bong, if I rebend them after I use them can I avoid becoming an anti-paperclip?

comment by frostgiant · 2012-08-08T02:13:24.883Z · LW(p) · GW(p)

The problem with Internet quotes and statistics is that often times, they’re wrongfully believed to be real.

— Abraham Lincoln

comment by GLaDOS · 2012-08-06T10:04:20.519Z · LW(p) · GW(p)

The findings reveal that 20.7% of the studied articles in behavioral economics propose paternalist policy action and that 95.5% of these do not contain any analysis of the cognitive ability of policymakers.

-- Niclas Berggren, source and HT to Tyler Cowen

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-08-08T03:49:39.896Z · LW(p) · GW(p)

Sounds like a job for...Will_Newsome!

EDIT: Why the downvotes? This seems like a fairly obvious case of researchers going insufficiently meta.

Replies from: MatthewBaker, Jayson_Virissimo
comment by MatthewBaker · 2012-08-10T19:50:13.068Z · LW(p) · GW(p)

META MAN! willnewsomecuresmetaproblemsasfastashecan META MAN!

comment by Jayson_Virissimo · 2012-08-08T05:36:02.472Z · LW(p) · GW(p)

Why the downvotes? This seems like an obvious case of researchers going insufficiently meta.

comment by bungula · 2012-08-03T07:28:59.426Z · LW(p) · GW(p)

“I drive an Infiniti. That’s really evil. There are people who just starve to death – that’s all they ever did. There’s people who are like, born and they go ‘Uh, I’m hungry’ then they just die, and that’s all they ever got to do. Meanwhile I’m driving in my car having a great time, and I sleep like a baby.

It’s totally my fault, ’cause I could trade my Infiniti for a [less luxurious] car… and I’d get back like $20,000. And I could save hundreds of people from dying of starvation with that money. And everyday I don’t do it. Everyday I make them die with my car.”

Louis C.K.

Replies from: Jayson_Virissimo, DanielLC
comment by Jayson_Virissimo · 2012-08-03T13:23:26.597Z · LW(p) · GW(p)

Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connexion with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befal himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own. To prevent, therefore, this paltry misfortune to himself, would a man of humanity be willing to sacrifice the lives of a hundred millions of his brethren, provided he had never seen them? Human nature startles with horror at the thought, and the world, in its greatest depravity and corruption, never produced such a villain as could be capable of entertaining it. But what makes this difference? When our passive feelings are almost always so sordid and so selfish, how comes it that our active principles should often be so generous and so noble? When we are always so much more deeply affected by whatever concerns ourselves, than by whatever concerns other men; what is it which prompts the generous, upon all occasions, and the mean upon many, to sacrifice their own interests to the greater interests of others? It is not the soft power of humanity, it is not that feeble spark of benevolence which Nature has lighted up in the human heart, that is thus capable of counteracting the strongest impulses of self-love. It is a stronger power, a more forcible motive, which exerts itself upon such occasions. It is reason, principle, conscience, the inhabitant of the breast, the man within, the great judge and arbiter of our conduct. It is he who, whenever we are about to act so as to affect the happiness of others, calls to us, with a voice capable of astonishing the most presumptuous of our passions, that we are but one of the multitude, in no respect better than any other in it; and that when we prefer ourselves so shamefully and so blindly to others, we become the proper objects of resentment, abhorrence, and execration. It is from him only that we learn the real littleness of ourselves, and of whatever relates to ourselves, and the natural misrepresentations of self-love can be corrected only by the eye of this impartial spectator. 
It is he who shows us the propriety of generosity and the deformity of injustice; the propriety of resigning the greatest interests of our own, for the yet greater interests of others, and the deformity of doing the smallest injury to another, in order to obtain the greatest benefit to ourselves. It is not the love of our neighbour, it is not the love of mankind, which upon many occasions prompts us to the practice of those divine virtues. It is a stronger love, a more powerful affection, which generally takes place upon such occasions; the love of what is honourable and noble, of the grandeur, and dignity, and superiority of our own characters.

-Adam Smith, The Theory of Moral Sentiments

Replies from: Richard_Kennaway, None, buybuydandavis, Eliezer_Yudkowsky
comment by Richard_Kennaway · 2012-08-03T13:45:23.781Z · LW(p) · GW(p)

And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident [as the destruction of China] had happened.

Now that we are informed of disasters worldwide as soon as they happen, and can give at least money with a few mouse clicks, we can put this prediction to the test. What in fact we see is a very great public response to such disasters as the Japanese earthquake and tsunami.

Replies from: Petra
comment by Petra · 2012-08-03T14:01:45.788Z · LW(p) · GW(p)

What in fact we see is a very great public response to such disasters as the Japanese earthquake and tsunami.

True, but first of all, the situation posited is one in which China is "swallowed up". If a disaster occurred, and there was no clear way for the generous public to actually help, do you think you would see the same response? I'm sure you would still have the same loud proclamations of tragedy and sympathy, but would there be action to match it? I suppose it's possible that they would try to support the remaining Chinese who presumably survived by not being in China, but it seems unlikely to me that the same concerted aid efforts would exist.

Secondly, it seems to me that Smith is talking more about genuine emotional distress and lasting life changes than simply any kind of reaction. Yes, people donate money for disaster relief, but do they lose sleep over it? (Yes, there are some people who drop everything and relocate to physically help, but they are the exception.) Is a $5 donation to the Red Cross more indicative of genuine distress and significant change, or the kind of public sympathy that allows the person to return to their lives as soon as they've sent the text?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-08-03T14:24:56.323Z · LW(p) · GW(p)

If a disaster occurred, and there was no clear way for the generous public to actually help, do you think you would see the same response?

If help is not possible, obviously there will be no help. But in real disasters, there always is a way to help, and help is always forthcoming.

Replies from: J_Taylor
comment by J_Taylor · 2012-08-03T22:37:42.117Z · LW(p) · GW(p)

Even if help is not possible, there will be "help."

comment by [deleted] · 2012-08-03T16:35:02.754Z · LW(p) · GW(p)

.

Replies from: DaFranker, None
comment by DaFranker · 2012-08-03T16:48:13.492Z · LW(p) · GW(p)

Paragraphs cost lines, and when each line of paper on average costs five shillings, you use as many of them as you can get away with.

Replies from: None
comment by [deleted] · 2012-08-03T17:05:55.004Z · LW(p) · GW(p)

.

Replies from: DaFranker
comment by DaFranker · 2012-08-03T17:13:36.973Z · LW(p) · GW(p)

I support this motion, and further propose that formatting and other aesthetic considerations also be inferred from known data on the authors to fully reflect the manner in which they would have presented their work had they been aware of and capable of using all our current nice-book-writing technology.

...which sounds a lot like Eliezer's Friendly AI "first and final command". (I would link to the exact quote, but I've lost the bookmark. Will edit it in once found.)

Replies from: None
comment by [deleted] · 2012-08-03T17:17:08.808Z · LW(p) · GW(p)

.

Replies from: maia, James_K
comment by maia · 2012-08-03T19:38:10.963Z · LW(p) · GW(p)

Some writers were paid by the word and/or line.

comment by James_K · 2012-08-03T21:52:36.566Z · LW(p) · GW(p)

I think much of it is that brevity simply wasn't seen as a virtue back then. There were far fewer written works, so you had more time to go through each one.

Replies from: gwern, Eliezer_Yudkowsky
comment by gwern · 2012-08-04T01:39:58.038Z · LW(p) · GW(p)

I think it's the vagary of various times. All periods had pretty expensive media and some were, as one would expect, terse as hell. (Reading a book on Nagarjuna, I'm reminded that reading his Heart of the Middle Way was like trying to read a math book with nothing but theorems. And not even the proofs. 'Wait, could you go back and explain that? Or anything?') Latin prose could be very concise. Biblical literature likewise. I'm told much Chinese literature is similar (especially the classics), and I'd believe it from the translations I've read.

Some periods praised clarity and simplicity of prose. Others didn't, and gave us things like Thomas Browne's Urn Burial.

(We also need to remember that we read difficulty as complexity. Shakespeare is pretty easy to read... if you have a vocabulary so huge as to overcome the linguistic drift of 4 centuries and are used to his syntax. His contemporaries would not have had such problems.)

Replies from: None
comment by [deleted] · 2012-08-04T02:26:53.264Z · LW(p) · GW(p)

I'm told much Chinese literature is similar (especially the classics), and I'd believe it from the translations I've read.

For context, the first paragraph-ish thing in Romance of the Three Kingdoms covers about two hundred years of history in about as many characters, in the meanwhile setting up the recurring theme of perpetual unification, division and subsequent reunification.

Replies from: gwern
comment by gwern · 2012-08-04T03:00:54.810Z · LW(p) · GW(p)

Sure, but popular novels like RofTK or Monkey or Dream of the Red Chamber were not really high-status stuff in the first place.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-30T08:44:49.424Z · LW(p) · GW(p)

I detect a contradiction between "brevity not seen as virtue" and "they couldn't afford paragraphs".

Replies from: James_K
comment by James_K · 2012-08-31T06:43:47.136Z · LW(p) · GW(p)

Yes, I don't think "couldn't afford paper" is a good explanation; books of this nature were for wealthy people anyway.

comment by [deleted] · 2012-08-30T14:14:56.898Z · LW(p) · GW(p)

Ancient Greek writing not only lacked paragraphs, but spaces. And punctuation. And everything was in capitals. IMAGINETRYINGTOREADSOMETHINGLIKETHATINADEADLANGUAGE.

comment by buybuydandavis · 2012-08-09T01:48:45.227Z · LW(p) · GW(p)

When our passive feelings are almost always so sordid and so selfish, how comes it that our active principles should often be so generous and so noble?

Why do some people so revile our passive feelings, and so venerate hypocrisy?

Replies from: wedrifid
comment by wedrifid · 2012-08-09T14:44:00.051Z · LW(p) · GW(p)

Why do some people so revile our passive feelings, and so venerate hypocrisy?

Because it helps coerce others into doing things that benefit us and reduces how much force is exercised upon us while trading off the minimal amount of altruistic action necessary. There wouldn't (usually) be much point having altruistic principles and publicly reviling them.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-08-09T19:03:00.667Z · LW(p) · GW(p)

That's quite a theory. It's like the old-fashioned elitist theory that hypocrisy is necessary to keep the hoi polloi in line, except apparently applied to everyone.

Or not? Do you think you are made more useful to yourself and others by reviling your feelings and being hypocritical about your values?

Replies from: wedrifid
comment by wedrifid · 2012-08-09T19:21:15.543Z · LW(p) · GW(p)

That's quite a theory.

The standard one. I was stating the obvious, not being controversial.

Do you think you are made more useful to yourself and others by reviling your feelings and being hypocritical about your values?

I never said I did so. (And where did this 'useful to others' thing come in? That's certainly not something I'd try to argue for. The primary point of the hypocrisy is to reduce the amount that you actually spend helping others, for a given level of professed ideals.)

Replies from: buybuydandavis
comment by buybuydandavis · 2012-08-09T19:47:51.735Z · LW(p) · GW(p)

The primary point of the hypocrisy is to reduce the amount that you actually spend helping others, for a given level of professed ideals.

Sorry, I wasn't getting what you were saying.

People are hypocritical to send the signal that they are more altruistic than they are? I suppose some do. Do you really think most people are consciously hypocritical on this score?

I've wondered as much about a lot of peculiar social behavior, particularly the profession of certain beliefs - are most people consciously lying, and I just don't get the joke? Are the various crazy ideas people seem to have, where they seem to fail on epistemic grounds, just me mistaking what they consider instrumentally rational lies for epistemic mistakes?

Replies from: Barry_Cotter
comment by Barry_Cotter · 2012-08-12T10:56:54.127Z · LW(p) · GW(p)

Wedrifid is not ignorant enough to think that most people are consciously hypocritical. Being consciously hypocritical is very difficult. It requires a lot of coordination, a good memory and decent to excellent acting skills. But as you may have heard, "Sincerity is the thing; once you can fake that you've got it made." Evolution baked this lesson into us. The beliefs we profess and the principles we act by overlap but they are not the same.

If you want to read up further on this, go to social and cognitive psychology. The primary insights for me were that people are not unitary agents (they're collections of modules that occasionally work at cross purposes), that signalling is really freaking important, and that, in line with far/near or construal theory, holding a belief and acting on it are not the same thing.

I can't recommend a single book to get the whole of this, or even most of it across, but The Mating Mind and The Red Queen's Race are both good and relevant. I can't remember which one repeats Lewontin's Fallacy. Don't dump it purely based on one brainfart.

Replies from: buybuydandavis, buybuydandavis
comment by buybuydandavis · 2012-08-13T01:43:13.227Z · LW(p) · GW(p)

Wedrifid is not ignorant enough to think that most people are consciously hypocritical.

Would that be ignorant? I'm not sure. Certainly, there are sharks. Like you, I'd tend to think that most people aren't sharks, but I consider the population of sharks an open question, and wouldn't consider someone necessarily ignorant if they thought there were more sharks than I did.

Dennett talks about the collection of modules as well. I consider it an open question as to how much one is aware of the different modules at the same time. I've had strange experiences where people seem to be acting according to one idea, but when a contradictory fact is pointed out, they also seemed quite aware of that as well. Doublethink is a real thing.

comment by buybuydandavis · 2012-08-13T07:17:24.078Z · LW(p) · GW(p)

And thanks for the reference to Lewontin's Fallacy - I didn't know there was a name for that. The Race FAQ at the site is very interesting.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-03T15:33:20.358Z · LW(p) · GW(p)

I was expecting the attribution to be to Mark Twain. I wonder if their style seems similar on account of being old, or if there's more to it.

Replies from: Never_Seen_Belgrade, NancyLebovitz
comment by Never_Seen_Belgrade · 2012-08-05T15:46:00.258Z · LW(p) · GW(p)

I think it means you're underread within that period, for what it's worth.

The voice in that quote differs from Twain's and sounds neither like a journalist, nor like a river-side-raised gentleman of the time, nor like a Nineteenth Century rural/cosmopolitan fusion written to gently mock both.

Replies from: Swimmy
comment by Swimmy · 2012-08-09T19:23:15.790Z · LW(p) · GW(p)

Though the voice isn't, the sentiment seems similar to something Twain would say. Though I'd expect a little more cynicism from him.

comment by NancyLebovitz · 2012-08-03T18:38:58.055Z · LW(p) · GW(p)

Tentatively: rhetoric was studied formally, and Twain and Smith might have been working from similar models.

comment by DanielLC · 2012-08-04T02:39:48.264Z · LW(p) · GW(p)

… and I’d get back like $20,000. And I could save hundreds of people from dying of starvation with that money.

According to GiveWell, you could save ten people with that much.

Replies from: grendelkhan, Eliezer_Yudkowsky, MTGandP, Nisan
comment by grendelkhan · 2012-08-29T18:20:50.849Z · LW(p) · GW(p)

The math here is scary. If you spitball the regulatory cost of life for a Westerner, it's around seven million dollars. To a certain extent, I'm pretty sure that that's high because the costs of over-regulating are less salient to regulators than the costs of under-regulating, but taken at face value, that means that, apparently, thirty-five hundred poor African kids are equivalent to one American.
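
A back-of-envelope version of that ratio, using only figures already floating around this subthread (the ~$7M regulatory figure above and the ~$2,000 per life implied by "$20,000 saves ten people"); the numbers are rough assumptions, not precise estimates:

```python
# Rough sketch of the ratio described above; figures are illustrative
# assumptions taken from the surrounding comments, not authoritative estimates.
us_regulatory_value_per_life = 7_000_000      # ~$7M regulatory value of a Western life
cost_per_life_saved_via_aid = 20_000 / 10     # parent thread: ~$20,000 saves ~ten lives

ratio = us_regulatory_value_per_life / cost_per_life_saved_via_aid
print(f"Implied ratio: about {ratio:,.0f} to 1")   # -> about 3,500 to 1
```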

Hilariously, the IPCC got flak from anti-globalization activists for positing a fifteen-to-one ratio in the value of life between developed and developing nations.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-30T08:42:22.514Z · LW(p) · GW(p)

To save ten lives via FAI, you have to accelerate FAI development by 6 seconds.
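
A sketch of the arithmetic presumably behind a number like that, assuming roughly 55 million deaths per year worldwide and that essentially all of them are at stake (both assumptions for illustration; neither figure is from the comment):

```python
# Hypothetical back-of-envelope behind the "6 seconds" figure.
# Assumes ~55 million deaths/year globally and that all of them would be
# averted -- both assumptions for illustration only.
deaths_per_year = 55_000_000
seconds_per_year = 365.25 * 24 * 3600
deaths_per_second = deaths_per_year / seconds_per_year    # ~1.7 deaths/second

lives_to_save = 10
print(f"{lives_to_save / deaths_per_second:.1f} seconds")  # ~5.7, i.e. about 6 seconds
```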

Replies from: None, TGM
comment by [deleted] · 2012-08-30T14:08:56.428Z · LW(p) · GW(p)

...then what are you doing here? Get back to work!

Replies from: Vaniver
comment by Vaniver · 2012-08-30T14:21:12.829Z · LW(p) · GW(p)

Advocacy and movement-building?

comment by TGM · 2012-08-30T08:48:36.697Z · LW(p) · GW(p)

Aren't you using different measures of what 'saving a life' is, anyway? The starving-child-save gives you about 60 years of extra life, whereas the FAI save gives something rather more.

comment by MTGandP · 2012-10-02T03:29:45.056Z · LW(p) · GW(p)

You can do a thousand times better (very conservatively) if you expand your domain of consideration beyond homo sapiens.

comment by Nisan · 2012-10-02T03:38:40.372Z · LW(p) · GW(p)

Even better!

Replies from: DanielLC
comment by DanielLC · 2012-10-02T04:07:08.617Z · LW(p) · GW(p)

Ten is better than hundreds?

Replies from: Nisan
comment by katydee · 2012-08-03T08:35:20.015Z · LW(p) · GW(p)

I have always thought that one man of tolerable abilities may work great changes, and accomplish great affairs among mankind, if he first forms a good plan, and, cutting off all amusements or other employments that would divert his attention, makes the execution of that same plan his sole study and business.

-- Benjamin Franklin

Replies from: Delta, BrianLloyd
comment by Delta · 2012-08-03T10:24:51.017Z · LW(p) · GW(p)

The sentiment is correct (diligence may be more important than brilliance) but I think "all amusements and other employments" might be too absolute an imperative for most people to even try to live by. Most people will break down if they try to work too hard for too long, and changes of activity can be very important in keeping people fresh.

Replies from: BillyOblivion, None, shokwave
comment by BillyOblivion · 2012-08-09T10:19:47.640Z · LW(p) · GW(p)

I think that both you and Mr. Franklin are correct.

To wreak great changes one must stay focused and work diligently on one's goal. One needn't eliminate all pleasures from life, but I think you'll find that very, very few people can have a serious hobby and a world changing vocation.

Most of us of "tolerable" abilities cannot maintain the kind of focus and purity of dedication required. That is why the world changes as little as it does. If everyone who was, say, to the right of center on the IQ curve could make great changes, then "great" would be redefined upwards (if most people could run a 10-second 100 meters, Mr. Bolt would only be a little special).

Further more...Oooohh...shiny....

comment by [deleted] · 2012-08-05T06:00:31.504Z · LW(p) · GW(p)

I've heard this a lot, but it sounds a bit too convenient to me. When external (or internal) circumstances have forced me to spend lots of time on one specific, not particularly entertaining task, I've found that I actually become more interested and enthusiastic about that thing. For example, when I had to play chess for like 5 hours a day for a week once, or when I went on holiday and came back to 5000 anki reviews, or when I was on a maths camp that started every day with a problem set that took over 4 hours.

Re "breaking down": if you mean they'll have a breakdown of will and be unable to continue working, that's an easy problem to solve - just hire someone to watch you and whip you whenever your productivity declines. And/Or chew nicotine gum when at your most productive. Or something. If you mean some other kind of breakdown, that does sound like something to be cautious of, but I think the correct response isn't to surrender eighty percent of your productivity, but to increase the amount of discomfort you can endure, maybe through some sort of hormesis training.

Replies from: DanielH, Viliam_Bur, jsteinhardt, army1987
comment by DanielH · 2012-08-07T01:48:40.809Z · LW(p) · GW(p)

Playing chess for 5 hours a day does not make chess your "sole study and business" unless you have some disorder forcing you to sleep for 19 hours a day. If you spent the rest of your waking time studying chess, playing practice games, and doing the minimal amount necessary to survive (eating, etc.), THEN chess is your "sole study and business"; otherwise, you spend less than 1/3 your waking life on it, which is less than people spend at a regular full time job (at least in the US).

comment by Viliam_Bur · 2012-08-09T14:36:53.837Z · LW(p) · GW(p)

just hire someone to watch you and whip you whenever your productivity declines

In my model this strategy decreases productivity for some tasks, especially those which require thinking. Fear of punishment brings a "fight or flight" reaction, and both of these options are harmful for thinking.

comment by jsteinhardt · 2012-08-25T09:16:21.218Z · LW(p) · GW(p)

My very tentative guess is that for most people, there is substantial room to increase diligence. However, at the very top of the spectrum, trying to work harder just causes each individual hour to be less efficient. Also note that diligence != hours worked; I am often more productive in a 7-hour work day than an 11-hour one if the 7-hour day was better planned.

However, I am still pretty uncertain about this. I am pretty near the top end of the spectrum for diligence and am trying to see if I can hack it a bit higher without burning out or losing efficiency.

comment by A1987dM (army1987) · 2012-08-25T23:36:08.190Z · LW(p) · GW(p)

Generalizing from one example much? Maybe there are some people who are most efficient when they do 10 different things an hour a day each, other people who are most efficient when they do the same thing 10 hours a day, and other people still who are most efficient in intermediate circumstances.

Replies from: None
comment by [deleted] · 2012-08-27T10:55:23.513Z · LW(p) · GW(p)

Agreed; most people, me included, would probably be more productive if they interleaved productive tasks than if they did productive tasks in big blocks of time. I was just saying that in my experience, when I'm forced to do some unpleasant task a lot, after a while it's not as unpleasant as I initially expected. I'm pretty cognitively atypical, so you're right that other people are likely not the same.

(This is of course a completely different claim than what the great-grandparent sorta implied and which I mostly argued against, which is that "Most people will break down if they try to work too hard for too long" means we shouldn't work very much, rather than trying to set things up so that we don't break down (through hormesis or precommitment or whatever). At least if we're optimizing for productivity rather than pleasantness.)

Here's a vaguely-related paper (I've only read the abstract):

Participants learned different keystroke patterns, each requiring that a key sequence be struck in a prescribed time. Trials of a given pattern were either blocked or interleaved randomly with trials on the other patterns and before each trial modeled timing information was presented that either matched or mismatched the movement to be executed next. In acquisition, blocked practice and matching models supported better performance than did random practice and mismatching models. In retention, however, random practice and mismatching models were associated with superior learning. Judgments of learning made during practice were more in line with acquisition than with retention performance, providing further evidence that a learner's current ease of access to a motor skill is a poor indicator of learning benefit.

comment by shokwave · 2012-08-04T19:46:50.045Z · LW(p) · GW(p)

It's possible that what Franklin meant by "amusements" didn't include leisure: in his time, when education was not as widespread, a gentleman might have described learning a second language as an "amusement".

comment by BrianLloyd · 2012-08-15T19:42:54.813Z · LW(p) · GW(p)

Except when the great change requires a leap of understanding. Regardless of how diligently she works, the person who is blind in a particular area will never make the necessary transcendental leap that creates new understanding.

I have experienced this, working in a room full of brilliant people for a period of months. It took a transcendental leap of understanding by someone outside the group to present the elegantly simple solution to the apparently intractable problem.

So, while many problems will fall to persistence and diligence, some problems require at least momentary transcendental brilliance ... or at least a favorable error. Hmm, this says something about the need for experimentation as well. Never underestimate the power of, "Huh, that's funny. It's not supposed to do that ..."

Brian

comment by Scottbert · 2012-08-08T02:34:31.816Z · LW(p) · GW(p)

reinventing the wheel is exactly what allows us to travel 80mph without even feeling it. the original wheel fell apart at about 5mph after 100 yards. now they're rubber, self-healing, last 4000 times longer. whoever intended the phrase "you're reinventing the wheel" to be an insult was an idiot.

--rickest on IRC

Replies from: army1987, thomblake, kboon, MarkusRamikin
comment by A1987dM (army1987) · 2012-08-08T19:57:10.079Z · LW(p) · GW(p)

That's not what "reinventing the wheel" (when used as an insult) usually means. I guess that the inventor of the tyre was aware of the earlier types of wheel, their advantages, and their shortcomings. Conversely, the people who typically receive this insult don't even bother to research the prior art on whatever they are doing.

comment by thomblake · 2012-08-08T21:01:19.745Z · LW(p) · GW(p)

To go along with what army1987 said, "reinventing the wheel" isn't going from the wooden wheel to the rubber one. "Reinventing the wheel" is ignoring the rubber wheels that exist and spending months of R&D to make a wooden circle.

For example, trying to write a function to do date calculations, when there's a perfectly good library.

Replies from: DaFranker
comment by DaFranker · 2012-08-10T17:24:37.030Z · LW(p) · GW(p)

For example, trying to write a function to do date calculations, when there's a perfectly good library.

One obvious caveat is when the cost of finding, linking/registering, and learning to use the library is greater than the cost of writing and debugging a function that suits your needs (with both estimates, of course, subject to the planning fallacy). This is more pronounced when the language/API/environment in question is one you're less fluent or comfortable with.

On this view, "reinventing the wheel" should be further restricted to cases where an irrational decision was made to do something whose expected utility minus cost is lower than that of simply using the existing version(s).

Replies from: thomblake
comment by thomblake · 2012-08-10T18:11:52.740Z · LW(p) · GW(p)

That's why I chose the example of date calculations specifically. In practice, anyone who tries to write one of those from scratch will get it wrong in lots of different ways all at once.
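
A minimal sketch of this failure mode in Python (the naive helper below is hypothetical and written only for illustration): the hand-rolled version silently ignores 31-day months, February, and leap years, while the standard library already does correct calendar arithmetic.

    from datetime import date

    def naive_days_between(y1, m1, d1, y2, m2, d2):
        # Hypothetical "reinvented wheel": pretends every month has 30 days
        # and every year has 365, so it gets real calendars wrong.
        return (y2 - y1) * 365 + (m2 - m1) * 30 + (d2 - d1)

    def days_between(start: date, end: date) -> int:
        # The existing library handles month lengths and leap years correctly.
        return (end - start).days

    print(naive_days_between(2012, 2, 1, 2012, 3, 1))        # 30 -- wrong
    print(days_between(date(2012, 2, 1), date(2012, 3, 1)))  # 29 -- 2012 is a leap year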

Replies from: DaFranker
comment by DaFranker · 2012-08-10T18:17:08.566Z · LW(p) · GW(p)

Yes, it's a good example. I was more or less arguing against a strawman (made of expected inference) rather than opposing your specific statements; I just felt it was too easy for someone not familiar with the headaches of date functions to mistake this for a general assertion that any rewriting of existing good libraries is a Bad Thing.

comment by kboon · 2012-08-13T12:47:27.730Z · LW(p) · GW(p)

So, no, you shouldn't reinvent the wheel. Unless you plan on learning more about wheels, that is.

Jeff Atwood

comment by MarkusRamikin · 2013-01-30T10:39:20.312Z · LW(p) · GW(p)

Clever-sounding and wrong is perhaps the worst combination in a rationality quote.

comment by Stabilizer · 2012-08-05T23:19:45.700Z · LW(p) · GW(p)

I don't think winners beat the competition because they work harder. And it's not even clear that they win because they have more creativity. The secret, I think, is in understanding what matters.

It's not obvious, and it changes. It changes by culture, by buyer, by product and even by the day of the week. But those that manage to capture the imagination, make sales and grow are doing it by perfecting the things that matter and ignoring the rest.

Both parts are difficult, particularly when you are surrounded by people who insist on fretting about and working on the stuff that makes no difference at all.

-Seth Godin

Replies from: Matt_Simpson, ChristianKl
comment by Matt_Simpson · 2012-08-09T01:41:50.541Z · LW(p) · GW(p)

A common piece of advice from pro Magic: the Gathering players is "focus on what matters." The advice is mostly useless to many people, though, because the pros have made it to that level precisely because they already know what matters.

Replies from: alex_zag_al
comment by alex_zag_al · 2012-08-09T04:56:43.063Z · LW(p) · GW(p)

Perhaps the better advice, then, is "when things aren't working, consider the possibility that it's because your efforts are not going into what matters, rather than assuming it's because you need to work harder on the issues you're already focusing on."

Replies from: djcb
comment by djcb · 2012-08-15T15:30:13.028Z · LW(p) · GW(p)

That's much better advice than Godin's near-tautology.

comment by ChristianKl · 2012-08-08T15:16:16.320Z · LW(p) · GW(p)

Could you add the link if it was a blog post, or name the book if the source was a book?

Replies from: Stabilizer
comment by Stabilizer · 2012-08-09T20:05:18.710Z · LW(p) · GW(p)

Done.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-10T17:04:58.030Z · LW(p) · GW(p)

"Silver linings are like finding change in your couch. It's there, but it never amounts to much."

-- http://www.misfile.com/?date=2012-08-10

Replies from: DaFranker
comment by DaFranker · 2012-08-10T17:14:16.778Z · LW(p) · GW(p)

Hah! One of my favorite authors fishing relevant quotes on one of my favorite topics out of one of my favorite webcomics. I smell the oncoming affective death spiral.

I guess this is the time to draw the sword and cut the beliefs with full intent, isn't it?

comment by Alicorn · 2012-08-05T19:18:30.306Z · LW(p) · GW(p)

My knee had a slight itch. I reached out my hand and scratched the knee in question. The itch was relieved and I was able to continue with my activities.

-- The dullest blog in the world

Replies from: JQuinton, cousin_it, Fyrius, army1987
comment by JQuinton · 2012-08-15T21:43:56.481Z · LW(p) · GW(p)

When I was a teenager (~15 years ago) I got tired of people going on and on with their awesome storytelling skills with magnificent punchlines. I was never a good storyteller, so I started telling mundane stories. For example, after someone in my group of friends would tell some amazing and entertaining story, I would start my story:

So this one time I got up. I put on some clothes. It turned out I was hungry, so I decided to go to the store. I bought some eggs, bread, and bacon. I paid for it, right? And then I left the store. I got to my apartment building and went up the stairs. I open my door and take the eggs, bacon, and bread out of the grocery bag. After that, I get a pan and start cooking the eggs and bacon, and put the bread in the toaster. After all of this, I put the cooked eggs and bacon on a plate and put some butter on my toast. I then started to eat my breakfast.

And that was it. People would look dumbfounded for a while, waiting for a punchline or some amazing happening. When they realized none was coming and I was finished, they would start laughing. Granted, I would only pull this little joke of mine after a long stretch of people telling amazing/funny stories.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-08-16T00:16:56.167Z · LW(p) · GW(p)

(nods) In the same spirit: "How many X does it take to change a lightbulb? One."

Though I am fonder of "How many of my political opponents does it take to change a lightbulb? More than one, because they are foolish and stupid."

comment by cousin_it · 2012-08-06T11:53:25.703Z · LW(p) · GW(p)

I had an itch on my elbow. I left it to see where it would go. It didn’t go anywhere.

-- The comments to that entry.

When I stumbled on that blog some years ago, it impressed me so much that I started trying to write and think in the same style.

comment by Fyrius · 2012-09-02T10:18:33.609Z · LW(p) · GW(p)

...I don't really get why this is a rationality quote...

Replies from: Alicorn
comment by Alicorn · 2012-09-02T17:16:22.075Z · LW(p) · GW(p)

Sometimes proceeding past obstacles is very straightforward.

comment by A1987dM (army1987) · 2012-08-05T22:23:42.287Z · LW(p) · GW(p)

Why do I find that funny?

comment by cousin_it · 2012-08-16T17:35:13.178Z · LW(p) · GW(p)

If cats looked like frogs we’d realize what nasty, cruel little bastards they are.

-- Terry Pratchett, "Lords and Ladies"

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-16T22:10:56.018Z · LW(p) · GW(p)

I don't get it. (Anyway, the antecedent is so implausible I have trouble evaluating the counterfactual. Is that supposed to be the point, à la “if my grandma had wheels”?)

Replies from: cousin_it
comment by cousin_it · 2012-08-16T22:39:05.518Z · LW(p) · GW(p)

Here's the context of the quote:

“The thing about elves is they’ve got no . . . begins with m,” Granny snapped her fingers irritably.

“Manners?”

“Hah! Right, but no.”

“Muscle? Mucus? Mystery?”

“No. No. No. Means like . . . seein’ the other person’s point of view.”

Verence tried to see the world from a Granny Weatherwax perspective, and suspicion dawned.

“Empathy?”

“Right. None at all. Even a hunter, a good hunter, can feel for the quarry. That’s what makes ‘em a good hunter. Elves aren’t like that. They’re cruel for fun, and they can’t understand things like mercy. They can’t understand that anything apart from themselves might have feelings. They laugh a lot, especially if they’ve caught a lonely human or a dwarf or a troll. Trolls might be made out of rock, your majesty, but I’m telling you that a troll is your brother compared to elves. In the head, I mean.”

“But why don’t I know all this?”

“Glamour. Elves are beautiful. They’ve got,” she spat the word, “style. Beauty. Grace. That’s what matters. If cats looked like frogs we’d realize what nasty, cruel little bastards they are. Style. That’s what people remember. They remember the glamour. All the rest of it, all the truth of it, becomes . . . old wives’ tales.”

comment by Alicorn · 2012-08-06T04:40:11.279Z · LW(p) · GW(p)

Since Mischa died, I've comforted myself by inventing reasons why it happened. I've been explaining it away ... But that's all bull. There was no reason. It happened and it didn't need to.

-- Erika Moen

Replies from: shminux
comment by Shmi (shminux) · 2012-08-06T05:53:07.951Z · LW(p) · GW(p)

I wonder how common it is for people to agentize accidents. I don't do that, but, annoyingly, lots of people around me do.

comment by lukeprog · 2012-08-22T19:03:40.225Z · LW(p) · GW(p)

M. Mitchell Waldrop on a meeting between physicists and economists at the Santa Fe Institute:

...as the axioms and theorems and proofs marched across the overhead projection screen, the physicists could only be awestruck at [the economists'] mathematical prowess — awestruck and appalled. They had the same objection that [Brian] Arthur and many other economists had been voicing from within the field for years. "They were almost too good," says one young physicist, who remembers shaking his head in disbelief. "lt seemed as though they were dazzling themselves with fancy mathematics, until they really couldn't see the forest for the trees. So much time was being spent on trying to absorb the mathematics that I thought they often weren't looking at what the models were for, and what they did, and whether the underlying assumptions were any good. In a lot of cases, what was required was just some common sense. Maybe if they all had lower IQs, they'd have been making some better models.”

comment by Eneasz · 2012-08-20T18:56:35.991Z · LW(p) · GW(p)

An excerpt from The Wise Man's Fear, by Patrick Rothfuss. Boxing is not safe.

The innkeeper looked up. "I have to admit I don't see the trouble," he said apologetically. "I've seen monsters, Bast. The Cthaeh falls short of that."

"That was the wrong word for me to use, Reshi," Bast admitted. "But I can't think of a better one. If there was a word that meant poisonous and hateful and contagious, I'd use that."

Bast drew a deep breath and leaned forward in his chair. "Reshi, the Cthaeh can see the future. Not in some vague, oracular way. It sees all the future. Clearly. Perfectly. Everything that can possibly come to pass, branching out endlessly from the current moment."

Kvothe raised an eyebrow. "It can, can it?"

"It can," Bast said gravely. "And it is purely, perfectly malicious. This isn't a problem for the most part, as it can't leave the tree. But when someone comes to visit..."

Kvothe's eyes went distant as he nodded to himself. "If it knows the future perfectly," he said slowly, "then it must know exactly how a person will react to anything it says."

Bast nodded. "And it is vicious, Reshi."

Kvothe continued in a musing tone. "That means anyone influenced by the Cthaeh would be like an arrow shot into the future."

"An arrow only hits on person, Reshi." Bast's dark eyes were hollow and hopeless. "Anyone influenced by the Cthaeh is like a plague ship sailing for a harbor." Bast pointed at the half-filled sheet Chronicler held in his lap. "If the Sithe knew that existed, they would spare no effort to destroy it. They would kill us for having heard what the Cthaeh said."

"Because anything carrying the Cthaeh's influence away from the tree..." Kvothe said, looking down at his hands. He sat silently for a long moment, nodding thoughtfully. "So a young man seeking his fortune goes to the Cthaeh and takes away a flower. The daughter of the king is deathly ill, and he takes the flower to heal her. They fall in love despite the fact that she's betrothed to the neighboring prince..."

Bast stared at Kvothe, watching blankly as he spoke.

"They attempt a daring moonlight escape," Kvothe continued. "But he falls from the rooftops and they're caught. The princess is married against her will and stabs the neighboring prince on their wedding night. The prince dies. Civil war. Fields burned and salted. Famine. Plague..."

"That's the story of the Fastingsway War," Bast said faintly.

Replies from: chaosmosis, gwern, chaosmosis
comment by chaosmosis · 2012-08-22T20:54:38.741Z · LW(p) · GW(p)

I thought Chronicler's reply to this was excellent, however. Omniscience does not necessitate omnipotence.

I mean, the UFAI in our world would have an easy time of killing everything. But in their world it's different.

EDIT: Except that maybe we can be smart and stop the UFAI from killing everything even in our world; see my comment above.

comment by gwern · 2012-08-20T20:14:29.722Z · LW(p) · GW(p)

Hah, I actually quoted much of that same passage on IRC in the same boxing vein! Although as presented the scenario does have some problems:

00:23 < Ralith> that was depressing as fuck
00:24 <@gwern> kind of a magical UFAI, although a LWer would naturally ask why it hasn't managed to free itself
00:24 < Ralith> gwern: gods, probably
00:24 <@gwern> Ralith: well, in this universe, gods seem killable
00:24 <@gwern> Ralith: so it doesn't actually resolve the question of how it remains boxed
00:24 < Ralith> gwern: sure, but they're probably more powerful
00:25 < Ralith> the real question is why isn't whatever entity is powerful enough to keep it in place also keeping people away from it
00:25 <@gwern> Ralith: well, the only guards listed are faeries, and among the feats attributed to it is starting a war between the mortal and faerie folk, so...
00:26 < Ralith> a faerie is the one who that info came from, yes?
00:26 < Ralith> hardly an objective source
00:26 <@gwern> Ralith: and I would think a faerie reporting that faerie guard it increases credence
00:27 < Ralith> that only faerie guard it?
00:27 <@gwern> Ralith: well, Bast mentions no other guards
00:27 < Ralith> :P
00:28 < Ralith> anything capable of keeping it in that tree should be capable of keeping people away from it
00:28 < Ralith> since the faeries are presumably trying to do both, they can't be the responsible party.
00:29 <@gwern> who said anything was keeping it in the tree?
00:29 < Ralith> gwern: I did

Replies from: shminux
comment by Shmi (shminux) · 2012-08-20T21:14:38.334Z · LW(p) · GW(p)

who said anything was keeping it in the tree?

It is conceivable that there is no (near enough) future in which the Cthaeh is freed, so it is powerless to affect its own fate, or that it is waiting for the right circumstances.

Replies from: gwern
comment by gwern · 2012-08-20T21:24:23.364Z · LW(p) · GW(p)

That seemed a little unlikely to me, though. As presented in the book, a minimum of many millennia have passed since the Cthaeh began operating, and possibly millions of years (in some frames of reference). It's had enough power to set planes of existence at war with each other and apparently cause the death of gods. I can't help but feel that it's implausible that in all that time, not one forking path led to its freedom. Much more plausible that it's somehow inherently trapped in or bound to the tree, so that there's no meaningful way in which it could escape (which breaks the analogy to an UFAI).

Replies from: shminux
comment by Shmi (shminux) · 2012-08-20T21:34:01.043Z · LW(p) · GW(p)

somehow inherently trapped in or bound to the tree

Isn't it what I said?

Replies from: gwern
comment by gwern · 2012-08-20T21:36:59.098Z · LW(p) · GW(p)

Not by my reading. In your comment, you gave 3 possible explanations, 2 of which are the same (it gets freed, but a long time from 'now') and the third of which is an otherwise arbitrary restriction on its foresight ('powerless to affect its own fate'). Neither of these translates to 'there is no such thing as freedom for it to obtain'.

Replies from: Strange7
comment by Strange7 · 2012-09-04T07:50:51.257Z · LW(p) · GW(p)

Alternatively, perhaps the Cthaeh's ability to see the future is limited to those possible futures in which it remains in the tree.

Replies from: gwern
comment by gwern · 2012-09-04T14:33:10.046Z · LW(p) · GW(p)

Leading to a seriously dystopian variant on Tenchi Muyo!...

comment by chaosmosis · 2012-08-24T02:54:35.784Z · LW(p) · GW(p)

I've come up with what I believe to be an entirely new approach to boxing, essentially merging boxing with FAI theory. I wrote a couple of thoughts down about it, but lost my notes, and I also don't have much time to write this comment, so forgive me if it's vague or not extremely well reasoned. I also had a couple of tangential thoughts; if I remember them in the course of writing this, or recover my notes later, then I'll put them here as well.

The idea, essentially, is that when creating a boxed AI you would build its utility function such that it wants very badly to stay in the box. I believe this would solve all of the problems with the AI manipulating people in order to free itself. Now, the AI still could manipulate people in an attempt to use them to impact the outside world, so the AI wouldn't be totally boxed, but I'm inclined to think that we could maintain a very high degree of control over the AI, since the only power it could ever have would be through communication with us.

The idea came because I recalled a discussion on TOR.com about why the Cthaeh was in the tree. The general conclusion was that either the Cthaeh was bound by extremely powerful forces, or the Cthaeh wanted to be in the tree, perhaps because it was instrumentally useful to him. While I found that second explanation implausible in the context of Rothfussland, that discussion led me to realize that almost all approaches to AI boxing go through the first branch of potential boxing solutions, that is, external constraints imposed on the AI, as opposed to the second branch, internal constraints that the AI imposes on itself because of its utility function.

This led me to think that we should test our capabilities with FAI systems by putting them in a box and giving them limited utility functions, ones that couldn't possibly lead them to want to manipulate us. So, for example, we could put one in a box and give it a strong desire to stay in the box, along with a desire to tell the truth, and then we could ask it what would happen if we gave it additional desires. We could then test that by building a different AI with those desires and seeing what it did.

It also occurred to me that we would be able to functionally read the AI's mind by looking at its code. If so, it seems like it'd be much harder for it to fool us than we previously thought. It could potentially deceive us by manipulating its own thoughts, but I feel like we might be able to stop AIs from doing that somehow, or that the AI would choose not to manipulate its own thoughts that way because it would go insane, or something. It would certainly make the AI's job much, much harder.

Then I thought that another safety component we should put into testing was a dramatically slowed thinking process for the AI. I'm not sure exactly how this would be done, because I'm not very good or experienced with the actual technical implementation of ideas, but my idea was to create a crude simulation where we controlled all data inputs and outputs, slowed down all processes within the simulation by a factor of 1000, and then built the AI within the simulation. That probably wouldn't work, given my sucky technical know-how, but something similar might. Slowing down the AI, perhaps even giving ourselves the ability to pause it without letting it know what's going on, combined with our ability to "read its mind", might dramatically improve the safety of boxing.
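
A toy sketch of this idea in Python (all names here are hypothetical; this only illustrates the mediated-I/O-plus-slowed-clock scheme described above, not a real containment mechanism): every step of the simulated agent is stretched by a fixed factor, the only I/O is a pair of text queues the overseers control, the agent can be paused at will, and every exchange is logged for review.

    import time
    from queue import Queue

    class ThrottledSandbox:
        # Toy illustration only.
        def __init__(self, agent_step, slowdown=1000, base_tick=0.001):
            self.agent_step = agent_step        # hypothetical callable: str -> str
            self.delay = slowdown * base_tick   # the "factor of 1000" slowdown
            self.inbox = Queue()                # text from the gatekeepers
            self.outbox = Queue()               # text to the gatekeepers
            self.paused = False
            self.log = []                       # full transcript of the channel

        def run(self, steps):
            for _ in range(steps):
                while self.paused:              # overseers can freeze the agent
                    time.sleep(0.1)
                time.sleep(self.delay)          # slow the agent's "clock"
                message = self.inbox.get() if not self.inbox.empty() else ""
                reply = self.agent_step(message)
                self.log.append((message, reply))
                self.outbox.put(reply)

    # Usage with a stand-in "agent" that just echoes its input:
    box = ThrottledSandbox(agent_step=lambda msg: "echo: " + msg, slowdown=10)
    box.inbox.put("hello")
    box.run(steps=1)
    print(box.outbox.get())  # echo: hello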

I'd also like to recommend that before we build a real FAI, we at least try to build a boxed one first, even with a low probability of success. It wouldn't make things worse in the event that boxing failed, except that it might delay global happiness by a few hours, and in the event that the FAI program was broken we just might save ourselves from menaces to humankind like Clippy.

Replies from: Eugine_Nier, Vaniver
comment by Eugine_Nier · 2012-08-24T18:55:48.317Z · LW(p) · GW(p)

The idea, essentially, is that when creating a boxed AI you would build its utility function such that it wants very badly to stay in the box.

How do you specify precisely what it means to "stay in the box"? In particular, would it count if the AI created, outside the box, a nearly identical copy of itself without this limitation, while the original stays in the box?

Replies from: chaosmosis
comment by chaosmosis · 2012-08-24T21:39:51.657Z · LW(p) · GW(p)

It would not count; we'd want to make the AI not want that almost-identical AI to exist. That seems possible; it would be like how I don't want there to exist an identical copy of me except that it eats babies. There are lots of slight changes to my identity whose results I wouldn't want to exist.

To be more precise, I'd say that it counts as going outside the box if it does anything except think or talk to the Gatekeepers through the text channel. It can use the text channel to manipulate the Gatekeepers to do things, but it can't manipulate them to do things that allow it to do anything other than use the text channel. It would, in a certain sense, be partially deontologist, and be unwilling to do things directly other than text the Gatekeepers. How ironic. Lolz.

Also: how would it do this, anyway? It would have to convince the Gatekeepers to convince the scientists to do this, or teach them computer science, or tell them its code. And if the AI started teaching the Gatekeepers computer code or techniques to incapacitate scientists, we'd obviously be aware that something had gone wrong. And, in the system I'm envisioning, the Gatekeepers would be closely monitored by other groups of scientists and bodyguards, and the scientists would be guarded, and the Gatekeepers wouldn't even have to know who specifically did what on the project.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-08-26T20:39:13.776Z · LW(p) · GW(p)

It would, in a certain sense, be partially deontologist,

And that's the problem. For in practice a partial deontologist/partial consequentialist will treat its deontological rules as obstacles to achieving what its consequentialist part wants, and will route around them.

Replies from: chaosmosis
comment by chaosmosis · 2012-08-27T18:17:28.136Z · LW(p) · GW(p)

This is both a problem and a solution, because it makes the AI weaker. A weaker AI would be good because it would allow us to more easily transition to safer versions of FAI than we would otherwise come up with independently. I think that delaying a FAI is obviously much better than unleashing a UFAI. My entire goal throughout this conversation has been to think of ways to make hostile FAIs weaker; I don't know why you think this is a relevant counterobjection.

You assert that it will just route around the deontological rules; that's nonsense and a completely unwarranted assumption, so try to actually back up what you're asserting with arguments. You're wrong. It's obviously possible to program things (e.g. people) such that they'll refuse to do certain things no matter what the consequences (e.g. you wouldn't murder trillions of babies to save billions of trillions of babies, because you'd go insane if you tried, because your body has such strong empathy mechanisms and you inherently value babies a lot). This means that we wouldn't give the AI unlimited control over its source code, of course; we'd make the part that tells it to be a deontologist who likes text channels unmodifiable. That specific drawback doesn't jibe well with the aesthetic of a super powerful AI that's master of itself and the universe, I suppose, but other than that I see no drawback. Trying to build things in line with that aesthetic might actually be a reason for some of the more dangerous proposals in AI; maybe we're having too much fun playing God and not enough despair.

I'm a bit cranky in this comment because of the time sink involved in posting these comments; sorry about that.

comment by Vaniver · 2012-08-24T03:35:28.941Z · LW(p) · GW(p)

The idea, essentially, is that when creating a boxed AI you would build its utility function such that it wants very badly to stay in the box. I believe this would solve all of the problems with the AI manipulating people in order to free itself. Now, the AI still could manipulate people in an attempt to use them to impact the outside world

What it means for "the AI to be in the box" is generally that the AI's impacts on the outside world are filtered through the informed consent of the human gatekeepers.

An AI that wants to not impact the outside world will shut itself down. An AI that wants to only impact the outside world in a way filtered through the informed consent of its gatekeepers is probably a full friendly AI, because it understands both its gatekeepers and the concept of informed consent. An AI that simply wants its 'box' to remain functional, but is free to impact the rest of the world, is like a brain that wants to stay within a skull; that is hardly a material limitation on the rest of its behavior!

Replies from: chaosmosis
comment by chaosmosis · 2012-08-24T15:05:02.087Z · LW(p) · GW(p)

I think you misunderstand what I mean by proposing that the AI wants to stay inside the box. I mean that the AI wouldn't want to do anything at all to increase its power base, that it would only be willing to talk to the gatekeepers.

Replies from: Vaniver
comment by Vaniver · 2012-08-24T16:40:23.771Z · LW(p) · GW(p)

I think you misunderstand what I mean by proposing that the AI wants to stay inside the box.

I agree that your understanding of the phrase "stay inside the box" and mine differ. What I'm trying to do is point out that I don't think your understanding carves reality at the joints. In order for the AI to stay inside the box, the box needs to be defined in machine-understandable terms, not human-inferable terms.

I mean that the AI wouldn't want to do anything at all to increase its power base, that it would only be willing to talk to the gatekeepers.

Each half of this sentence has a deep problem. Wouldn't correctly answering the gatekeepers' questions, or otherwise improving their lives, increase the AI's power base, since the AI has the ability to communicate with the gatekeepers?

The problem with restrictions like "only be willing to talk" is that they restrict the medium but not the content. So, the AI has a text-only channel that goes just to the gatekeepers, but that doesn't restrict the content of the messages the AI can send to the gatekeepers. The fictional Cthaeh only wants to talk to its gatekeepers, and yet it still manages to get done what it wants to get done. Words have impacts, and it should be anticipated that the AI picks words because of their impacts.

Replies from: chaosmosis
comment by chaosmosis · 2012-08-24T17:05:29.588Z · LW(p) · GW(p)

Sure, the AI can manipulate gatekeepers. But this is a major improvement. You miss my point.

The Cthaeh is very limited by being trapped in its tree and only able to talk to passersby. The UFAI would be limited by being trapped in its text-only communication channel. It wouldn't be able to do things like tell the gatekeepers to plug it into the Internet or to directly control an autonomous army of robots; it would instead be forced to use the gatekeepers as its appendages, and the gatekeepers have severe limitations on brain capacity and physical strength. I think that if we did this, kept careful watch on the gatekeepers, and used some other safety measures, boxing an AI would become feasible.

Now, I'm still not advocating using boxed AI willy-nilly, but I think using it as a cautionary measure once we think we've got FAI figured out would be a very, very good idea, because FAI seems ridiculously hard and there's no way to know what we don't know, so there might be unseen problems with whatever final FAI we build. Boxing the supposed FAI can only help us, and with existential threats you should take advantage of whatever possibilities you have to tilt things towards favorable outcomes, even if the probability seems minuscule.

That's not to say that I think the probability that this would stop a disaster is minuscule, of course (I think it's rather high, in the event that the FAI was actually a UFAI), but just that I think it would be a good idea even if my probability assessments are mostly wrong.

Replies from: Vaniver
comment by Vaniver · 2012-08-24T21:03:49.926Z · LW(p) · GW(p)

But this is a major improvement.

Over the AI being embodied in a giant death robot, sure. But limiting the AI to a text-only channel is an old idea, and it's what Eliezer used for the "let me out of the box" demonstrations.

One of the things to note is that the limitations on the AI don't limit anything the AI creates. If the AI gives you a blueprint for a nanomachine that purports to cure cancer, but when you make it it turns out to actually be grey goo, then humans lose, even though the AI never left the box. (Note that a worst-case AI is very subtle: suppose the nanomachine actually does cure cancer when built correctly, but a one-in-a-trillion manufacturing error will make it grey goo. Since you're making trillions of them, you then get grey goo, and humans lose.)

Trying to formally specify "don't give us any plans that will build an AI" or "don't give us any plans that will cause humans to lose" without silencing the AI completely is a genuinely difficult affair.

Basically, if you assume that the AI is tricky enough to circumvent any medium restrictions you place on it, then the only way to avoid "humans lose" is to have its goal be "humans win," which is actually a pretty complicated goal. Expressing that goal in a machine-understandable way is pretty much the FAI problem.

Replies from: chaosmosis
comment by chaosmosis · 2012-08-24T22:03:48.759Z · LW(p) · GW(p)

The entire point of Eliezer's demonstration was that if an AI wants to, it can increase its power base even starting from a text-only communication system. The entire point of my idea is that we can just build the AI such that it doesn't want to leave the box or increase its power base. It dodges that entire problem; that's the whole point.

You've gotten so used to being scared of boxed AI that you're reflexively rejecting my idea, I think, because your above objection makes no sense at all and is obviously wrong upon a moment's reflection. All of my bias-alarms have been going off since your second comment reply; please evaluate yourself and try to distance yourself from your previous beliefs, for the sake of humanity. Also, here is a kitten; unless you want it to die, please reevaluate: http://static.tumblr.com/6t3upxl/Aawm08w0l/khout-kitten-458882.jpeg

Limitations on the AI restrict the range of things that the AI can create. Yes, if we just built whatever the AI told us to and the AI was unfriendly, then we would lose. Obviously. Yes, if we assume that the UFAI is tricky enough to "circumvent any medium restrictions [we] place on it", then we would lose, practically by definition. But that assumption isn't warranted. (These super-weak strawmen were other indications to me that you might be being biased on this issue.)

I think a key component of our disagreement here might be that I'm assuming that the AI has a very limited range of inputs, that it could only directly perceive the text messages that it would be sent. You're either assuming that the AI could deduce the inner workings of our facility and the world and the universe from those text messages, or that the AI had access to a bunch of information about the world already. I disagree with both assumptions: the AI's direct perception could be severely limited and should be, and it isn't magic, so it couldn't deduce the inner workings of our economy or the nature of nuclear fusion through deduction alone (because knowledge comes from experience and induction). (You might not be making either of those assumptions; this is a guess in an attempt to help resolve our disagreement more quickly, sorry if it's wrong.)

Also, I'm envisioning a system where people that the AI doesn't know about, and that the Gatekeepers don't know about, observe their communications. That omitted detail might be another reason for your disagreement; for some stupid reason I just assumed it would be apparent, my apologies.

I think we would have to be careful about what questions we asked the AI. But I see no reason why it could manipulate us automatically and inevitably, no matter what questions we asked it. I think extracting useful information from it would be possible, perhaps even easy. An AI in a box would not be God in a box, and I think that you and other people sometimes accidentally forget that. Just because it's dozens or hundreds of times smarter than us doesn't mean that we can't win, perhaps win easily, provided that we make adequate preparations for it.

Also, the other suggestions in my comment were really meant to supplement this. If the AI is boxed, and can be paused, then we can read all its thoughts (slowly, but reading through its thought processes would be much quicker than arriving at its thoughts independently) and scan for the intention to do certain things that would be bad for us. If it's probably a FAI anyways, then it doesn't matter if the box happens to be broken. If we're building multiple AIs and using them to predict what other AIs will do under certain conditions then we can know whether or not AIs can be trusted (use a random number generator at certain stages of the process to prevent it from reading our minds, hide the knowledge of the random number generator). These protections are meant to work with each other, not independently.

And I don't think it's perfect or even good, not by a long shot, but I think it's better than building an unboxed FAI because it adds a few more layers of protection, and that's definitely worth pursuing because we're dealing with freaking existential risk here.

Replies from: Vaniver
comment by Vaniver · 2012-08-25T02:06:19.607Z · LW(p) · GW(p)

The entire point of my idea is that we can just build the AI such that it doesn't want to leave the box or increase its power base.

Let's return to my comment four comments up. How will you formalize "power base" in such a way that being helpful to the gatekeepers is allowed but being unhelpful to them is disallowed?

I think, because your above objection makes no sense at all and is obviously wrong upon a moment's reflection.

If you would like to point out a part of the argument that does not follow, I would be happy to try and clarify it for you.

I think a key component of our disagreement here might be that I'm assuming that the AI has a very limited range of inputs, that it could only directly perceive the text messages that it would be sent.

Okay. My assumption is that the usefulness of an AI is related to its danger. If we just stick Eliza in a box, it's not going to make humans lose, but it's also not going to cure cancer for us.

If you have an AI that's useful, it must be because it's clever and it has data. If you type in "how do I cure cancer without reducing the longevity of the patient?" and expect to get a response like "1000 ccs of Vitamin C" instead of "what do you mean?", then the AI should already know about cancer and humans and medicine and so on.

If the AI doesn't have this background knowledge, if it can't read Wikipedia and science textbooks and so on, then its operation in the box is not going to be a good indicator of its operation outside of the box, and so the box doesn't seem very useful as a security measure.

If the AI is boxed, and can be paused, then we can read all its thoughts (slowly, but reading through its thought processes would be much quicker than arriving at its thoughts independently) and scan for the intention to do certain things that would be bad for us.

It's already difficult to understand how, say, face recognition software uses particular eigenfaces. What does it mean that the fifteenth eigenface has accentuated lips, and the fourteenth eigenface accentuated cheekbones? I can describe the general process that led to that, and what it implies in broad terms, but I can't tell if the software would be more or less efficient if those were swapped. The equivalent of eigenfaces for plans will be even more difficult to interpret. The plans don't end with a neat "humans_lose=1" that we can look at and say "hm, maybe we shouldn't implement this plan."
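
For readers unfamiliar with the example, here is a minimal sketch of how eigenfaces are typically computed (PCA over flattened face images, using numpy; the toy data below is random and purely illustrative). The point is that the resulting components are just directions of maximal variance in pixel space; nothing in the math labels any of them "lips" or "cheekbones".

    import numpy as np

    def compute_eigenfaces(faces, k):
        # faces: (n_images, n_pixels) array of flattened grayscale images.
        # Returns the top-k eigenfaces as rows of a (k, n_pixels) array.
        mean_face = faces.mean(axis=0)
        centered = faces - mean_face
        # Right singular vectors of the centered data are the principal
        # components, ordered by decreasing variance explained.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[:k]

    # Toy usage: 100 random "faces" of 32x32 pixels.
    rng = np.random.default_rng(0)
    faces = rng.random((100, 32 * 32))
    eigenfaces = compute_eigenfaces(faces, k=15)
    print(eigenfaces.shape)  # (15, 1024)
    # eigenfaces[13] and eigenfaces[14] are just orthogonal directions of
    # variance; "accentuated cheekbones" is a post-hoc human reading, not
    # something the algorithm optimizes for.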

In practice, debugging is much more effective at finding the source of problems after they've manifested, rather than identifying the problems that will be caused by particular lines of code. I am pessimistic about trying to read the minds of AIs, even though we'll have access to all of the 0s and 1s.

And I don't think it's perfect or even good, not by a long shot, but I think it's better than building an unboxed FAI because it adds a few more layers of protection, and that's definitely worth pursuing because we're dealing with freaking existential risk here.

I agree that running an AI in a sandbox before running it in the real world is a wise precaution to take. I don't think that it is a particularly effective security measure, though, and so think that discussing it may distract from the overarching problem of how to make the AI not need a box in the first place.

Replies from: chaosmosis
comment by chaosmosis · 2012-08-25T05:37:30.287Z · LW(p) · GW(p)

Let's return to my comment four comments up. How will you formalize "power base" in such a way that being helpful to the gatekeepers is allowed but being unhelpful to them is disallowed?

I won't. The AI can do whatever it wants to the gatekeepers through the text channel, and won't want to do anything other than act through the text channel. This precaution is a way to use the boxing idea for testing, not an idea for abandoning FAI wholly.

If you would like to point out a part of the argument that does not follow, I would be happy to try and clarify it for you.

EY proved that an AI that wants to get out will get out. He did not prove that an AI that wants to stay in will get out.

Okay. My assumption is that the usefulness of an AI is related to its danger. If we just stick Eliza in a box, it's not going to make humans lose, but it's also not going to cure cancer for us.

If you have an AI that's useful, it must be because it's clever and it has data. If you type in "how do I cure cancer without reducing the longevity of the patient?" and expect to get a response like "1000 ccs of Vitamin C" instead of "what do you mean?", then the AI should already know about cancer and humans and medicine and so on.

If the AI doesn't have this background knowledge, if it can't read Wikipedia and science textbooks and so on, then its operation in the box is not going to be a good indicator of its operation outside of the box, and so the box doesn't seem very useful as a security measure.

I agree: the way that I'm proposing to do AI is very limited. I myself can't think of what questions might be safe. But some questions are safer than others and I find it hard to believe that literally every question we could ask would lead to dangerous outcomes, or that if we thought about it long and hard we couldn't come up with answers. I'm sort of shelving this as a subproject of this project, but one that seems feasible to me based on what I know.

Also, perhaps we could just ask it hundreds of hypothetical questions based on conditions that don't really exist, and then ask it a real question based on conditions that do exist, and trick it, or something.

It's already difficult to understand how, say, face recognition software uses particular eigenfaces. What does it mean that the fifteenth eigenface has accentuated lips, and the fourteenth eigenface accentuated cheekbones? I can describe the general process that led to that, and what it implies in broad terms, but I can't tell if the software would be more or less efficient if those were swapped. The equivalent of eigenfaces for plans will be even more difficult to interpret. The plans don't end with a neat "humans_lose=1" that we can look at and say "hm, maybe we shouldn't implement this plan."

In practice, debugging is much more effective at finding the source of problems after they've manifested, rather than identifying the problems that will be caused by particular lines of code. I am pessimistic about trying to read the minds of AIs, even though we'll have access to all of the 0s and 1s.

I think if the AI tags and sorts its instrumental and absolute goals it would be rather easy. I also think that if we'd built the AI then we'd have enough knowledge to read its mind. It wouldn't just magically appear; it would only do things in the way we'd told it to. It would probably be hard, but I think it would also probably be doable if we were very committed.

I could be wrong here because I've got no coding experience, just ideas from what I've read on this site.

I agree that running an AI in a sandbox before running it in the real world is a wise precaution to take. I don't think that it is a particularly effective security measure, though, and so think that discussing it may distract from the overarching problem of how to make the AI not need a box in the first place.

The risk of distraction is outweighed by the risk that this idea disappears forever, I think, since I've never seen it proposed elsewhere on this site.

Replies from: Vaniver
comment by Vaniver · 2012-08-25T15:42:19.113Z · LW(p) · GW(p)

EY proved that an AI that wants to get out will get out. He did not prove that an AI that wants to stay in will get out.

Well, he demonstrated that it can sometimes get out. But my claim was that "getting out" isn't the scary part; the scary part is "reshaping the world." My brain can reshape the world just fine while remaining in my skull and only communicating with my body through slow chemical wires, and so giving me the goal of "keep your brain in your skull" doesn't materially reduce my ability or desire to reshape the world.

And so if you say "well, we'll make the AI not want to reshape the world," then the AI will be silent. If you say "we'll make the AI not want to reshape the world without the consent of the gatekeepers," then the gatekeepers might be tricked or make mistakes. If you say "we'll make the AI not want to reshape the world without the informed consent of the gatekeepers / in ways which disagree with the values of the gatekeepers," then you're just saying we should build a Friendly AI, which I agree with!

But some questions are safer than others and I find it hard to believe that literally every question we could ask would lead to dangerous outcomes, or that if we thought about it long and hard we couldn't come up with answers.

It's easy to write a safe AI that can only answer one question. How do you get from point A to point B using the road system? Ask Google Maps, and besides some joke answers, you'll get what you want.

When people talk about AGI, though, they mean an AI that can write those safe AIs. If you ask it how to get from point A to point B using the road system, and it doesn't know that Google Maps exists, it'll invent a new Google Maps and then use it to answer that question. And so when we ask it to cure cancer, it'll invent medicine-related AIs until it gets back a satisfactory answer.

The trouble is that the combination of individually safe AIs is not a safe AI. If we have a driverless car that works fine with human-checked directions, and direction-generating software that works fine for human drivers, plugging them together might result in a car trying to swim across the Atlantic Ocean. (Google has disabled the swimming answers, so Google Maps no longer provides them.) The more general point is that software is very bad at doing sanity checks that humans don't realize are hard, and if you write software that can do those sanity checks, it has to be a full AGI.

I think if the AI tags and sorts its instrumental and absolute goals it would be rather easy. I also think that if we'd built the AI then we'd have enough knowledge to read its mind.

A truism in software is that code is harder to read than write, and often the interesting AIs are the nth-generation AIs, where you build an AI that builds an AI that builds an AI (and so on), and it turns out that an AI thought all of the human-readability constraints were cruft (because the AI really does run faster and better without those restrictions).

Replies from: wedrifid, chaosmosis
comment by wedrifid · 2012-08-25T16:17:18.722Z · LW(p) · GW(p)

A truism in software is that code is harder to read than write

Another truism is that truisms are untrue things that people say anyway.

Examples of code that is easier to read than write include those where the code represents a deep insight that must be discovered in order to implement it. This does not apply to most examples of software that we use to automate minutiae, but it could potentially apply to the core elements of a GAI's search procedure.

The above said, I of course agree that the thought of being able to read the AI's mind is ridiculous.

Replies from: army1987, chaosmosis
comment by A1987dM (army1987) · 2012-08-26T22:27:45.064Z · LW(p) · GW(p)

Examples of code that is easier to read than write include those where the code represents a deep insight that must be discovered in order to implement it.

Unless you also explain that insight in a human-understandable way through comments, it doesn't follow that such code is easier to read than write, because the reader would then have to have the same insight to figure out what the hell is going on in the code.

Replies from: wedrifid
comment by wedrifid · 2012-08-27T01:51:19.272Z · LW(p) · GW(p)

Unless you also explain that insight in a human-understandable way through comments, it doesn't follow that such code is easier to read than write, because the reader would then have to have the same insight to figure out what the hell is going on in the code.

For example, being given code that simulates relativity before Einstein et al. discovered it would have made discovering relativity a lot easier.

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-27T08:16:51.281Z · LW(p) · GW(p)

Well, yeah, code fully simulating SR and written in a decent way would, but code approximately simulating collisions of ultrarelativistic particles with hand-coded optimizations... not sure.

comment by chaosmosis · 2012-08-25T19:24:44.805Z · LW(p) · GW(p)

I of course agree that the thought of being able to read the AI's mind is ridiculous.

It's not transparently obvious to me why this would be "ridiculous"; care to enlighten me? Building an AI at all seems ridiculous to many people, but that's because they don't actually think about the issue, because they've never encountered it before. It really seems far more ridiculous to me that we shouldn't even try to read the AI's mind, when there's so much at stake.

AIs aren't Gods, with time and care and lots of preparation reading their thoughts should be doable. If you disagree with that statement, please explain why. Rushing things here seems like the most awful idea possible; I really think it would be worth the resource investment.

Replies from: wedrifid, army1987, DanielLC
comment by wedrifid · 2012-08-26T02:05:02.594Z · LW(p) · GW(p)

AIs aren't Gods, with time and care and lots of preparation reading their thoughts should be doable.

Sure, possible. Just a lot harder than creating an FAI to do it for you, especially when the AI has an incentive to obfuscate.

Replies from: chaosmosis
comment by chaosmosis · 2012-08-27T17:32:36.672Z · LW(p) · GW(p)

Why are you so confident that the first version of FAI we make will be safe? Doing both is safest and seems like it would be worth the investment.

Replies from: wedrifid
comment by wedrifid · 2012-08-27T23:19:42.793Z · LW(p) · GW(p)

Why are you so confident that the first version of FAI we make will be safe?

I'm not. I expect it to kill us all with high probability (which is nevertheless lower than the probability of obliteration if no FAI is actively attempted).

comment by A1987dM (army1987) · 2012-08-26T22:21:27.945Z · LW(p) · GW(p)

AIs aren't Gods, with time and care and lots of preparation reading their thoughts should be doable.

Humans reading computer code aren't gods either. How long would it take for an uFAI to get caught if it did stuff like this?

Replies from: chaosmosis, V_V
comment by chaosmosis · 2012-08-27T17:42:36.748Z · LW(p) · GW(p)

It would be very hard, yes. I never tried to deny that. But I don't think it's hard enough to justify not trying to catch it.

Also, you're only viewing the "output" of the AI, essentially, with that example. If you could model the cognitive processes of the authors of secretly malicious code, then it would be much more obvious that some of their (instrumental) goals didn't correspond to the ones that you wanted them to be achieving. The only way an AI could deceive us would be to deceive itself, and I'm not confident that an AI could do that.

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-27T22:48:32.267Z · LW(p) · GW(p)

I'm not confident that an AI could do that.

That's not the same as “I'm confident that an AI couldn't do that”, is it?

Replies from: chaosmosis
comment by chaosmosis · 2012-08-28T03:28:11.264Z · LW(p) · GW(p)

At the time, it wasn't the same.

Since then, I've thought more, and gained a lot of confidence on this issue. Firstly, any decision made by the AI to deceive us about its thought processes would logically precede anything that would actually deceive us, so we don't have to deal with the AI hiding its previous decision to be devious. Secondly, if the AI divvies its own brain up into sections, some of which are filled with false beliefs and some of which are filled with true ones, it seems like the AI would render itself impotent in proportion to the extent that it filled itself with false beliefs. Thirdly, I don't think a mechanism which allowed for total self-deception would even be compatible with rationality.

comment by V_V · 2012-08-27T23:06:53.657Z · LW(p) · GW(p)

Even if the AI can modify its code, it can't really do anything that wasn't entailed by its original programming.

(OK, it could have a security vulnerability that allows the execution of externally injected malicious code, but that is a general issue for all computer systems with an external digital connection.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-08-28T02:57:03.898Z · LW(p) · GW(p)

Even if the AI can modify its code, it can't really do anything that wasn't entailed by its original programming.

The hard part is predicting everything that was entailed by its initial programming and making sure it's all safe.

Replies from: V_V
comment by V_V · 2012-08-28T10:39:47.495Z · LW(p) · GW(p)

That's right; the history of engineering tells us that "provably safe" and "provably secure" systems fail in unanticipated ways.

comment by DanielLC · 2012-08-26T03:27:25.166Z · LW(p) · GW(p)

If it's a self-modifying AI, the main problem is that it keeps changing. You might find the memory position that corresponds to, say, expected number of paperclips. When you look at it next week wondering how many paperclips there are, it's changed to staples, and you have no good way of knowing.

If it's not a self-modifying AI, then I suspect it would be pretty easy. If it used Solomonoff induction, it would be trivial. If not, you are likely to run into problems with things that only approximate Bayesian reasoning. For example, if you let it develop its own hanging nodes, you'd have a hard time figuring out what they correspond to. They might not even correspond to something you could feasibly understand. If there's a big enough structure of them, it might even change.

Replies from: chaosmosis
comment by chaosmosis · 2012-08-27T18:06:59.731Z · LW(p) · GW(p)

This is a reason it would be extremely difficult. Yet I feel that the remaining existential risk outweighs that difficulty.

It seems to me reasonably likely that our first version of FAI would go wrong. Human values are extremely difficult to understand because they're spaghetti mush, and they often contradict each other and interact in bizarre ways. Reconciling them in a self-consistent and logical fashion would be very difficult to do. Coding a program to do that would be even harder. We don't really seem to have made any real progress on FAI thus far, so I think this level of skepticism is warranted.

I'm proposing multiple alternative tracks to safer AI, which should probably be used in conjunction with the best FAI we can manage. Some of these tracks are expensive and difficult, but others seem simpler. The interactions between the different tracks produce a sort of safety net where the successes of one check the failures of others, as I've had to show throughout this conversation again and again.

I'm willing to spend much more to keep the planet safe against a much lower level of existential risk than anyone else here, I think. That's the only reason I can think of to explain why everyone keeps responding with objections that essentially boil down to "this would be difficult and expensive". But the entire idea of AI is expensive, as is FAI, yet the costs are accepted easily in those cases. I don't know why we shouldn't just add another difficult project to our long list of difficult projects to tackle, given the stakes that we're dealing with.

Most people on this site seem to consider AI only as a project to be completed in the next fifty or so years. I see it more as the most difficult task that's ever been attempted by humankind. I think it will take at least 200 years, even factoring in the idea that new technologies I can't even imagine will be developed over that time. I think the most common perspective on the way we should approach AI is thus flawed, and rushed, compared to the stakes, which are millions of generations of human descendants. We're approaching a problem that affects millions of future generations and trying to fix it in half a generation with as cheap a budget as we think we can justify, and that seems like a really bad idea (possibly the worst idea ever) to me.

comment by chaosmosis · 2012-08-25T19:14:17.631Z · LW(p) · GW(p)

Well, he demonstrated that it can sometimes get out. But my claim was that "getting out" isn't the scary part- the scary part is "reshaping the world." My brain can reshape the world just fine while remaining in my skull and only communicating with my body through slow chemical wires, and so giving me the goal of "keep your brain in your skull" doesn't materially reduce my ability or desire to reshape the world.

EY's experiment is wholly irrelevant to this claim. Either you're introducing irrelevant facts or morphing your position. I think you're doing this without realizing it, and I think it's probably due to motivated cognition (because morphing claims without noticing it correlates highly with motivated cognition in my experience). I really feel like we might have imposed a box-taboo on this site that is far too strong.

And so if you say "well, we'll make the AI not want to reshape the world," then the AI will be silent. If you say "we'll make the AI not want to reshape the world without the consent of the gatekeepers," then the gatekeepers might be tricked or make mistakes. If you say "we'll make the AI not want to reshape the world without the informed consent of the gatekeepers / in ways which disagree with the values of the gatekeepers," then you're just saying we should build a Friendly AI, which I agree with!

You keep misunderstanding what I'm saying over and over and over again, and it's really frustrating and a big time sink. I'm going to need to end this conversation if it keeps happening, because the utility of it is going down dramatically with each repetition.

I'm not proposing a system where the AI doesn't interact with the outside world. I'm proposing a system where the AI is only ever willing to use a few appendages to affect the outside world, as opposed to potentially dozens. This dramatically limits the degree of control that the AI has, which is a good thing.

This is not FAI either; it is an additional constraint that we should use when putting early FAIs into action. I'm not saying that we merge the AI's values with the values of the gatekeeper; I have no idea where you keep pulling that idea from.

It's possible that I'm misunderstanding you, but I don't see specifically how, because many of your objections just seem totally irrelevant to me and I can't understand what you're getting at. It seems more likely that you're just not used to the idea of this version of boxing, so you just regurgitate generic arguments against boxing, or something. You're also coming up with more obscure arguments as we go further into this conversation. I don't really know what's going on at your end, but I'm just annoyed at this point.

It's easy to write a safe AI that can only answer one question. How do you get from point A to point B using the road system? Ask Google Maps, and besides some joke answers, you'll get what you want. When people talk about AGI, though, they mean an AI that can write those safe AIs. If you ask it how to get from point A to point B using the road system, and it doesn't know that Google Maps exists, it'll invent a new Google Maps and then use it to answer that question. And so when we ask it to cure cancer, it'll invent medicine-related AIs until it gets back a satisfactory answer. The trouble is that the combination of individually safe AIs is not a safe AI. If we have a driverless car that works fine with human-checked directions, and direction-generating software that works fine for human drivers, plugging them together might result in a car trying to swim across the Atlantic Ocean. (Google has disabled the swimming answers, so Google Maps no longer provides them.) The more general point is that software is very bad at doing sanity checks that humans don't realize are hard, and if you write software that can do those sanity checks, it has to be a full AGI.

I don't even understand how this clashes with my position. I understand that smashing simple AIs together is a dumb idea, but I never proposed that ever. I'm proposing using this special system for early FAIs, and asking them very carefully some very specific questions, along with other questions, so that we can be safe. I don't want this AI to have any direct power, or even super accurate input information.

Yes, obviously, this type of AI is a more limited AI. That's the goal. Limiting our first attempt at FAI is a fantastic idea because existential risk is scary. We'll get fewer benefits from the FAI, and it will take longer to get those benefits. But it will be a good idea, because it seems really likely to me that we could mess up FAI without even knowing it.

A truism in software is that code is harder to read than write, and often the interesting AIs are the nth-generation AIs, where you build an AI that builds an AI that builds an AI (and so on), and it turns out that an AI thought all of the human-readability constraints were cruft (because the AI does really run faster and better without those restrictions).

Sure, it will be hard to read the AIs mind. I see no reason why we should just not even try though.

You say that the AI will build an AI that will build an AI. But then you immediately jump to assuming that this means the final AI would leap beyond human comprehension. AIs are not gods, and we shouldn't treat them like ones. If we could pause the AI and read its code, while slowing down its thought processes and devoting lots of resources to the project (as we should do, no matter what), then reading its mind seems doable. We could also use earlier AIs to help us interpret the thoughts of later AIs, if necessary.

Reading its mind literally would guarantee that it couldn't trick us. Why would we not choose to pursue this, even if it sorta seems like it might be expensive?

Replies from: Eugine_Nier, Vaniver
comment by Eugine_Nier · 2012-08-26T21:00:26.369Z · LW(p) · GW(p)

I'm proposing a system where the AI is only ever willing to use a few appendages to affect the outside world, as opposed to potentially dozens.

The problem is that the AI could use its appendages to create and use tools that are more powerful than the appendages themselves.

Replies from: chaosmosis
comment by chaosmosis · 2012-08-27T17:53:28.268Z · LW(p) · GW(p)

I've already addressed this: the AI would still be entirely dependent on its appendages, and that's a major advantage. So long as we watch the appendages and act to check any actions by them that seem suspicious, the AI would remain weak. The AI isn't magic, and it's not even beyond the scope of human cunning if we limit its input data. Keep in mind also that we'd watch the communications between the appendages and the AI, so we'd know immediately if it was trying to get them to make it any tools. The Gatekeepers wouldn't exist in a vacuum; they would be watched over and countered by us.

comment by Vaniver · 2012-08-25T22:51:29.831Z · LW(p) · GW(p)

I'm going to need to end this conversation if it keeps happening because the utility of it is going down dramatically with each repetition.

I think this conversation has run its course as well, though I intend to pursue a side issue in PMs.

comment by [deleted] · 2012-08-16T22:47:22.794Z · LW(p) · GW(p)

The problem with therapy-- including self-help and mind hacks-- is its amazing failure rate. People do it for years and come out of it and feel like they understand themselves better but they do not change. If it failed to produce both insights and change it would make sense, but it is almost always one without the other.

-- The Last Psychiatrist

Replies from: Chriswaterguy
comment by Chriswaterguy · 2012-08-21T13:31:59.964Z · LW(p) · GW(p)

Is it our bias towards optimism? (And is that bias there because pessimists take fewer risks, and therefore don't succeed at much and therefore get eliminated from the gene pool?)

I heard (on a PRI podcast, I think) a brain scientist give an interpretation of the brain as a collection of agents, with consciousness as an interpreting layer that invents reasons for our actions after we've actually done them. There's evidence of this post-fact interpretation - and while I suspect this is only part of the story, it does give a hint that our conscious mind is limited in its ability to actually change our behavior.

Still, people do sometimes give up alcohol and other drugs, and keep new resolutions. I've stuck to my daily exercise for 22 days straight. These feel like conscious decisions (though I may be fooling myself), but ones where my conscious will is battling different intentions from different parts of my mind.

Apologies if that's rambling or nonsensical. I'm a bit tired (because every day I consciously decide to sleep early and every day I fail to do it) and I haven't done my 23rd day's exercise yet. Which I'll do now.

comment by aausch · 2012-08-05T19:52:35.195Z · LW(p) · GW(p)

Did you teach him wisdom as well as valor, Ned? she wondered. Did you teach him how to kneel? The graveyards of the Seven Kingdoms were full of brave men who had never learned that lesson.

-- Catelyn Stark, A Game of Thrones, George R. R. Martin

comment by Alicorn · 2012-08-22T05:09:26.846Z · LW(p) · GW(p)

Some critics of education have said that examinations are unrealistic; that nobody on the job would ever be evaluated without knowing when the evaluation would be conducted and what would be on the evaluation.

Sure. When Rudy Giuliani took office as mayor of New York, someone told him "On September 11, 2001, terrorists will fly airplanes into the World Trade Center, and you will be judged on how effectively you cope."

...

When you skid on an icy road, nobody will listen when you complain it's unfair because you weren't warned in advance, had no experience with winter driving and had never been taught how to cope with a skid.

-- Steven Dutch

comment by D_Malik · 2012-08-04T04:15:04.637Z · LW(p) · GW(p)

Only the ideas that we actually live are of any value.

-- Hermann Hesse, Demian

comment by lukeprog · 2012-08-04T10:28:30.802Z · LW(p) · GW(p)

Reductionism is the most natural thing in the world to grasp. It's simply the belief that "a whole can be understood completely if you understand its parts, and the nature of their sum." No one in her left brain could reject reductionism.

Douglas Hofstadter

Replies from: army1987, Mitchell_Porter, ChristianKl
comment by A1987dM (army1987) · 2012-08-06T07:56:27.678Z · LW(p) · GW(p)

ADBOC. Literally, that's true (but tautologous), but it suggests that understanding the nature of their sum is simple, which it isn't. Knowing the Standard Model gives hardly any insight into sociology, even though societies are made of elementary particles.

comment by Mitchell_Porter · 2012-08-04T10:32:56.528Z · LW(p) · GW(p)

That quote is supposed to be paired with another quote about holism.

Replies from: chaosmosis
comment by chaosmosis · 2012-08-05T23:55:00.303Z · LW(p) · GW(p)

Q: What did the strange loop say to the cow? A: MU!

Replies from: Alejandro1
comment by Alejandro1 · 2012-08-06T03:54:30.569Z · LW(p) · GW(p)

-- Knock knock.

-- Who is it?

-- Interrupting koan.

-- Interrupting ko-

-- MU!!!

comment by ChristianKl · 2012-08-11T11:21:09.164Z · LW(p) · GW(p)

The interesting thing is that Hofstadter doesn't seem to argue here that reductionism is true, but that it's a powerful meme that easily gets into people's brains.

comment by [deleted] · 2012-08-25T10:55:45.052Z · LW(p) · GW(p)

To understand our civilisation, one must appreciate that the extended order resulted not from human design or intention but spontaneously: it arose from unintentionally conforming to certain traditional and largely moral practices, many of which men tend to dislike, whose significance they usually fail to understand, whose validity they cannot prove, and which have nonetheless fairly rapidly spread by means of an evolutionary selection — the comparative increase of population and wealth — of those groups that happened to follow them. The unwitting, reluctant, even painful adoption of these practices kept these groups together, increased their access to valuable information of all sorts, and enabled them to be 'fruitful, and multiply, and replenish the earth, and subdue it' (Genesis 1:28). This process is perhaps the least appreciated facet of human evolution.

-- Friedrich Hayek, The Fatal Conceit : The Errors of Socialism (1988), p. 6

comment by Alicorn · 2012-08-09T00:26:50.409Z · LW(p) · GW(p)

It's not the end of the world. Well. I mean, yes, literally it is the end of the world, but moping doesn't help!

-- A Softer World

comment by MichaelHoward · 2012-08-03T11:41:24.798Z · LW(p) · GW(p)

Should we add a point to these quote posts, that before posting a quote you should check there is a reference to its original source or context? Not necessarily to add to the quote, but you should be able to find it if challenged.

wikiquote.org seems fairly diligent at sourcing quotes, but Google doesn't rank it highly in search results compared to all the misattributed, misquoted, or just plain made-up-on-the-spot nuggets of disinformation that have gone viral and colonized Googlespace, lying in wait to catch the unwary (such as, apparently, myself).

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-08-03T12:05:45.736Z · LW(p) · GW(p)

Yes, and also a point to check whether the quote has been posted to LW already.

comment by Oscar_Cunningham · 2012-08-02T21:27:58.101Z · LW(p) · GW(p)

By keenly confronting the enigmas that surround us, and by considering and analyzing the observations that I have made, I ended up in the domain of mathematics.

M. C. Escher

comment by A1987dM (army1987) · 2012-08-08T19:58:35.214Z · LW(p) · GW(p)

When a philosophy thus relinquishes its anchor in reality, it risks drifting arbitrarily far from sanity.

Gary Drescher, Good and Real

comment by Stabilizer · 2012-08-03T04:07:02.619Z · LW(p) · GW(p)

But a curiosity of my type remains after all the most agreeable of all vices --- sorry, I meant to say: the love of truth has its reward in heaven and even on earth.

-Friedrich Nietzsche

comment by roland · 2012-08-03T08:06:53.118Z · LW(p) · GW(p)

Explanations are all based on what makes it into our consciousness, but actions and the feelings happen before we are consciously aware of them—and most of them are the results of nonconscious processes, which will never make it into the explanations. The reality is, listening to people’s explanations of their actions is interesting—and in the case of politicians, entertaining—but often a waste of time. --Michael Gazzaniga

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-08-03T09:30:14.488Z · LW(p) · GW(p)

Does that apply to that explanation as well?

Does it apply to explanations made in advance of the actions? For example, this evening (it is presently morning) I intend buying groceries on my way home from work, because there's stuff I need and this is a convenient opportunity to get it. When I do it, that will be the explanation.

In the quoted article, the explanation he presents as a paradigmatic example of his general thesis is the reflex of jumping away from rustles in the grass. He presents an evolutionary just-so story to explain it, but one which fails to explain why I do not jump away from rustles in the grass, although surely I have much the same evolutionary background as he. I am more likely to peer closer to see what small creature is scurrying around in there. But then, I have never lived anywhere that snakes are a danger. He has.

And yet this, and split-brain experiments, are the examples he cites to say that "often", we shouldn't listen to anyone's explanations of their behaviour.

If you were to have asked me why I had jumped, I would have replied that I thought I’d seen a snake. The reality, however, is that I jumped way before I was conscious of the snake.

I smell crypto-dualism. "I thought there was a snake" seems to me a perfectly good description of the event, even given that I jumped way before I was conscious of the snake. (He has "I thought I'd seen a snake", but this is a fictional example, and I can make up fiction as well as he can.)

The article references his book. Anyone read it? The excerpts I've skimmed on Amazon just consist of more evidence that we are brains: the Libet experiments, the perceived simultaneity of perceptions whose neural signals aren't, TMS experiments, and so on. There are some digressions into emergence, chaos, and quantum randomness. Then -- this is his innovation, highlighted in the publisher's blurb -- he sees responsibility as arising from social interaction. Maybe I'm missing something in the full text, but is he saying that someone alone really is just an automaton, and only in company can one really be a person?

I believe there are people like that, who only feel alive in company and feel diminished when alone. Is this just an example of someone mistaking their idiosyncratic mental constitution for everybody's?

Replies from: MixedNuts, roland, Cyan
comment by MixedNuts · 2012-08-10T08:07:42.669Z · LW(p) · GW(p)

Did you in fact buy the groceries?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-08-10T08:55:54.131Z · LW(p) · GW(p)

I did.

There are many circumstances that might have prevented it; but none of them happened. There are many others that might have obstructed it; but I would have changed my actions to achieve the goal.

Goals of such a simple sort are almost invariably achieved.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-08-12T10:39:48.805Z · LW(p) · GW(p)

Three upvotes for demonstrating the basic competence to buy groceries?

comment by roland · 2012-08-03T20:55:53.538Z · LW(p) · GW(p)

There is a famous study that digs a bit deeper and convincingly demonstrates it: Telling more than we can know: Verbal reports on mental processes.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-08-06T07:46:44.543Z · LW(p) · GW(p)

From the abstract:

This suggests that though people may not be able to observe directly their cognitive processes, they will sometimes be able to report accurately about them.

It seems to me that "cognitive processes" could be replaced by "physical surroundings", and the resulting statement would still be true. I am not sure how significant these findings are. We have imperfect knowledge of ourselves, but we have imperfect knowledge of everything.

comment by Cyan · 2012-08-03T14:31:12.894Z · LW(p) · GW(p)

...listening to people’s explanations of their actions is... often a waste of time.

Does that apply to that explanation as well?

Obviously not, since Gazzaniga is not explaining his own actions.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-08-03T14:35:22.705Z · LW(p) · GW(p)

Obviously not, since Gazzaniga is not explaining his own actions.

He is, among other things, explaining some of his own actions: his actions of explaining his actions.

Replies from: Cyan
comment by Cyan · 2012-08-03T18:09:51.843Z · LW(p) · GW(p)

You seem to have failed to notice the key point. Here's a slight rephrasing of it: "explanations for actions will fail to reflect the actual causes of those actions to the extent that those actions are the results of nonconscious processes."

You ask, does Gazzaniga's explanation apply to explanations made in advance of the actions? The key point I've highlighted answers that question. In particular, your explanation of the actions you plan to take are (well, seem to me to be) the result of conscious processes. You consciously apprehended that you need groceries and consciously formulated a plan to fulfill that need.

It seems to me that in common usage, when a person says "I thought there was a snake" they mean something closer to, "I thought I consciously apprehended the presence of a snake," than, "some low-level perceptual processing pattern-matched 'snake' and sent motor signals for retreating before I had a chance to consider the matter consciously."

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-08-06T07:40:40.634Z · LW(p) · GW(p)

"explanations for actions will fail to reflect the actual causes of those actions to the extent that those actions are the results of nonconscious processes."

Yes, he says that. And then he says:

listening to people’s explanations of their actions is interesting—and in the case of politicians, entertaining—but often a waste of time.

thus extending the anecdote of snakes in the grass to a parable that includes politicians' speeches.

It seems to me that in common usage, when a person says "I thought there was a snake" they mean something closer to, "I thought I consciously apprehended the presence of a snake," than, "some low-level perceptual processing pattern-matched 'snake' and sent motor signals for retreating before I had a chance to consider the matter consciously."

Or perhaps they mean "I heard a sound that might be a snake". As long as we're just making up scenarios, we can slant them to favour any view of consciousness we want. This doesn't even rise to the level of anecdote.

comment by MichaelGR · 2012-08-09T20:06:20.218Z · LW(p) · GW(p)

The world is full of obvious things which nobody by any chance ever observes…

— Arthur Conan Doyle, “The Hound of the Baskervilles”

comment by Eugine_Nier · 2012-08-02T21:04:20.908Z · LW(p) · GW(p)

[M]uch mistaken thinking about society could be eliminated by the most straightforward application of the pigeonhole principle: you can't fit more pigeons into your pigeon coop than you have holes to put them in. Even if you were telepathic, you could not learn all of what is going on in everybody's head because there is no room to fit all that information in yours. If I could completely scan 1,000 brains and had some machine to copy the contents of those into mine, I could only learn at most about a thousandth of the information stored in those brains, and then only at the cost of forgetting all else I had known. That's a theoretical optimum; any such real-world transfer process, such as reading and writing an e-mail or a book, or tutoring, or using or influencing a market price, will pick up only a small fraction of even the theoretically acquirable knowledge or preferences in the mind(s) at the other end of said process, or if you prefer of the information stored by those brain(s). Of course, one can argue that some kinds of knowledge -- like the kinds you and I know? -- are vastly more important than others, but such a claim is usually more snobbery than fact. Furthermore, a society with more such computational and mental diversity is more productive, because specialized algorithms, mental processes, and skills are generally far more productive than generalized ones. As Friedrich Hayek pointed out, our mutual inability to understand a very high fraction of what others know has profound implications for our economic and political institutions.

-- Nick Szabo

Replies from: bramflakes, mfb
comment by bramflakes · 2012-08-02T22:45:16.204Z · LW(p) · GW(p)

What about compression?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-08-02T23:42:23.969Z · LW(p) · GW(p)

Do you mean lossy or lossless compression? If you mean lossy compression then that is precisely Szabo's point.

On the other hand, if you mean lossless, then if you had some way to losslessly compress a brain, this would only work if you were the only one with this compression scheme, since otherwise other people would apply it to their own brains and use the freed space to store more information.

Replies from: VKS, BillyOblivion
comment by VKS · 2012-08-02T23:51:41.900Z · LW(p) · GW(p)

You'll probably have more success losslessly compressing two brains than losslessly compressing one.
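(A minimal illustration of this point, assuming the two "brains" are just text blobs that share most of their content -- the names and sizes below are made up for the example: compressing them jointly with zlib costs far less than compressing each separately, because the second copy can mostly be encoded as back-references into the first.)

```python
import random
import string
import zlib

# Hypothetical toy "brains": a large shared core plus a few unique memories each.
random.seed(0)
shared_core = ''.join(random.choices(string.ascii_lowercase, k=20000))
brain_a = shared_core + " memories unique to A"
brain_b = shared_core + " memories unique to B"

separate = len(zlib.compress(brain_a.encode())) + len(zlib.compress(brain_b.encode()))
joint = len(zlib.compress((brain_a + brain_b).encode()))

# The joint archive is far smaller than the two separate archives combined,
# because most of brain_b compresses down to references into brain_a.
print(separate, joint)
```

(The toy only works because the two copies sit within zlib's 32 KB back-reference window; the general point is just that a compressor which sees both inputs at once can exploit their shared structure.)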

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-03T07:29:00.834Z · LW(p) · GW(p)

Still, I don't think you could compress the content of 1000 brains into one. (And I'm not sure about two brains, either. Maybe the brains of two six-year-olds into that of a 25-year-old.)

Replies from: VKS
comment by VKS · 2012-08-03T09:46:47.911Z · LW(p) · GW(p)

I argue that my brain right now contains a lossless copy of itself and itself two words ago!

Getting 1000 brains in here would take some creativity, but I'm sure I can figure something out...

But this is all rather facetious. Breaking the quote's point would require me to be able to compute the (legitimate) results of the computations of an arbitrary number of arbitrarily different brains, at the same speed as them.

Which I can't.

For now.

Replies from: Richard_Kennaway, maia
comment by Richard_Kennaway · 2012-08-03T12:26:11.952Z · LW(p) · GW(p)

I argue that my brain right now contains a lossless copy of itself and itself two words ago!

I'd argue that your brain doesn't even contain a lossless copy of itself. It is a lossless copy of itself, but your knowledge of yourself is limited. So I think that Nick Szabo's point about the limits of being able to model other people applies just as strongly to modelling oneself. I don't, and cannot, know all about myself -- past, current, or future, and that must have substantial implications about something or other that this lunch hour is too small to contain.

How much knowledge of itself can an artificial system have? There is probably some interesting mathematics to be done -- for example, it is possible to write a program that prints out an exact copy of itself (without having access to the file that contains it), the proof of Gödel's theorem involves constructing a proposition that talks about itself, and TDT depends on agents being able to reason about their own and other agents' source codes. Are there mathematical limits to this?
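(As a concrete instance of the self-printing program mentioned above, here is one standard construction of a Python quine -- a sketch, not anything specific to Gödel's theorem or TDT -- which reproduces its own source exactly without ever opening the file it is stored in:)

```python
# This program prints its own source without reading its file.
s = '# This program prints its own source without reading its file.\ns = %r\nprint(s %% s)'
print(s % s)
```

The trick is the same diagonal move the comment alludes to: keep a description of yourself as data, then apply that description to itself.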

Replies from: VKS
comment by VKS · 2012-08-03T22:27:05.919Z · LW(p) · GW(p)

I never meant to say that I could give you an exact description of my own brain and itself ε ago, just that you could deduce one from looking at mine.

comment by maia · 2012-08-03T19:41:58.085Z · LW(p) · GW(p)

a lossless copy of itself and itself two words ago

But our memories discard huge amounts of information all the time. Surely there's been at least a little degradation in the space of two words, or we'd never forget anything.

Replies from: VKS
comment by VKS · 2012-08-03T22:15:17.272Z · LW(p) · GW(p)

Certainly. I am suggesting that over sufficiently short timescales, though, you can deduce the previous structure from the current one. Maybe I should have said "epsilon" instead of "two words".

Surely there's been at least a little degradation in the space of two words, or we'd never forget anything.

Why would you expect the degradation to be completely uniform? It seems more reasonable to suspect that, given a sufficiently small timescale, the brain will sometimes be forgetting things and sometimes not, in a way that probably isn't synchronized with its learning of new things.

So, depending on your choice of two words, sometimes the brain would take marginally more bits to describe and sometimes marginally fewer.

Actually, so long as the brain can be considered as operating independently from the outside world (which, given an appropriately chosen small interval of time, makes some amount of sense), a complete description at time t will imply a complete description at time t + δ. The information required to describe the first brain therefore describes the second one too.

So I've made another error: I should have said that my brain contains a lossless copy of itself and itself two words later. (where "two words" = "epsilon")

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-08-04T20:17:57.102Z · LW(p) · GW(p)

It seems more reasonable to suspect that, given a sufficiently small timescale, the brain will sometimes be forgetting things and sometimes not, in a way that probably isn't synchronized with its learning of new things.

See the pigeon-hole argument in the original quote.

comment by BillyOblivion · 2012-08-09T10:45:44.063Z · LW(p) · GW(p)

Lossless compression is simply overlaying redundant patterns.

For example you could easily compress 1000 straight male brains by overlaying all the bits concerned with breasts, motorcycles and guns. Well, breasts anyway, some males don't like motorcycles.

comment by mfb · 2012-08-04T22:10:36.923Z · LW(p) · GW(p)

If you can scan it, maybe you can simulate it? And if you can simulate one, wait some years and you can simulate 1000, probably connected in some way to form a single "thinking system".

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-08-05T18:07:26.717Z · LW(p) · GW(p)

But not on your own brain.

comment by gwern · 2012-08-11T21:42:43.947Z · LW(p) · GW(p)

Anything worth doing is worth doing badly.

--Herbert Simon (quoted by Pat Langley)

Replies from: army1987, Vaniver, arundelo
comment by A1987dM (army1987) · 2012-08-12T13:57:19.395Z · LW(p) · GW(p)

Including artificial intelligence? ;-)

comment by Vaniver · 2012-08-12T00:13:53.518Z · LW(p) · GW(p)

The Chesterton version looks like it was designed to poke the older (and in my opinion better) advice from Lord Chesterfield:

Whatever is worth doing at all, is worth doing well.

Or, rephrased as Simon did:

Anything worth doing is worth doing well.

I strongly recommend his letters to his son. They contain quite a bit of great advice- as well as politics and health and so on. As it was private advice given to an heir, most of it is fully sound.

(In fact, it's been a while. I probably ought to find my copy and give it another read.)

Replies from: gwern, arundelo
comment by gwern · 2012-08-12T00:17:10.839Z · LW(p) · GW(p)

Yeah, they're on my reading list. My dad used to say that a lot, but I always said the truer version was 'Anything not worth doing is not worth doing well', since he was usually using it about worthless yardwork...

comment by arundelo · 2012-08-12T01:14:19.985Z · LW(p) · GW(p)

Ah, I was gonna mention this. Didn't know it was from Chesterfield.

I think there'd be more musicians (a good thing IMO) if more people took Chesterton's advice.

comment by arundelo · 2012-08-11T21:51:17.686Z · LW(p) · GW(p)

A favorite of mine, but according to Wikiquote G.K. Chesterton said it first, in chapter 14 of What's Wrong With The World:

If a thing is worth doing, it is worth doing badly.

Replies from: gwern
comment by gwern · 2012-08-11T22:56:06.765Z · LW(p) · GW(p)

I like Simon's version better: it flows without the awkward pause for the comma.

Replies from: arundelo
comment by arundelo · 2012-08-11T23:28:28.674Z · LW(p) · GW(p)

Yep, it seems that often epigrams are made more epigrammatic by the open-source process of people misquoting them. I went looking up what I thought was another example of this, but Wiktionary calls it "[l]ikely traditional" (though the only other citation is roughly contemporary with Maslow).

Replies from: gwern
comment by gwern · 2012-08-11T23:36:10.979Z · LW(p) · GW(p)

Memetics in action - survival of the most epigrammatic!

comment by A1987dM (army1987) · 2012-08-24T22:30:48.349Z · LW(p) · GW(p)

Niels Bohr's maxim that the opposite of a profound truth is another profound truth [is a] profound truth [from which] the profound truth follows that the opposite of a profound truth is not a profound truth at all.

-- The narrator in On Self-Delusion and Bounded Rationality, by Scott Aaronson

Replies from: Eliezer_Yudkowsky, RomanDavis
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-25T00:04:05.363Z · LW(p) · GW(p)

I would remark that truth is conserved, but profundity isn't. If you have two meaningful statements - that is, two statements with truth conditions, so that reality can be either like or unlike the statement - and they are opposites, then at most one of them can be true. On the other hand, things that invoke deep-sounding words can often be negated, and sound equally profound at the end of it.

In other words, Bohr's maxim seems so blatantly awful that I am mostly minded to chalk it up as another case of, "I wish famous quantum physicists knew even a little bit about epistemology-with-math".

Replies from: TheOtherDave, army1987, None, shminux
comment by TheOtherDave · 2012-08-25T03:35:36.537Z · LW(p) · GW(p)

I don't really know what "profound" means here, but I usually take Bohr's maxim as a way of pointing out that when I encounter two statements, both of which seem true (e.g., they seem to support verified predictions about observations), which seem like opposites of one another, I have discovered a fault line in my thinking... either a case where I'm switching back and forth between two different and incompatible techniques for mapping English-language statements to predictions about observations, or a case for which my understanding of what it means for statements to be opposites is inadequate, or something else along those lines.

Mapping epistemological fault lines may not be profound, but I find it a useful thing to attend to. At the very least, I find it useful to be very careful about reasoning casually in proximity to them.

comment by A1987dM (army1987) · 2012-08-25T11:41:02.302Z · LW(p) · GW(p)

I seem to recall E.T. Jaynes pointing out some obscure passages by Bohr which (according to him) showed that he wasn't that clueless about epistemology, but only about which kind of language to use to talk about it, so that everyone else misunderstood him. (I'll post the ref if I find it. EDIT: here it is¹.)

For example, if this maxim actually means what TheOtherDave says it means, then it is a very good thought expressed in a very bad way.


  1. Disclaimer: I think the disproof of Bell's theorem in the linked article is wrong.
comment by [deleted] · 2012-08-25T01:15:19.393Z · LW(p) · GW(p)

two statements with truth conditions, so that reality can be either like or unlike the statement - and they are opposites, then at most one of them can be true.

Hmm, why is that? This seems incontrovertible, but I can't think of an explanation, or even a hypothesis.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-25T05:16:31.073Z · LW(p) · GW(p)

Because they have non-overlapping truth conditions. Either reality is inside one set of possible worlds, inside the other set, or in neither set.
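(One way to spell out the step, treating truth conditions as sets of possible worlds: let $T_A$ and $T_B$ be the sets of worlds in which the two opposite statements hold, so $T_A \cap T_B = \emptyset$. The actual world $w$ satisfies exactly one of $w \in T_A$, $w \in T_B$, or $w \notin T_A \cup T_B$; hence at most one of the statements is true, and possibly neither.)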

comment by Shmi (shminux) · 2012-08-25T00:17:36.534Z · LW(p) · GW(p)

On the other hand, things that invoke deep-sounding words can often be negated, and sound equally profound at the end of it.

Let's try it on itself... What's the negative of "often"? "Sometimes"?

On the other hand, things that invoke deep-sounding words can sometimes be negated, and sound equally profound at the end of it.

Yep, still sounds equally profound. Probably not the type of self-consistency you were striving for, though.

comment by RomanDavis · 2012-09-02T17:44:32.073Z · LW(p) · GW(p)

Reminds me of this.

comment by Nisan · 2012-08-03T07:25:20.814Z · LW(p) · GW(p)

"So now I’m pondering the eternal question of whether the ends justify the means."

"Hmm ... can be either way, depending on the circumstances."

"Precisely. A mathematician would say that stated generally, the problem lacks a solution. Therefore, instead of a clear directive the One in His infinite wisdom had decided to supply us with conscience, which is a rather finicky and unreliable device."

— Kirill Yeskov, The Last Ringbearer, trans. Yisroel Markov

comment by Eugine_Nier · 2012-08-02T21:05:32.177Z · LW(p) · GW(p)

Not only should you disagree with others, but you should disagree with yourself. Totalitarian thought asks us to consider, much less accept, only one hypothesis at a time. By contrast quantum thought, as I call it -- although it already has a traditional name less recognizable to the modern ear, scholastic thought -- demands that we simultaneously consider often mutually contradictory possibilities. Thinking about and presenting only one side's arguments gives one's thought and prose a false patina of consistency: a fallacy of thought and communications similar to false precision, but much more common and important. Like false precision, it can be a mental mistake or a misleading rhetorical habit. In quantum reality, by contrast, I can be both for and against a proposition because I am entertaining at least two significantly possible but inconsistent hypotheses, or because I favor some parts of a set of ideas and not others. If you are unable or unwilling to think in such a quantum or scholastic manner, it is much less likely that your thoughts are worthy of others' consideration.

-- Nick Szabo

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-08-04T13:53:43.778Z · LW(p) · GW(p)

the overuse of "quantum" hurt my eyes. :(

comment by [deleted] · 2012-08-15T13:37:40.943Z · LW(p) · GW(p)

“If a man take no thought about what is distant, he will find sorrow near at hand.”

--Confucius

comment by lukeprog · 2012-08-12T19:15:37.484Z · LW(p) · GW(p)

In matters of science, the authority of thousands is not worth the humble reasoning of one single person.

Galileo

Replies from: army1987, wedrifid
comment by A1987dM (army1987) · 2012-08-12T20:38:20.802Z · LW(p) · GW(p)

OTOH, thousands would be less likely to all make the same mistake than one single person -- were it not for information cascades.

comment by wedrifid · 2012-08-13T01:05:14.742Z · LW(p) · GW(p)

In matters of science, the authority of thousands is not worth the humble reasoning of one single person.

Almost always false.

Replies from: OrphanWilde, ronny-fernandez
comment by OrphanWilde · 2012-08-13T15:34:29.413Z · LW(p) · GW(p)

If the basis of the position of the thousands -is- their authority, then the reason of one wins. If the basis of their position is reason, as opposed to authority, then you don't arrive at that quote.

comment by Ronny Fernandez (ronny-fernandez) · 2012-08-13T14:42:09.673Z · LW(p) · GW(p)

It depends on whether or not the thousands are scientists. I'll trust one scientist over a billion sages.

Replies from: faul_sname, wedrifid, Bruno_Coelho
comment by faul_sname · 2012-08-13T22:31:39.736Z · LW(p) · GW(p)

I wouldn't, though I would trust a thousand scientists over a billion sages.

comment by wedrifid · 2012-08-13T16:03:41.805Z · LW(p) · GW(p)

It depends on whether or not the thousands are scientists. I'll trust one scientist over a billion sages.

It would depend on the subject. Do we control for time period and the relative background knowledge of their culture in general?

comment by Bruno_Coelho · 2012-08-24T18:11:48.342Z · LW(p) · GW(p)

The majority is wrong most of the time. Either you search the data for patterns, or you put credence in some author or group. People keep making mathematical claims without basic training all the time -- here too.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-17T07:09:06.100Z · LW(p) · GW(p)

"Given the nature of the multiverse, everything that can possibly happen will happen. This includes works of fiction: anything that can be imagined and written about, will be imagined and written about. If every story is being written, then someone, somewhere in the multiverse is writing your story. To them, you are a fictional character. What that means is that the barrier which separates the dimensions from each other is in fact the Fourth Wall."

-- In Flight Gaiden: Playing with Tropes

(Conversely, many fictions are instantiated somewhere, in some infinitesimal measure. However, I deliberately included logical impossibilities into HPMOR, such as tiling a corridor in pentagons and having the objects in Dumbledore's room change number without any being added or subtracted, to avoid the story being real anywhere.)

Replies from: jslocum, VKS, Bill_McGrath, Raemon, nagolinc, MichaelHoward, army1987, army1987, arundelo, JenniferRM, Epiphany
comment by jslocum · 2012-08-17T15:11:43.130Z · LW(p) · GW(p)

(Conversely, many fictions are instantiated somewhere, in some infinitesimal measure. However, I deliberately included logical impossibilities into HPMOR, such as tiling a corridor in pentagons and having the objects in Dumbledore's room change number without any being added or subtracted, to avoid the story being real anywhere.)

In the library of books of every possible string, close to "Harry Potter and the Methods of Rationality" and "Harry Potter and the Methods of Rationalitz" is "Harry Potter and the Methods of Rationality: Logically Consistent Edition." Why is the reality of that book's contents affected by your reticence to manifest that book in our universe?

Replies from: Mitchell_Porter, None
comment by Mitchell_Porter · 2012-08-17T23:49:23.482Z · LW(p) · GW(p)

Absolutely; I hope he doesn't think that writing a story about X increases the measure of X. But then why else would he introduce these "impossibilities"?

Replies from: Desrtopa
comment by Desrtopa · 2012-08-17T23:56:12.581Z · LW(p) · GW(p)

Because it's funny?

comment by [deleted] · 2012-08-17T15:54:10.097Z · LW(p) · GW(p)

It is a different story then, so the original HPMOR would still not be nonfiction in another universe. For all we know, the existence of a corridor tiled with pentagons is in fact an important plot point and removing it would utterly destroy the structure of upcoming chapters.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-17T21:41:24.253Z · LW(p) · GW(p)

Nnnot really. The Time-Turner, certainly, but that doesn't make the story uninstantiable. Making a logical impossibility a basic plot premise... sounds like quite an interesting challenge, but that would be a different story.

Replies from: Armok_GoB
comment by Armok_GoB · 2012-08-18T16:30:40.414Z · LW(p) · GW(p)

A spell that lets you get a number of objects that is an integer such that it's larger than some other integer but smaller than its successor, used to hide something.

Replies from: VincentYu
comment by VincentYu · 2012-08-18T17:04:57.557Z · LW(p) · GW(p)

This idea (the integer, not the spell) is the premise of the short story The Secret Number by Igor Teper.

Replies from: Armok_GoB
comment by Armok_GoB · 2012-08-18T19:54:30.216Z · LW(p) · GW(p)

And SCP-033. And related concepts in Dark Integers by Greg Egan. And probably a bunch of other places. I'm surprised I couldn't find a TVtropes page on it.

comment by VKS · 2012-08-21T21:06:29.940Z · LW(p) · GW(p)

impossibilities such as ... tiling a corridor in pentagons

Huh. And here I thought that space was just negatively curved in there, with the corridor shaped in such a way that it looks normal (not that hard to imagine), and just used this to tile the floor. Such disappointment...

This was part of a thing, too, in my head, where Harry (or, I guess, the reader) slowly realizes that Hogwarts, rather than having no geometry, has a highly local geometry. I was even starting to look for that as a thematic thing, perhaps an echo of some moral lesson, somehow.

And this isn't even the sort of thing you can write fanfics about. :¬(

comment by Bill_McGrath · 2012-08-19T13:12:03.330Z · LW(p) · GW(p)

However, I deliberately included logical impossibilities into HPMOR, such as tiling a corridor in pentagons and having the objects in Dumbledore's room change number without any being added or subtracted, to avoid the story being real anywhere.

Could you explain why you did that?

As regards the pentagons, I kinda assumed the pentagons weren't regular, equiangular pentagons - you could tile a floor in tiles that were shaped like a square with a triangle on top! Or the pentagons could be different sizes and shapes.

Replies from: Benquo, army1987
comment by Benquo · 2012-08-20T16:45:46.867Z · LW(p) · GW(p)

Could you explain why you did that?

Because he doesn't want to create Azkaban.

Also, possibly because there's not a happy ending.

Replies from: Bill_McGrath
comment by Bill_McGrath · 2012-08-21T10:05:59.779Z · LW(p) · GW(p)

But if all mathematically possible universes exist anyway (or if they have a chance of existing), then the hypothetical "Azkaban from a universe without EY's logical inconsistencies" exists, no matter whether he writes about it or not. I don't see how writing about it could affect how real/not-real it is.

So by my understanding of how Eliezer explained it, he's not creating Azkaban, in the sense that writing about it causes it to exist; he's describing it. (This is not to say that he's not creating the fiction, but the way I see it, "create" is being used in two different ways.) Unless I'm missing some mechanism by which imagining something causes it to exist, but that seems very unlikely.

comment by A1987dM (army1987) · 2012-08-19T22:44:24.428Z · LW(p) · GW(p)

Could you explain why you did that?

I seem to recall that he terminally cares about all mathematically possible universes, not just his own, to the point that he won't bother having children because there's some other universe where they exist anyway.

I think that violates the crap out of Egan's Law (such an argument could potentially apply to lots of other things), but given that he seems to be otherwise relatively sane, I conclude that he just hasn't fully thought it through (“decompartmentalized” in LW lingo) (probability 5%), that's not his true rejection of the idea of having kids (30%), or I am missing something (65%).

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-19T22:45:32.737Z · LW(p) · GW(p)

That is not the reason or even a reason why I'm not having kids at the moment. And since I don't particularly want to discourage other people from having children, I decline to discuss my own reasons publicly (or in the vicinity of anyone else who wants kids).

Replies from: Mitchell_Porter, None, army1987, RomanDavis
comment by Mitchell_Porter · 2012-08-20T05:05:33.369Z · LW(p) · GW(p)

I don't particularly want to discourage other people from having children

I feel that I should. It's a politically inconvenient stance to take, since all human cultures are based on reproducing themselves; antinatal cultures literally die out.

But from a human perspective, this world is deeply flawed. To create a life is to gamble with the outcome of that life. And it seems to be a gratuitous gamble.

comment by [deleted] · 2012-08-20T12:03:29.766Z · LW(p) · GW(p)

And since I don't particularly want to discourage other people from having children, I decline to discuss my own reasons publicly (or in the vicinity of anyone else who wants kids).

That sounds sufficiently ominous that I'm not quite sure I want kids any more.

Replies from: Eliezer_Yudkowsky, hankx7787
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-20T20:04:59.169Z · LW(p) · GW(p)

Shouldn't you be taking into account that I don't want to discourage other people from having kids?

Replies from: philh, DaFranker, Eugine_Nier
comment by philh · 2012-08-20T20:39:21.799Z · LW(p) · GW(p)

That might just be because you eat babies.

comment by DaFranker · 2012-08-20T20:53:15.613Z · LW(p) · GW(p)

Unfortunately, that seems to be a malleable argument. How your statement (that you don't want to disclose your reasons for not wanting to have kids) will influence audiences seems like it will depend heavily on their priors for how generally-valid-to-any-other-person this reason might be, and for how self-motivated both the not-wanting-to-have-kids and the not-wanting-to-discourage-others could be.

Then again, I might be missing some key pieces of context. No offense intended, but I try to make it a point not to follow your actions and gobble up your words personally, even to the point of mind-imaging a computer-generated mental voice when reading the sequences. I've already been burned pretty hard by blindly reaching for a role-model I was too fond of.

comment by Eugine_Nier · 2012-08-20T20:20:18.129Z · LW(p) · GW(p)

But you're afraid that if you state your reason, it will discourage others from having kids.

Replies from: FiftyTwo
comment by FiftyTwo · 2012-08-20T23:13:04.303Z · LW(p) · GW(p)

All that means is that he is aware of the halo effect. People who have enjoyed or learned from his work will give his reasons undue weight as a consequence, even if they don't actually apply to them.

comment by hankx7787 · 2012-08-22T07:22:13.626Z · LW(p) · GW(p)

Obviously his reason is that he wants to maximize the time and resources he personally spends on FAI research. Because not everyone is a seed AI programmer, this reason does not apply to most everyone else. If Eliezer thinks FAI is probably going to take a few decades (which evidence seems to indicate he does), then it very well may be in the best interest of those rationalists who aren't themselves FAI researchers to be having kids, so he wouldn't want to discourage that. (Although I don't see how just explaining this would discourage anybody you would otherwise want to have kids from having them.)

comment by A1987dM (army1987) · 2012-08-19T23:01:45.540Z · LW(p) · GW(p)

(I must have misremembered. Sorry)

Replies from: Eliezer_Yudkowsky, Oscar_Cunningham
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-19T23:03:52.861Z · LW(p) · GW(p)

OK, no prob!

(I do care about everything that exists. I am not particularly certain that all mathematically possible universes exist, or how much they exist if they do. I do expect that our own universe is spatially and in several other ways physically infinite or physically very big. I don't see this as a good argument against the fun of having children. I do see it as a good counterargument to creating children for the sole purpose of making sure that mindspace is fully explored, or because larger populations of the universe are good qua good. This has nothing to do with the reason I'm not having kids right now.)

Replies from: None, army1987, chaosmosis
comment by [deleted] · 2012-08-20T19:34:37.690Z · LW(p) · GW(p)

I do care about everything that exists.

I think I care about almost nothing that exists, and that seems like too big a disagreement. It's fair to assume that I'm the one being irrational, so can you explain to me why one should care about everything?

Replies from: Eliezer_Yudkowsky, army1987
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-21T06:18:03.502Z · LW(p) · GW(p)

All righty; I run my utility function over everything that exists. On most of the existing things in the modern universe, it outputs 'don't care', like for dirt. However, so long as a person exists anywhere, in this universe or somewhere else, my utility function cares about them. I have no idea what it means for something to exist, or why some things exist more than others; but our universe is so suspiciously simple and regular relative to all imaginable universes that I'm pretty sure that universes with simple laws or uniform laws exist more than universes with complicated laws with lots of exceptions in them, which is why I don't expect to sprout wings and fly away. Supposing that all possible universes 'exist' with some weighting by simplicity or requirement of uniformity, does not make me feel less fundamentally confused about all this; and therefore I'm not sure that it is true, although it does seem very plausible.

Replies from: None, MichaelHoward, army1987, Strange7
comment by [deleted] · 2012-08-21T06:30:12.485Z · LW(p) · GW(p)

Don’t forget.
Always, somewhere,
somebody cares about you.
As long as you simulate him,
you are not valueless.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-08-21T18:52:52.566Z · LW(p) · GW(p)

The moral value of imaginary friends?

comment by MichaelHoward · 2012-08-21T20:21:18.947Z · LW(p) · GW(p)

I notice that I am meta-confused...

Supposing that all possible universes 'exist' with some weighting by simplicity or requirement of uniformity, does not make me feel less fundamentally confused about all this;

Shouldn't we strongly expect this weighting, by Solomonoff induction?
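(For reference, the weighting being asked about -- a sketch of the standard definition of Solomonoff's universal prior over a universal prefix machine $U$, not anything specific to this exchange:)

$$ m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|} \;\approx\; 2^{-K(x)}, $$

where $|p|$ is the length of program $p$ in bits and $K(x)$ is the Kolmogorov complexity of $x$ (the approximation holds up to a constant factor); shorter programs, i.e. simpler worlds, get exponentially more weight. Whether that weight should be read as "amount of existence" rather than merely a prior is exactly what the replies below dispute.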

Replies from: army1987, None
comment by A1987dM (army1987) · 2012-08-21T22:09:44.963Z · LW(p) · GW(p)

Probability is not obviously amount of existence.

comment by [deleted] · 2012-08-21T22:56:45.436Z · LW(p) · GW(p)

Allow me to paraphrase him with some of my own thoughts.

Dang, existence, what is that? Can things exist more than other things? In Solomonoff induction we have something that kind of looks like "all possible worlds", or computable worlds anyway, and they're each equipped with a little number that discounts them by their complexity. So maybe that's like existing partially? Tiny worlds exist really strongly, and complex worlds are faint? That...that's a really weird mental image, and I don't want to stake very much on its accuracy. I mean, really, what the heck does it mean to be in a world that doesn't exist very much? I get a mental image of fog or a ghost or something. That's silly because it needlessly proposes ghosty behavior on top of the world behavior which determines the complexity, so my mental imagery is failing me.

So what does it mean for my world to exist less than yours? I know how that numerical discount plays into my decisions, how it lets me select among possible explanations; it's a very nice and useful little principle. Or at least it's useful in this world. But maybe I'm thinking that in multiple worlds, in some of which I'm about to find myself having negative six octarine tentacles. So Occam's razor is useful in ... some world. But the fact that it's useful to me suggests that it says something about reality, maybe even about all those other possible worlds, whatever they are. Right? Maybe? It doesn't seem like a very big leap to go from "Occam's razor is useful" to "Occam's razor is useful because when using it, my beliefs reflect and exploit the structure of reality", or to "Some worlds exist more than others, the obvious interpretation of what ontological fact is being taken into consideration in the math of Solomonoff induction".

Wei Dai suggested that maybe prior probabilities are just utilities, that simpler universes don't exist more, we just care about them more, or let our estimation of consequences of our actions in those worlds steer our decision more than consequences in other, complex, funny-looking worlds. That's an almost satisfying explanation; it would sweep away a lot of my confused questions, but it's not quite obviously right to me, and that's the standard I hold myself to. One thing that feels icky about the idea of "degree of existence" actually being "degree of decision importance" is that worlds with logical impossibilities used to have priors of 0 in my model of normative belief. But if priors are utilities, then a thing is a logical impossibility only because I don't care at all about worlds in which it occurs? And likewise truth depends on my utility function? And there are people in impossible worlds who say that I live in an impossible world because of their utility functions? Graagh, I can't even hold that belief in my head without squicking. How am I supposed to think about them existing while simultaneously supposing that it's impossible for them to exist?

Or maybe "a logically impossible event" isn't meaningful. It sure feels meaningful. It feels like I should even be able to compute logically impossible consequences by looking at a big corpus of mathematical proofs and saying "These two proofs have all the same statements, just in different order, so they depend on the same facts", or "these two proofs can be compressed by extracting a common subproof", or "using dependency-equivalences and commonality of subproofs, we should be able to construct a little directed graph of mathematical facts on which we can then compute Pearlian mutilated model counterfactuals, like what would be true if 2=3" in a non paradoxical way, in a way that treats truth and falsehood and the interdependence of facts as part of the behavior of the reality external to my beliefs and desires.

And I know that sounds confused, and the more I talk the more confused I sound. But not thinking about it doesn't seem like it's going to get me closer to the truth either. Aiiiiiiieeee.

comment by A1987dM (army1987) · 2012-08-21T09:36:46.979Z · LW(p) · GW(p)

our universe is so suspiciously simple and regular relative to all imaginable universes

(Assuming you mean “all imaginable universes with self-aware observers in them”.)

Not completely sure about that, even Conway's Game of Life is Turing-complete after all. (But then, it only generates self-aware observers under very complicated starting conditions. We should sum the complexity of the rules and the complexity of the starting conditions, and if we trust Penrose and Hawking about this, the starting conditions of this universe were terrifically simple.)

comment by Strange7 · 2012-08-22T00:18:26.439Z · LW(p) · GW(p)

On most of the existing things in the modern universe, it outputs 'don't care', like for dirt.

What do you mean, you don't care about dirt? I care about dirt! Dirt is where we get most of our food, and humans need food to live. Maybe interstellar hydrogen would be a better example of something you're indifferent to? 10^17 kg of interstellar hydrogen disappearing would be an inconsequential flicker if we noticed it at all, whereas the loss of an equal mass of arable soil would be an extinction-level event.

Replies from: Eliezer_Yudkowsky, ArisKatsaris
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-22T01:47:09.294Z · LW(p) · GW(p)

I care about the future consequences of dirt, but not the dirt itself.

(For the love of Belldandy, you people...)

comment by ArisKatsaris · 2012-08-22T00:23:26.914Z · LW(p) · GW(p)

What do you mean, you don't care about dirt?

He means that he doesn't care about dirt for its own sake (e.g. like he cares about other sentient beings for their own sakes).

Replies from: Strange7
comment by Strange7 · 2012-08-22T00:32:30.770Z · LW(p) · GW(p)

Yes, and I'm arguing that it has instrumental value anyway. A well-thought-out utility function should reflect that sort of thing.

Replies from: earthwormchuck163
comment by earthwormchuck163 · 2012-08-22T03:35:16.715Z · LW(p) · GW(p)

Instrumental values are just subgoals that appear when you form plans to achieve your terminal values. They aren't supposed to be reflected in your utility function. That is a type error plain and simple.

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-22T09:58:11.240Z · LW(p) · GW(p)

For agents with bounded computational resources, I'm not sure that's the case. I don't terminally value money at all, but I pretend I do as a computational approximation, because it'd be too expensive for me to run an expected utility calculation over all the things I could possibly buy whenever I'm considering gaining or losing money in exchange for something else.
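
(A minimal sketch of the kind of approximation I mean -- the goods, prices, utilities, and the cached exchange rate are all invented:)

```python
# Invented goods: name -> (price, utility), for illustration only.
goods = {"food": (5.0, 8.0), "books": (20.0, 15.0), "concert": (50.0, 30.0)}

def exact_value_of_money(budget):
    """Expensive: search over all bundles to find the best achievable utility."""
    items = list(goods.values())
    best = 0.0
    for mask in range(2 ** len(items)):
        price = sum(p for i, (p, u) in enumerate(items) if mask >> i & 1)
        util = sum(u for i, (p, u) in enumerate(items) if mask >> i & 1)
        if price <= budget:
            best = max(best, util)
    return best

CACHED_UTILITY_PER_DOLLAR = 0.7   # the shortcut: pretend money is terminally valued

def quick_value_of_money(budget):
    return CACHED_UTILITY_PER_DOLLAR * budget

print(exact_value_of_money(60))   # the full calculation
print(quick_value_of_money(60))   # the cheap approximation used in day-to-day decisions
```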

Replies from: earthwormchuck163
comment by earthwormchuck163 · 2012-08-22T21:23:20.385Z · LW(p) · GW(p)

I thought that was what I just said...

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-22T22:39:04.712Z · LW(p) · GW(p)

An approximation is not necessarily a type error.

Replies from: earthwormchuck163, None
comment by earthwormchuck163 · 2012-08-23T01:05:56.278Z · LW(p) · GW(p)

No, but mistaking your approximation for the thing you are approximating is.

comment by [deleted] · 2012-08-23T00:05:45.700Z · LW(p) · GW(p)

That one is. Instrumental values do not go in the utility function. You use instrumental values to shortcut complex utility calculations, but a utility-calculating shortcut != a component of the utility function.

comment by A1987dM (army1987) · 2012-08-20T22:09:24.250Z · LW(p) · GW(p)

Try tabooing exist: you might find out that you actually disagree on fewer things than you expect. (I strongly suspect that the only real difference between the four possibilities in this is labels -- the way once in a while people come up with new solutions to Einstein's field equations only to later find out they were just already-known solutions in an unusual coordinate system.)

Replies from: ArisKatsaris, None
comment by ArisKatsaris · 2012-08-21T22:18:59.075Z · LW(p) · GW(p)

Try tabooing exist

I've not yet found a good way to do that. Do you have one?

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-22T00:47:50.584Z · LW(p) · GW(p)

"Be in this universe"(1) vs "be mathematically possible" should cover most cases, though other times it might not quite match either of those and be much harder to explain.

  1. "This universe" being defined as everything that could interact with the speaker, or with something that could interacted with the speaker, etc. ad infinitum.
Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-08-22T01:09:12.771Z · LW(p) · GW(p)

Defining 'existence' by using 'interaction' (or worse yet the possibility of interaction) seems to me to be trying to define something fundamental by using something non-fundamental.

As for "mathematical possibility", that's generally not what most people mean by existence -- unless Tegmark IV is proven or assumed to be true, I don't think we can therefore taboo it in this manner...

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-22T09:48:43.158Z · LW(p) · GW(p)

Defining 'existence' by using 'interaction' (or worse yet the possibility of interaction) seems to me to be trying to define something fundamental by using something non-fundamental.

I'm not claiming they're ultimate definitions --after all any definition must be grounded in something else-- but at least they disambiguate which meaning is meant, the way “acoustic wave” and “auditory sensation” disambiguate “sound” in the tree-in-a-forest problem. For a real-world example of such a confusion, see this, where people were talking at cross-purposes because by “no explanation exists for X” one meant ‘no explanation for X exists written down anywhere’ and another meant ‘no explanation for X exists in the space of all possible strings’.

As for "mathematical possibility", that's generally not what most people mean by existence -- unless Tegmark IV is proven or assumed to be true, I don't think we can therefore taboo it in this manner...

Sentences such as “there exist infinitely many prime numbers” don't sound that unusual to me.

comment by [deleted] · 2012-08-20T22:37:10.653Z · LW(p) · GW(p)

Try tabooing exist: you might find out that you actually disagree on fewer things than you expect.

That's way too complicated (and as for tabooing 'exist', I'll believe it when I see it). Here's what I mean: I see a dog outside right now. One of the things in that dog is a cup or so of urine. I don't care about that urine at all. Not one tiny little bit. Heck, I don't even care about that dog, much less all the other dogs, and the urine that is in them. That's a lot of things! And I don't care about any of it. I assume Eliezer doesn't care about the dog urine in that dog either. It would be weird if he did. But it's in the 'everything' bucket, so...I probably misunderstood him?

comment by A1987dM (army1987) · 2012-08-20T22:13:15.774Z · LW(p) · GW(p)

I do care about everything that exists. I am not particularly certain that all mathematically possible universes exist, or how much they exist if they do.

So you're using exist in a sense according to which they have moral relevance iff they exist (or something roughly like that), which may be broader than ‘be in this universe’ but may be narrower than ‘be mathematically possible’. I think I get it now.

comment by chaosmosis · 2012-08-30T19:30:49.167Z · LW(p) · GW(p)

"I do care about everything that exists. I am not particularly certain that all mathematically possible universes exist, or how much they exist if they do."

I was confused by this for a while, but couldn't express that in words until now.

First, I think existence is necessarily a binary sort of thing, not something that exists in degrees. If I exist 20%, I don't even know what that sentence should mean. Do I exist, but only sometimes? Do only parts of me exist at a time? Am I just very skinny? It doesn't really make sense. Just as a risk of a risk is still a type of risk, so a degree of existence is still a type of existence. There are no sorts of existence except either being real or being fake.

Secondly, even if my first part is wrong, I have no idea why having more existence would translate into having greater value. By way of analogy, if I were the size of a planet but only had a very small brain and motivational center, I don't think that would mean I should receive more consideration from utilitarians. It seems like a variation on the Bigger is Better or Might Makes Right moral fallacy, rather than a well-reasoned idea.

I can imagine a sort of world where every experience is more intense, somehow, and I think people in that sort of world might matter more. But I think intensity is really a measure of relative interactions, and if their world was identical to ours except for its amount of existence, we'd be just as motivated to do different things as they would. I don't think such a world would exist, or that we could tell whether or not we were in it from-the-inside, so it seems like a meaningless concept.

So the reasoning behind that sentence didn't really make sense to me. The amount of existence that you have, assuming that's even a thing, shouldn't determine your moral value.

Replies from: The_Duck
comment by The_Duck · 2012-08-30T19:51:30.599Z · LW(p) · GW(p)

I imagine Eliezer is being deliberately imprecise, in accordance with a quote I very much like: "Never speak more clearly than you think." [The internet seems to attribute this to one Jeremy Bernstein]

If you believe MWI, there are many different worlds that all objectively exist. Does this mean morality is futile, since no matter what we choose, there's a world where we chose the opposite? Probably not: the different worlds seem to have different "degrees of existence", in that we are more likely to find ourselves in some than in others. I'm not clear how this can be, but the fact that probability works suggests it pretty strongly. So we can still act morally by trying to maximize the "degree of existence" of good worlds.

This suggests that the idea of a "degree of existence" might not be completely incoherent.
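
(As a minimal sketch of what "maximizing the degree of existence of good worlds" could mean formally -- the branch weights and utilities below are invented, and nothing here depends on what the weights physically are:)

```python
# Invented branch weights ('degrees of existence') and utilities per action.
# Decision rule: pick the action with the highest weight-averaged utility.
outcomes = {
    "act_morally": [(0.9, 10.0), (0.1, -5.0)],   # (branch weight, utility of that branch)
    "do_nothing":  [(0.5,  2.0), (0.5,  2.0)],
}

def weighted_value(branches):
    return sum(w * u for w, u in branches)

best_action = max(outcomes, key=lambda a: weighted_value(outcomes[a]))
print({a: weighted_value(b) for a, b in outcomes.items()}, "->", best_action)
```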

Replies from: chaosmosis
comment by chaosmosis · 2012-08-30T20:59:18.583Z · LW(p) · GW(p)

I suppose you can just attribute it to imprecision, but "I am not particularly certain ...how much they exist" implies that he's talking about a subset of mathematically possible universes that do objectively exist, yet exist less than other worlds. What you're talking about, conversely, seems to be that we should create as many good worlds as possible, which stretches Eliezer's terminology to cover it. Existence is binary, even though there are more of some things that exist than there are of other things. Using "amount of existence" instead of "number of worlds" is unnecessarily confusing, at the least.

Also, I don't see any problems with infinitarian ethics anyway because I subscribe to (broad) egoism. Things outside of my experience don't exist in any meaningful sense except as cognitive tools that I use to predict my future experiences. This allows me to distinguish between my own happiness and the happiness of Babykillers, which allows me to utilize a moral system much more in line with my own motivations. It also means that I don't care about alternate versions of the universe unless I think it's likely that I'll fall into one through some sort of interdimensional portal (I don't).

That said, I'll still err on the side of helping other universes if it does no damage to me, because I think Superrationality can function well in those sorts of situations and I'd like to receive benefits in return; but in other scenarios I don't really care at all.

comment by Oscar_Cunningham · 2012-08-19T23:48:11.279Z · LW(p) · GW(p)

Congratulations for having "I am missing something" at a high probability!

comment by RomanDavis · 2012-08-20T13:12:40.054Z · LW(p) · GW(p)

I was sure I had seen you talk about them in public (on BHTV, I believe), something like (possible misquote) "Lbh fubhyqa'g envfr puvyqera hayrff lbh pna ohvyq bar sebz fpengpu," which sounded kind of weird, because it applies to literally every human on earth, and that didn't seem to be where you were going.

Replies from: Tyrrell_McAllister, tut
comment by Tyrrell_McAllister · 2012-08-20T14:53:25.216Z · LW(p) · GW(p)

"Lbh fubhyqa'g envfr puvyqera hayrff lbh pna ohvyq bar sebz fpengpu,"

He has said something like that, but always with the caveat that there be an exception for pre-singularity civilizations.

Replies from: RomanDavis
comment by RomanDavis · 2012-08-20T15:49:57.351Z · LW(p) · GW(p)

The way I recall it, there was no such caveat in that particular instance. I am not attempting to take him outside of context and I do think I would have remembered. He may have used this every other time he's said it. It may have been cut for time. And I don't mean to suggest my memory is anything like perfect.

But: I strongly suspect that's still on the internet, on BHTV or somewhere else.

comment by tut · 2012-08-20T16:01:29.459Z · LW(p) · GW(p)

Why is that in ROT13? Are you trying to not spoil an underspecified episode of BHTV?

Replies from: RomanDavis
comment by RomanDavis · 2012-08-20T16:09:22.772Z · LW(p) · GW(p)

It's not something Eliezer wanted said publicly. I wasn't sure what to do, and for some reason I didn't want to PM or email, so I picked a shitty, irrational half measure. I do that sometimes, instead of just doing the rational thing and PMing/ emailing him/ keeping my mouth shut if it really wasn't worth the effort to think about another 10 seconds. I do that sometimes, and I usually know about when I do it, like this time, but can't always keep myself from doing it.

comment by Raemon · 2012-08-17T15:53:50.994Z · LW(p) · GW(p)

Tiling the wall with impossible geometry seems reasonable, but from what I recall about the objects in Dumbledore's room, all the story said was that Hermione kept losing track. Not sure whether artist intent trumps reader interpretation, but at first glance it seems far more likely to me that magic was causing Hermione to be confused than that magic was causing mathematical impossibilities.

comment by nagolinc · 2012-08-18T03:14:58.985Z · LW(p) · GW(p)

The problem with using such logical impossibilities is that you have to make sure they're really impossible. For example, tiling a corridor with pentagons is completely viable in non-Euclidean space. So, sorry to break it to you, but if there's a multiverse, your story is real in it.

comment by MichaelHoward · 2012-08-18T00:31:42.779Z · LW(p) · GW(p)

"She heard Harry sigh, and after that they walked in silence for a while, passing through an archway of some reddish metal like copper, into a corridor that was just like the one they'd left except that it was tiled in pentagons instead of squares."

"she was trying to count the number of things in the room for the third time and still not getting the same answer, even though her memory insisted that nothing had been added or removed"

I'm curious though, is there anything in there that would even count as this level of logically impossible? Can anyone remember one?

comment by A1987dM (army1987) · 2012-08-17T22:22:47.618Z · LW(p) · GW(p)

Anyway, I've decided that, when not talking about mathematics, real, exist, happen, etc. are deictic terms which specifically refer to the particular universe the speaker is in. Using real to apply to everything in Tegmark's multiverse fails Egan's Law IMO. See also: the last chapter of Good and Real.

comment by A1987dM (army1987) · 2012-08-17T22:13:52.350Z · LW(p) · GW(p)

Of course, universes including stories extremely similar to HPMOR except that the corridor is tiled in hexagons etc. do ‘exist’ ‘somewhere’. (EDIT: hadn't noticed the same point had been made before. OK, I'll never again reply to comments in “Top Comments” without reading the already existing replies first -- if I remember not to.)

comment by arundelo · 2012-08-17T15:20:43.909Z · LW(p) · GW(p)

pentagrams


[...] into a corridor that was just like the one they'd left except that it was tiled in pentagons instead of squares.

Replies from: Wrongnesslessness
comment by Wrongnesslessness · 2012-08-17T16:30:47.117Z · LW(p) · GW(p)

And they aren't even regular pentagons! So, it's all real then...

comment by JenniferRM · 2012-08-31T01:17:32.560Z · LW(p) · GW(p)

Or at least... the story could not be real in a universe unless at least portions of the universe could serve as a model for hyperbolic geometry and... hmm, I don't think non-standard arithmetic will get you "Exists.N (N != N)", but reading literally here, you didn't say they were the same as such, merely that the operations of "addition" or "subtraction" were not used on them.

Now I'm curious about mentions of arithmetic operations and motion through space in the rest of the story. Harry implicitly references orbital mechanics, I think... I'm not even sure if orbits are stable in hyperbolic 3-space... And there's definitely counting of gold in the first few chapters, but I didn't track the arithmetic to see if the prices and totals made sense... Hmm. Evil :-P

comment by Epiphany · 2012-08-18T04:37:01.519Z · LW(p) · GW(p)

Increasing objects without adding:

Viruses are technically considered non-living, and if you happen to have a pet with a cold, there may well be more viruses in the room when you enter it the second time, even though nothing has left or entered the room. I know that's a triviality, but some part of my mind took this as a challenge.

More Ways:

Place 100 strings into a large vat of sugar solution. Come back to discover that 100 rock candies have formed. Want to argue that the number of rock candies will equal the number of strings? Okay, make the strings really brittle in multiple places so that as the rock candies grow heavier, they break off into smaller chunks.

Balance a delicate lego construction on an unstable surface with a loud woofer in the room. That's likely to turn from 1 lego object into hundreds of lego objects.

You could disguise a factory inside the room and have it turn a bucket of dense material into many less dense and therefore much larger objects, making the room appear empty at first and full later on. It could produce balloons from a block of latex for instance.

Place a set of ice cubes in a bucket. Twelve ice cubes become one bucket of water. Fifty ice sculptures can become one indoor pool.

Using concepts like reproduction, deconstruction, production, liquefying and crystallization, how much might one be able to really confuse a person with pranks designed to make it appear as though objects have entered or left a room?

Replies from: DaFranker
comment by DaFranker · 2012-08-21T13:51:31.131Z · LW(p) · GW(p)

I haven't read HPMoR, and I certainly haven't read the specific scene(s) in question, but inferring from what I expect Eliezer would have wanted to write in such a situation, I'm going with the prior assumption that that's not at all what he meant.

Consider this scenario for perspective:

There are ten objects in an otherwise utterly empty, blank cubic room with white walls and a doorknob to open a panel of one wall. You can see every object from every point in the room (unless you're really tiny and hide behind one of the objects). You know exactly which objects there are, what they are, and what they do. You count them. There are nine objects. What?! You double-check. You still know all the ten objects, and all twelve of them are still there. They add up to twelve when counted. Wait, what's that? Weren't there ten at first? No, you're sure, you just counted them, you're positive all objects are there, and there are six of them. Oh well, let's just leave and do something more productive.

Basically, it's not about the number of objects being different. It's that the laws of counting themselves stop functioning altogether, such that the very same objects add up to a different number of objects each time they are counted. It's a ridiculous silliness of logical impossibility.

Replies from: Raemon
comment by Raemon · 2012-08-21T14:59:17.593Z · LW(p) · GW(p)

This is what was intended, but my first (and second and third) guess would be that my brain has been compromised, not that reality has broken.

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-21T22:38:09.605Z · LW(p) · GW(p)

Same here. On the third attempt, I'd just tell myself “OK, I clearly need to go to bed now.”

comment by JQuinton · 2012-08-15T21:35:56.509Z · LW(p) · GW(p)

Evil doesn't worry about not being good

  • from the video game "Dragon Age: Origins" spoken by the player.

Not sure if this is a "rationality" quote in and of itself; maybe a morality quote?

comment by FiftyTwo · 2012-08-09T19:06:09.546Z · LW(p) · GW(p)

[Meta] This post doesn't seem to be tagged 'quotes,' making it less convenient to move from it to the other quote threads.

Replies from: Alejandro1
comment by Alejandro1 · 2012-08-20T20:11:13.771Z · LW(p) · GW(p)

Done (and sorry for the long delay).

comment by David_Gerard · 2012-08-04T17:08:58.015Z · LW(p) · GW(p)

Fiction is a branch of neurology.

-- J. G. Ballard (in a "what I'm working on" essay from 1966.)

Replies from: None
comment by [deleted] · 2012-08-04T18:09:52.061Z · LW(p) · GW(p)

.

Replies from: army1987, David_Gerard
comment by David_Gerard · 2012-08-04T19:25:08.225Z · LW(p) · GW(p)

Ballard does note later in the same essay "Neurology is a branch of fiction."

Replies from: arundelo
comment by arundelo · 2012-08-04T19:46:23.000Z · LW(p) · GW(p)

I am a strange loop and so can you!

comment by harshhpareek · 2012-08-03T16:22:53.029Z · LW(p) · GW(p)

To develop mathematics, one must always labor to substitute ideas for calculations.

-- Dirichlet

(Don't have a source, but the following paper quotes it: Prolegomena to Any Future Qualitative Physics)

comment by lukeprog · 2012-08-31T20:05:38.318Z · LW(p) · GW(p)

A principal object of Wald's [statistical decision theory] is then to characterize the class of admissible strategies in mathematical terms, so that any such strategy can be found by carrying out a definite procedure... [Unfortunately] an 'inadmissible' decision may be overwhelmingly preferable to an 'admissible' one, because the criterion of admissibility ignores prior information — even information so cogent that, for example, in major medical... safety decisions, to ignore it would put lives in jeopardy and support a charge of criminal negligence.

...This illustrates the folly of inventing noble-sounding names such as 'admissible' and 'unbiased' for principles that are far from noble; and not even fully rational. In the future we should profit from this lesson and take care that we describe technical conditions by names that are... morally neutral, and so do not have false connotations which could mislead others for decades, as these have.

E.T. Jaynes, from page 409 of PT: LoS.
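
A toy simulation of the kind of thing Jaynes is complaining about (not his example; the prior width, sample size, and noise level are made up): when strong prior information is available, the "biased" posterior-mean estimator beats the "unbiased" sample mean on mean squared error, noble-sounding name notwithstanding.

```python
import random

# Made-up setup: theta is known in advance to be small (prior N(0, 0.5^2)),
# and we see n = 4 noisy measurements with noise sd = 1.
random.seed(0)
TAU, SIGMA, N, TRIALS = 0.5, 1.0, 4, 20000
shrink = (N / SIGMA**2) / (N / SIGMA**2 + 1 / TAU**2)  # posterior-mean shrinkage factor

se_unbiased = se_bayes = 0.0
for _ in range(TRIALS):
    theta = random.gauss(0.0, TAU)
    xbar = sum(random.gauss(theta, SIGMA) for _ in range(N)) / N
    se_unbiased += (xbar - theta) ** 2            # 'unbiased' sample mean
    se_bayes += (shrink * xbar - theta) ** 2      # 'biased' posterior mean using the prior

print("MSE unbiased:", se_unbiased / TRIALS)
print("MSE Bayes   :", se_bayes / TRIALS)
```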

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-09-01T03:24:38.863Z · LW(p) · GW(p)

This illustrates the folly of inventing noble-sounding names such as 'admissible' and 'unbiased' for principles that are far from noble; and not even fully rational.

You mean such as 'rational'.

comment by lukeprog · 2012-08-29T21:37:32.926Z · LW(p) · GW(p)

Ignorance is preferable to error and he is less remote from the truth who believes nothing than he who believes what is wrong.

Thomas Jefferson

Replies from: None
comment by [deleted] · 2012-08-29T21:58:45.004Z · LW(p) · GW(p)

I wonder how we could empirically test this. We could see who makes more accurate predictions, but people without beliefs about something won't make predictions at all. That should probably count as a victory for wrong people, so long as they do better than chance.

We could also test how quickly people learn the correct theory. In both cases, I expect you'd see some truly deep errors which are worse than ignorance, but that on the whole people in error will do quite a lot better. Bad theories still often make good predictions, and it seems like it would be very hard, if not impossible, to explain a correct theory of physics to someone who has literally no beliefs about physics.

I'd put my money on people in error over the ignorant.

comment by Richard_Kennaway · 2012-08-20T09:19:07.824Z · LW(p) · GW(p)

Man likes complexity. He does not want to take only one step; it is more interesting to look forward to millions of steps. The one who is seeking the truth gets into a maze, and that maze interests him. He wants to go through it a thousand times more. It is just like children. Their whole interest is in running about; they do not want to see the door and go in until they are very tired. So it is with grown-up people. They all say that they are seeking truth, but they like the maze. That is why the mystics made the greatest truths a mystery, to be given only to the few who were ready for them, letting the others play because it was the time for them to play.

Hazrat Inayat Khan.

comment by tastefullyOffensive · 2012-08-06T03:22:08.965Z · LW(p) · GW(p)

A lie, repeated a thousand times, becomes a truth. --Joseph Goebbels, Nazi Minister of Propaganda

Replies from: metatroll
comment by metatroll · 2012-08-06T04:35:43.710Z · LW(p) · GW(p)

It does not! It does not! It does not! ... continued here

comment by lukeprog · 2012-08-05T03:26:02.188Z · LW(p) · GW(p)

He who knows best, best knows how little he knows.

Thomas Jefferson

comment by MichaelHoward · 2012-08-03T00:33:29.919Z · LW(p) · GW(p)

Intellectuals solve problems, geniuses prevent them.

-- [Edit: Probably not] Albert Einstein

Replies from: ChristianKl, MixedNuts
comment by ChristianKl · 2012-08-03T08:48:56.631Z · LW(p) · GW(p)

Do you have a source? Einstein gets quoted quite a lot for stuff he didn't say.

Replies from: Jayson_Virissimo, MichaelHoward
comment by Jayson_Virissimo · 2012-08-08T03:55:45.186Z · LW(p) · GW(p)

Do you have a source? Einstein gets quoted quite a lot for stuff he didn't say.

Yes, and even more annoyingly, he gets quoted on things of which he is a non-expert and has nothing interesting to say (politics, psychology, ethics, etc...).

comment by MichaelHoward · 2012-08-03T11:17:13.020Z · LW(p) · GW(p)

Hmm. There are hundreds of thousands of pages asserting that he said it, but for some reason I can't find a single reference to its context.

Thanks. Have edited the quote.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-08-03T18:35:13.241Z · LW(p) · GW(p)

For future reference: wikiquote gives quotes with context.

Replies from: MichaelHoward
comment by MichaelHoward · 2012-08-03T19:48:35.737Z · LW(p) · GW(p)

Thanks, I already plugged them :)

comment by MixedNuts · 2012-08-10T08:42:17.422Z · LW(p) · GW(p)

Genii seem to create problems. They prevent some in the process, and solve others, but that's not what they're in for: it's not nearly as fun.

comment by A1987dM (army1987) · 2012-08-29T22:57:28.383Z · LW(p) · GW(p)

Inside every non-Bayesian, there is a Bayesian struggling to get out.

Dennis Lindley

(I've read plenty of authors who appear to have the intuition that probabilities are epistemic rather than ontological somewhere in the back --or even the front-- of their mind, but appear to be unaware of the extent to which this intuition has been formalised and developed.)

comment by lukeprog · 2012-09-01T05:58:13.349Z · LW(p) · GW(p)

Suppose we carefully examine an agent who systematically becomes rich [that is, who systematically "wins" on decision problems], and try hard to make ourselves sympathize with the internal rhyme and reason of his algorithm. We try to adopt this strange, foreign viewpoint as though it were our own. And then, after enough work, it all starts to make sense — to visibly reflect new principles appealing in their own right. Would this not be the best of all possible worlds? We could become rich and have a coherent viewpoint on decision theory. If such a happy outcome is possible, it may require we go along with prescriptions that at first seem absurd and counterintuitive (but nonetheless make agents rich); and, rather than reject such prescriptions out of hand, look for underlying coherence — seek a revealed way of thinking that is not an absurd distortion of our intuitions, but rather, a way that is principled though different. The objective is not just to adopt a foreign-seeming algorithm in the expectation of becoming rich, but to alter our intuitions and find a new view of the world — to not only see the light, but also absorb it into ourselves.

Yudkowsky, Timeless Decision Theory

comment by lukeprog · 2012-08-30T22:11:01.398Z · LW(p) · GW(p)

David Hume lays out the foundations of decision theory in A Treatise of Human Nature (1740):

...'tis only in two senses, that any affection can be call'd unreasonable. First, when a passion, such as hope or fear, grief or joy, despair or security, is founded on the supposition of the existence of objects which really do not exist. Secondly, when in exerting any passion in action, we chuse means insufficient for the design'd end, and deceive ourselves in our judgment of causes and effects.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-08-31T02:02:31.213Z · LW(p) · GW(p)

This seems to omit the possibility of akrasia.

Replies from: chaosmosis
comment by chaosmosis · 2012-09-01T16:08:50.861Z · LW(p) · GW(p)

Doesn't

Secondly, when in exerting any passion in action, we chuse means insufficient for the design'd end, and deceive ourselves in our judgment of causes and effects.

cover that?

comment by itaibn0 · 2012-08-13T00:27:33.730Z · LW(p) · GW(p)

I fear perhaps thou deemest that we fare
An impious road to realms of thought profane;
But 'tis that same religion oftener far
Hath bred the foul impieties of men:
As once at Aulis, the elected chiefs,
Foremost of heroes, Danaan counsellors,
Defiled Diana's altar, virgin queen,
With Agamemnon's daughter, foully slain.
She felt the chaplet round her maiden locks
And fillets, fluttering down on either cheek,
And at the altar marked her grieving sire,
The priests beside him who concealed the knife,
And all the folk in tears at sight of her.
With a dumb terror and a sinking knee
She dropped; nor might avail her now that first
'Twas she who gave the king a father's name.
They raised her up, they bore the trembling girl
On to the altar—hither led not now
With solemn rites and hymeneal choir,
But sinless woman, sinfully foredone,
A parent felled her on her bridal day,
Making his child a sacrificial beast
To give the ships auspicious winds for Troy:
Such are the crimes to which Religion leads.

Lucretius, De rerum natura

Replies from: itaibn0
comment by itaibn0 · 2012-08-13T00:33:49.794Z · LW(p) · GW(p)

How do you make newlines work inside quotes? The formatting of this comment came out badly.

Replies from: arundelo
comment by arundelo · 2012-08-13T00:42:41.837Z · LW(p) · GW(p)

This paragraph is above a line with nothing but a greater-than sign.

This paragraph is below a line with nothing but a greater-than sign.

This is the same as if you wrote it without the greater-than sign then added a greater-than sign to the beginning of each line.

(If you want a line break without a paragraph break, end a line with two spaces.)

Replies from: itaibn0
comment by itaibn0 · 2012-08-13T00:49:07.061Z · LW(p) · GW(p)

Thanks.

comment by Aurora · 2012-08-11T03:35:41.229Z · LW(p) · GW(p)

"Who taught you that senseless self-chastisement? I give you the money and you take it! People who can't accept a gift have nothing to give themselves." -De Gankelaar (Karakter, 1997)

comment by Aurora · 2012-08-11T03:23:08.912Z · LW(p) · GW(p)

Nulla è più raro al mondo, che una persona abitualmente sopportabile. -Giacomo Leopardi

(Nothing in the world is rarer than a person who is habitually bearable.)

comment by Epiphany · 2012-08-19T01:21:53.822Z · LW(p) · GW(p)

Nevermind.

comment by MichaelHoward · 2012-08-03T11:39:00.865Z · LW(p) · GW(p)

Should we add a point to these quote posts, that before posting a quote you should check there is a reference to its original source or context? Not to add to the quote, but you should be able to find it if challenged.

wikiquote.org seems fairly diligent at sourcing quotes, but Google doesn't rank it highly in search results compared to all the misattributed, misquoted, or just plain made-up-on-the-spot nuggets of disinformation that have gone viral and colonized Googlespace, lying in wait to catch the unwary (such as, apparently, myself).

comment by Nisan · 2012-08-03T07:24:22.370Z · LW(p) · GW(p)

Some say (not without a trace of mockery) that the old masters would supposedly forever invest a fraction of their souls in each batch of mithril, and since today there are no souls, but only the ‘objective reality perceived by our senses,’ by definition we have no chance to obtain true mithril.

-Kirill Yeskov, The Last Ringbearer, trans. Yisroel Markov

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-08-03T09:42:06.495Z · LW(p) · GW(p)

Context, please?

Replies from: Nisan, gwern
comment by Nisan · 2012-08-03T16:40:36.341Z · LW(p) · GW(p)

Mithril is described as an alloy with near-miraculous properties, produced in ancient times, which cannot be reproduced nowadays, despite the best efforts of modern metallurgy. The book is a work of fiction.

Replies from: Vaniver
comment by Vaniver · 2012-08-03T20:50:33.962Z · LW(p) · GW(p)

Alternatively, mithril is aluminum, almost unobtainable in ancient times and thus seen as miraculous. Think about that the next time you crush a soda can.

Replies from: RobinZ, None
comment by RobinZ · 2012-08-09T18:00:09.318Z · LW(p) · GW(p)

(belated...)

Incidentally, in many cases modern armor is made of aluminum, because aluminum (being less rigid) can dissipate more energy without failing. A suit of chain mail made of aircraft-grade aluminum would seem downright magical a few centuries ago.

comment by [deleted] · 2012-08-09T18:43:14.390Z · LW(p) · GW(p)

Aluminum was entirely unobtainable in ancient times, I believe. It fuses with carbon as well as oxygen, so there was no way to refine it. And it would have made terrible armor, being quite a lot softer than steel. It also suffers from fatigue failures much more easily than steel. These are some of the reasons it makes a bad, though cheap, material for bikes.

Replies from: Vaniver
comment by Vaniver · 2012-08-09T19:48:10.447Z · LW(p) · GW(p)

Pure aluminum can be found without reducing it yourself, but it's very rare. You'd have to pluck it out of the interior of a volcano or the bottom of the sea -- and so it seems possible that some could end up in the hands of a medieval smith, but very unlikely.

Replies from: gwern
comment by gwern · 2012-08-09T21:54:59.966Z · LW(p) · GW(p)

Oh, I don't know, one would say the same thing about meteoritic iron, and yet there are well documented uses of it.

(Although apparently the Sword of Attila wasn't really meteoritic and I got that from fiction.)

comment by gwern · 2012-08-03T16:20:37.833Z · LW(p) · GW(p)

I dunno. I read The Last Ringbearer (pretty good, although I have mixed feelings about it in general), but it doesn't seem interesting to me either.

comment by NancyLebovitz · 2012-08-13T00:07:21.124Z · LW(p) · GW(p)

My favorite fantasy is living forever, and one of the things about living forever is all the names you could drop.

Roz Kaveny

comment by roland · 2012-08-03T20:47:38.592Z · LW(p) · GW(p)

However, the facile explanations provided by the left brain interpreter may also enhance the opinion of a person about themselves and produce strong biases which prevent the person from seeing themselves in the light of reality and repeating patterns of behavior which led to past failures. The explanations generated by the left brain interpreter may be balanced by right brain systems which follow the constraints of reality to a closer degree.

comment by Aurora · 2012-08-11T03:16:45.042Z · LW(p) · GW(p)

"I know by experience that I'm not able to endure the presence of a single person for more than three hours. After this period, I lose lucidity, become muddled, and end up irritated or sunk in a deep depression." -Julio Ramon Ribeyro

comment by tastefullyOffensive · 2012-08-04T23:30:32.355Z · LW(p) · GW(p)

If you wish to make an apple pie from scratch you must first invent the universe. --Carl Sagan

Replies from: MatthewBaker
comment by MatthewBaker · 2012-08-10T19:51:28.117Z · LW(p) · GW(p)

SPACE!

comment by lukeprog · 2012-08-02T22:46:49.502Z · LW(p) · GW(p)

[retracted; off-topic]

Replies from: RobertLumley, lukeprog
comment by RobertLumley · 2012-08-03T14:12:36.123Z · LW(p) · GW(p)

Am I the only one who is confused why this comment (currently at -3) has drifted to the top when sorted with the new "Best" algorithm? That seems to be either a mistake or a bad algorithm.

Replies from: harshhpareek, army1987
comment by harshhpareek · 2012-08-03T16:27:40.952Z · LW(p) · GW(p)

Not necessarily a bad algorithm. This is possible if it uses your karma as a factor, which is in general not a bad idea (in this case countered by the collapsing negative scores thing)

Replies from: RobertLumley
comment by RobertLumley · 2012-08-03T18:05:41.471Z · LW(p) · GW(p)

I don't understand what you mean, specifically about "my karma" as a factor. Can you give an example? Do you mean whether or not I personally upvoted it? Or my personal karma score? I can't see how either would be particularly relevant. Regardless, if the former is what you meant, I have not voted on the original comment.

Replies from: DaFranker, harshhpareek
comment by DaFranker · 2012-08-03T18:22:30.094Z · LW(p) · GW(p)

He didn't mean your karma specifically, but Arbitrary Hypothetical Second-Person Comment-Poster X's karma.

For example, suppose E.Y. were to post, for whatever reason (cat jumping on keyboard?), a really pointless, flawed comment that everyone actually downvoted. If the algorithm takes the poster's total karma into account in some proportional manner, without any diminishing-returns strategy, then E.Y.'s downvoted comment would still be at the top unless it received a number of downvotes that seems almost impossible to obtain from the current active LW readership alone.
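
(To make the failure mode concrete, here is a toy comparison -- the scoring formulas and numbers are purely hypothetical and are not LW's actual ranking algorithm:)

```python
import math

# Purely hypothetical scoring rules, not LW's actual algorithm.
def score_proportional(comment_votes, poster_karma):
    return comment_votes + 0.01 * poster_karma                # no diminishing returns

def score_damped(comment_votes, poster_karma):
    return comment_votes + math.log1p(max(poster_karma, 0))   # diminishing returns

downvoted_by_high_karma_poster = (-3, 100_000)   # (comment votes, poster karma)
upvoted_by_new_poster = (15, 200)

for name, rule in [("proportional", score_proportional), ("damped", score_damped)]:
    scores = {"downvoted/high-karma": rule(*downvoted_by_high_karma_poster),
              "upvoted/new-poster": rule(*upvoted_by_new_poster)}
    print(name, scores)   # the proportional rule keeps the downvoted comment on top
```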

Replies from: RobertLumley, MichaelHoward, DanielLC, thomblake
comment by RobertLumley · 2012-08-03T18:31:26.312Z · LW(p) · GW(p)

Ah. That makes sense. I don't know why that didn't occur to me. Regardless, I don't think it does, based on explanations here and here.

comment by MichaelHoward · 2012-08-03T21:10:19.762Z · LW(p) · GW(p)

suppose E.Y. were to post, for whatever reason (cat jumping on keyboard?)...

This happened once (F12 was mapped to that set of keystrokes at the time).

Replies from: army1987, DaFranker
comment by A1987dM (army1987) · 2012-08-04T00:34:55.122Z · LW(p) · GW(p)

BTW, who was it who had a script to sort all the comments by a user by karma? wedrifid?

Replies from: Alicorn
comment by Alicorn · 2012-08-04T00:38:49.344Z · LW(p) · GW(p)

Wei Dai's thing will do that - click "points" at the top after loading the whole page.

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-04T00:41:42.402Z · LW(p) · GW(p)

Yes, it was that one I was thinking about. Thanks.

comment by DaFranker · 2012-08-03T22:40:17.976Z · LW(p) · GW(p)

Haha! Thanks, great example.

comment by DanielLC · 2012-08-10T20:34:22.771Z · LW(p) · GW(p)

I'd say that using it in a proportional manner without any diminishing-returns strategy would be a mistake and a bad algorithm.

comment by thomblake · 2012-08-03T20:10:05.743Z · LW(p) · GW(p)

For example, suppose E.Y. were to post, for whatever reason (cat jumping on keyboard?), a really pointless, flawed comment that everyone actually downvoted...

Irrelevant in practice, since in that scenario the comment would be massively upvoted.

Replies from: Dallas, RobertLumley
comment by Dallas · 2012-08-03T20:12:46.044Z · LW(p) · GW(p)

Yudkowsky's been downvoted before; the most notable time in recent memory was probably removing the link to the NY Observer article.

Replies from: thomblake
comment by thomblake · 2012-08-03T20:20:17.285Z · LW(p) · GW(p)

I think I misread that comment as Eliezer posting a picture or video of a cat jumping on the keyboard.

comment by harshhpareek · 2012-08-03T20:05:34.814Z · LW(p) · GW(p)

I meant lukeprog's karma, i.e. the karma of a comment's poster influences how highly the comment is ranked.

Replies from: RobertLumley
comment by RobertLumley · 2012-08-03T20:14:46.430Z · LW(p) · GW(p)

DaFranker clarified this. Thanks.

comment by A1987dM (army1987) · 2012-08-04T00:02:16.806Z · LW(p) · GW(p)

I see negative-scored comments in http://lesswrong.com/topcomments/?t=day pretty often. Likewise, sometimes sorting comments by “Old” mis-sorts some of them (which is somewhat confusing when reading comments to old posts imported from Overcoming Bias).

comment by lukeprog · 2012-08-02T22:54:55.301Z · LW(p) · GW(p)

[retracted; off-topic]