Rationality Quotes from people associated with LessWrong

post by ChristianKl · 2013-07-29T13:19:32.677Z · LW · GW · Legacy · 62 comments

The other rationality quotes thread operates under the rule:

Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.

Lately it seems that every MIRI or CFAR employee is exempt from being quoted as well.

As interesting quotes still appear on LessWrong, Overcoming Bias, HPMoR, and from MIRI/CFAR employees in general, I think it makes sense to open this thread to provide a place for those quotes.

 

62 comments

Comments sorted by top scores.

comment by BT_Uytya · 2013-07-30T22:30:25.325Z · LW(p) · GW(p)

The first terrifying shock comes when you realize that the rest of the world is just so incredibly stupid.

The second terrifying shock comes when you realize that they're not the only ones.

-- Nominull3 here, a nearly six-year-old quote

comment by Tenoke · 2013-07-29T15:44:15.182Z · LW(p) · GW(p)

"Goedel's Law: as the length of any philosophical discussion increases, the probability of someone incorrectly quoting Goedel's Incompleteness Theorem approaches 1"

--nshepperd on #lesswrong

Replies from: sketerpot, army1987, Anatoly_Vorobey
comment by sketerpot · 2013-07-30T04:24:13.147Z · LW(p) · GW(p)

There's a theorem which states that you can never truly prove that.

comment by A1987dM (army1987) · 2013-07-30T12:56:31.261Z · LW(p) · GW(p)

The probability that someone will say bullshit about quantum mechanics approaches 1 even faster.

Replies from: Benito, Fyrius
comment by Ben Pace (Benito) · 2013-07-30T18:48:15.949Z · LW(p) · GW(p)

At least, the possible worlds in which they don't start collapsing... Or something...

comment by Fyrius · 2016-04-13T23:14:50.665Z · LW(p) · GW(p)

I love that 'bullshit' is now an academic term.

comment by Anatoly_Vorobey · 2013-07-29T22:50:53.590Z · LW(p) · GW(p)

That doesn't say much; perhaps it approaches 1 as 1 - 1/(1 + 1/2 + 1/3 + ... + 1/n)?

Replies from: D_Alex
comment by D_Alex · 2013-07-30T06:08:45.580Z · LW(p) · GW(p)

I like your example; it implies that the longer the discussion goes, the less likely it is that somebody misquotes G.I.T. in any given statement (or per unit of time, etc.). Kinda the opposite of what the intent of the original quote seems to be.
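
A minimal sketch of how this toy formula behaves, assuming P(n) = 1 - 1/H_n with H_n the n-th harmonic number (the function names and sample sizes here are just illustrative): the cumulative probability does crawl toward 1, while the increment contributed by each additional statement keeps shrinking.

```python
# Sketch of the toy formula P(n) = 1 - 1/H_n, where H_n = 1 + 1/2 + ... + 1/n
# is the n-th harmonic number: the overall probability creeps toward 1,
# but the per-statement increment shrinks.

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

def p_misquote(n):
    return 1.0 - 1.0 / harmonic(n)

for n in (10, 100, 1_000, 10_000, 100_000):
    total = p_misquote(n)
    marginal = total - p_misquote(n - 1)  # contribution of statement n
    print(f"n={n:>7}  P(misquote by now)={total:.4f}  marginal={marginal:.2e}")
```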

Replies from: Kawoomba
comment by Kawoomba · 2013-07-30T06:55:42.147Z · LW(p) · GW(p)

Yeah, but it's clear what he's trying to convey: For any event that has some (fixed) epsilon > 0 probability of happening, it's gonna happen eventually if you give it enough chances. That trivially includes the mentioning of Gödel's incompleteness theorems.

However, it's also clear what the intent of the original quote was. The pedantry in this case is fair game, since the quote, in an attempt to sound sharp and snappy and relevant, actually obscures what it's trying to say: that Gödel is brought up way too often in philosophical discussions.

Edit: Removed link, wrong reference.

Replies from: D_Alex
comment by D_Alex · 2013-07-30T09:45:38.502Z · LW(p) · GW(p)

For any event that has some epsilon > 0 probability of happening, it's gonna happen eventually if you give it enough chances.

This is not true (and also you mis-apply the Law of Large Numbers here). For example: in a series (one single, continuing series!) of coin tosses, the probability that you get a run of heads at least half as long as the overall length of the series (e.g. ttththtHHHHHHH) is always > 0, but it is not guaranteed to happen, no matter how many chances you give it. Even if the number of coin tosses is infinite (whatever that might mean).
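
A minimal Monte Carlo sketch of this coin-toss example (illustrative code; the trial counts and series lengths are arbitrary): the chance that a series of n fair tosses contains a head run at least n/2 long is positive for every n, but it shrinks so quickly that no amount of extra length makes the event certain.

```python
# Sketch: estimate P(a series of n fair tosses contains a run of heads >= n/2).
# The probability is positive for every n but shrinks rapidly, which is the
# loophole in "epsilon > 0, so it must eventually happen".
import random

def has_long_head_run(n, rng):
    longest = current = 0
    for _ in range(n):
        if rng.random() < 0.5:      # heads
            current += 1
            longest = max(longest, current)
        else:                       # tails
            current = 0
    return 2 * longest >= n

rng = random.Random(0)
trials = 100_000
for n in (4, 8, 16, 32, 64):
    hits = sum(has_long_head_run(n, rng) for _ in range(trials))
    print(f"n={n:>2}  estimated P(head run >= n/2) = {hits / trials:.5f}")
```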

Interestingly, I read the original quote differently from you - I thought the intent was to say "any bloody thing will be brought up in a discussion, eventually, if it is long enough, even really obscure stuff like G.I.T.", rather than "Gödel is brought up way too often in philosophical discussions". What did you really mean, nshepperd???

Replies from: Tenoke, Kawoomba
comment by Tenoke · 2013-07-30T10:05:38.579Z · LW(p) · GW(p)

Interestingly, I read the original quote differently from you - I thought the intent was to say "any bloody thing will be brought up in a discussion, eventually, if it is long enough, even really obscure stuff like G.I.T.", rather than "Gödel is brought up way too often in philosophical discussions". What did you really mean, nshepperd???

It was the latter. Also, I am assuming that you haven't heard of Godwin's law, which is what the wording here references.

comment by Kawoomba · 2013-07-30T09:55:30.063Z · LW(p) · GW(p)

in a series (one single, continuing series!) of coin tosses, the probability that you get a run of heads at least half as long as the overall length of the series (eg ttththtHHHHHHH) is always >0, but it is not guaranteed to happen, no matter how many chances you give it.

... any event for which you don't change the epsilon such that the sum becomes a convergent series. Or any process with a Markov property. Or any event with a fixed epsilon >0.

That should cover just about any relevant event.
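
A worked contrast between the two cases, assuming independent trials (a minimal sketch; the particular epsilon values are arbitrary): with a fixed epsilon, the probability that the event never occurs in n trials is (1 - epsilon)^n, which goes to 0, so the event eventually happens almost surely; with epsilons shrinking like 1/k^2, the corresponding product stays bounded away from 0, so the event can fail to happen forever.

```python
# Sketch: fixed epsilon per independent trial vs. epsilons shrinking fast
# enough that their sum converges (here eps_k = 1/k^2, starting at k = 2).
import math

fixed_eps = 0.01
for n in (100, 1_000, 10_000):
    p_never_fixed = (1 - fixed_eps) ** n
    p_never_shrinking = math.prod(1 - 1 / k**2 for k in range(2, n + 1))
    print(f"n={n:>6}  P(never | fixed eps)={p_never_fixed:.2e}  "
          f"P(never | eps_k = 1/k^2)={p_never_shrinking:.3f}")
```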

(and also you mis-apply the Law of large Numbers here)

Explain.

Replies from: BT_Uytya
comment by BT_Uytya · 2013-07-31T06:58:40.029Z · LW(p) · GW(p)

The Law of Large Numbers states that the average of a large number of i.i.d. variables approaches its mathematical expectation. Roughly speaking, "big samples reliably reveal properties of the population".

It doesn't state that "everything can happen in large samples".
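
A minimal illustration of what the law does say, using i.i.d. fair coin flips (the sample sizes are arbitrary): the sample mean settles toward the expectation 0.5; nothing forces rare patterns to show up.

```python
# Sketch: sample means of i.i.d. Bernoulli(0.5) flips approach the expectation 0.5.
import random

rng = random.Random(42)
for n in (10, 1_000, 100_000):
    heads = sum(rng.random() < 0.5 for _ in range(n))
    print(f"n={n:>7}  sample mean = {heads / n:.4f}")
```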

Replies from: Kawoomba
comment by Kawoomba · 2013-07-31T08:15:25.975Z · LW(p) · GW(p)

Thanks. Memory is more fragile than thought, wrong folder. Updated.

comment by Fhyve · 2013-07-30T04:40:27.548Z · LW(p) · GW(p)

"How do you not have arguments with idiots? Don't frame the people you argue with as idiots!"

-- Cat Lavigne at the July 2013 CFAR workshop

Replies from: Kawoomba, wedrifid
comment by Kawoomba · 2013-07-30T06:27:18.733Z · LW(p) · GW(p)

If idiots do exist, and you have reason to conclude that someone is an idiot, then you shouldn't deny that conclusion -- at least when you subscribe to an epistemic primacy: that forming true beliefs takes precedence over other priorities.

The quote is suspiciously close to being a specific application of "Don't like reality? Pretend it's different!"

Replies from: JGWeissman, Richard_Kennaway, Fhyve, ChristianKl, MugaSofer
comment by JGWeissman · 2013-07-30T07:08:18.854Z · LW(p) · GW(p)

That quote summarizes a good amount of material from a CFAR class, and, presented in isolation, its intended meaning is not as clear.

The idea is that people are too quick to dismiss people they disagree with as idiots, not really forming accurate beliefs, or even real anticipation-controlling beliefs. So, if you find yourself thinking this person you are arguing with is an idiot, you are likely to get more out of the argument by trying to understand where the person is coming from and what their motivations are.

Replies from: Lumifer
comment by Lumifer · 2013-07-30T18:03:18.968Z · LW(p) · GW(p)

So, if you find yourself thinking this person you are arguing with is an idiot, you are likely to get more out of the argument by trying to understand where the person is coming from and what their motivations are.

Having spent some time on the 'net I can boast of considerable experience of arguing with idiots.

My experience tells me that it's highly useful to determine whether the one you're arguing with is an idiot or not as soon as possible. One reason is that it makes it clear whether the conversation will evolve in an interesting direction or in the kicks-and-giggles direction. It is quite rare for me to take an interest in where a 'net idiot is coming from or what his motivations are -- because there are so many of them.

Oh, and the criteria for idiocy are not what one believes and whether his beliefs match mine. The criteria revolve around the ability (or inability) to use basic logic, a tendency to hysterics, competence in reading comprehension, and other things like that.

Replies from: None
comment by [deleted] · 2015-03-12T14:46:06.059Z · LW(p) · GW(p)

Yes, but fishing out non-idiots from, say, Reddit's front page is rather futile. Non-idiots tend to flee from idiots anyway, so just go where the refugees generally go.

Replies from: Lumifer
comment by Lumifer · 2015-03-12T15:33:00.665Z · LW(p) · GW(p)

LW as a refugee camp... I guess X-D

comment by Richard_Kennaway · 2013-07-31T11:25:11.859Z · LW(p) · GW(p)

The quote is suspiciously close to being a specific application of "Don't like reality? Pretend it's different!"

That can be a useful method of learning. Pretend it's different, act accordingly, and observe the results.

comment by Fhyve · 2013-07-30T16:45:49.748Z · LW(p) · GW(p)

This is more to address the common thought process "this person disagrees with me, therefore they are an idiot!"

Even if they aren't very smart, it is better to frame them as someone who isn't very smart rather than with a directly derogatory term like "idiot."

Replies from: Kawoomba
comment by Kawoomba · 2013-07-30T17:14:04.694Z · LW(p) · GW(p)

This is more to address the common thought process "this person disagrees with me, therefore they are an idiot!"

(Certainly not my criterion, nor that of the LW herd/caravan/flock, a couple stragglers possibly excepted.)

Replies from: KnaveOfAllTrades
comment by KnaveOfAllTrades · 2013-09-09T09:24:24.804Z · LW(p) · GW(p)

a couple stragglers possibly excepted.

I think you missed a trick here...

comment by ChristianKl · 2013-07-30T10:19:11.022Z · LW(p) · GW(p)

The term 'idiot' contains a value judgement that a certain person isn't worth arguing with. It's more than just seeing the other person as having an IQ of 70.

Trying to understand the world view of someone with an IQ of 70 might still provide for an interesting conversation.

Replies from: Kawoomba
comment by Kawoomba · 2013-07-30T10:31:37.430Z · LW(p) · GW(p)

The term 'idiot' contains a value judgement that a certain person isn't worth arguing with.

Except that often it can't be avoided/ is "worth" it if only for status/hierarchy squabbling reasons (i.e. even when the arguments' contents don't matter).

Replies from: ChristianKl
comment by ChristianKl · 2013-07-30T10:36:03.177Z · LW(p) · GW(p)

Except that often it can't be avoided/ is "worth" it if only for status/hierarchy squabbling reasons (i.e. even when the arguments' contents don't matter).

That's why it's not a good idea to think of others as idiots.

Replies from: Kawoomba
comment by Kawoomba · 2013-07-30T10:39:11.106Z · LW(p) · GW(p)

Indeed, just as it can be smart to "forget" when you have a terminal condition. The "pretend it's different" from my ancestor comment sometimes works fine from an instrumental rationality perspective, just not from an epistemic one.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-30T10:42:50.569Z · LW(p) · GW(p)

Whether someone is worth arguing with is a subjective value judgement.

Replies from: Kawoomba
comment by Kawoomba · 2013-07-30T10:44:05.926Z · LW(p) · GW(p)

And given your values you'd ideally arrive at those through some process other than the one you use to judge, say, a new apartment?

Replies from: ChristianKl
comment by ChristianKl · 2013-07-30T10:54:22.176Z · LW(p) · GW(p)

I think that trying to understand the worldview of people who are very different from you is often useful.

Trying to explain ideas in a way that you never explained them before can also be useful.

Replies from: Kawoomba
comment by Kawoomba · 2013-07-30T17:53:43.683Z · LW(p) · GW(p)

I agree. I hope I didn't give the impression that I didn't. Usefulness belongs to instrumental rationality more so than to epistemic rationality.

comment by MugaSofer · 2013-07-31T14:57:32.148Z · LW(p) · GW(p)

That's ... not quite what "framing" means.

comment by wedrifid · 2013-07-30T16:24:04.615Z · LW(p) · GW(p)

"How do you not have arguments with idiots? Don't frame the people you argue with as idiots!"

I predict the opposite effect. Framing idiots as idiots tends to reduce the amount that you end up arguing (or otherwise interacting) with them. If a motivation for not framing people as idiots is required look elsewhere.

comment by Halfwitz · 2013-07-29T16:28:29.347Z · LW(p) · GW(p)

This doesn't look as bad as it looks like it looks.

Qiaochu_Yuan

Replies from: BlindIdiotPoster
comment by Discredited · 2013-07-30T09:43:21.271Z · LW(p) · GW(p)

"Taking up a serious religion changes one's very practice of rationality by making doubt a disvalue." ~ Orthonormal

comment by Jayson_Virissimo · 2013-08-07T02:50:30.384Z · LW(p) · GW(p)

The Arguments From My Opponent Believes Something are a lot like accusations of arrogance. They’re last-ditch attempts to muddy up the waters. If someone says a particular theory doesn’t explain everything, or that it’s elitist, or that it’s being turned into a religion, that means they can’t find anything else.

Otherwise they would have called it wrong.

-- Scott Alexander, On first looking into Chapman’s “Pop Bayesianism”

comment by Jayson_Virissimo · 2013-07-31T23:20:15.675Z · LW(p) · GW(p)

I strongly recommend not using stupid.

-- NancyLebovitz

You can find the comment here, but it is even better when taken completely out of context.

comment by Stabilizer · 2013-10-03T21:01:50.034Z · LW(p) · GW(p)

Now that we've beaten up on these people all over the place, maybe we should step up to the plate and say "how can we do better?".

-Robin Hanson, in a Bloggingheads.tv conversation with Daniel Sarewitz. Sarewitz was spending a lot of time criticizing naive views which many smart people hold about human enhancement.

comment by roland · 2013-07-31T18:11:15.467Z · LW(p) · GW(p)

Experience is the result of using the computing power of reality.

-- Roland

Replies from: None
comment by [deleted] · 2013-08-02T21:59:42.366Z · LW(p) · GW(p)

Quoting yourself is probably a bit too euphoric even for this thread.

comment by Fyrius · 2016-04-14T09:34:41.107Z · LW(p) · GW(p)

Humans are not adapted for the task of scientific research. Humans are adapted to chase deer across the savanna, throw spears into them, cook them, and then—this is probably the part that takes most of the brains—cleverly argue that they deserve to receive a larger share of the meat.

It's amazing that Albert Einstein managed to repurpose a brain like that for the task of doing physics.

Not a very advanced idea, and most people here probably already realised it -- I did too -- but this essay uniquely managed to strike me with the full weight of just how massive the gap really is.

I used to think "human brains aren't natively made for this stuff, so just take your biases into account and then you're good to go". I did not think "my god, we are so ridiculously underequipped for this."

comment by Larks · 2013-07-29T14:24:27.215Z · LW(p) · GW(p)

Perhaps the rule should be "Rationality Quotes from people associated with LessWrong that they made elsewhere", which would be useful, but not simply duplicate other parts of LW.

Replies from: ciphergoth, David_Gerard
comment by Paul Crowley (ciphergoth) · 2013-07-29T15:42:41.931Z · LW(p) · GW(p)

I think the rule should be simply the exact converse of the existing Rationality Quotes rule, so every good quote has a home in exactly one such place.

Replies from: NancyLebovitz, wedrifid
comment by NancyLebovitz · 2013-07-29T17:43:19.737Z · LW(p) · GW(p)

How about a waiting period? I'm thinking that quotes from LW have to be at least 3 years old. It's a way of keeping good quotes from getting lost in the past while not having too much redundancy here.

Replies from: tim
comment by tim · 2013-07-29T19:40:13.007Z · LW(p) · GW(p)

I think three years is too long. I would imagine that there are a large number of useful quotes that are novel to many users that are much less than three years old.

Personally I would say we should just let it ride as is with no restrictions. If redundancy and thread bloat become noticeable issues then yeah, we might want to set up a minimum age for contributions.

comment by wedrifid · 2013-07-31T02:18:30.390Z · LW(p) · GW(p)

I think the rule should be simply the exact converse of the existing Rationality Quotes rule, so every good quote has a home in exactly one such place.

This would be ideal. I like the notion of having a place for excellent rationalist quotes but like having the "non-echo chamber" rationality quotes page too.

comment by David_Gerard · 2013-07-30T13:14:23.218Z · LW(p) · GW(p)

I think let's see what happens.

comment by anandjeyahar · 2013-10-03T14:35:10.349Z · LW(p) · GW(p)

It is tempting but false to regard adopting someone else's beliefs as a favor to them, and rationality as a matter of fairness, of equal compromise. Therefore it is written "Do not believe you do others a favor if you accept their arguments; the favor is to you." -- Eliezer Yudkowsky

comment by Will_Newsome · 2013-08-01T10:13:53.189Z · LW(p) · GW(p)

For a self-modifying AI with causal validity semantics, the presence of a particular line of code is equivalent to the historical fact that, at some point, a human wrote that piece of code. If the historical fact is not binding, then neither is the code itself. The human-written code is simply sensory information about what code the humans think should be written.

— Eliezer Yudkowsky, Creating Friendly AI

Replies from: Will_Newsome
comment by Will_Newsome · 2013-08-01T10:15:07.886Z · LW(p) · GW(p)

The rule of derivative validity—“Effects cannot have greater validity than their causes.”—contains a flaw; it has no tail-end recursion. Of course, so does the rule of derivative causality—“Effects have causes”—and yet, we’re still here; there is Something rather than Nothing. The problem is more severe for derivative validity, however. At some clearly defined point after the Big Bang, there are no valid causes (before the rise of self-replicating chemicals on Earth, say); then, at some clearly defined point in the future (i.e., the rise of homo sapiens sapiens) there are valid causes. At some point, an invalid cause must have had a valid effect. To some extent you might get around this by saying that, [e.g.], self-replicating chemicals or evolved intelligences are pattern-identical with (represent) some Platonic valid cause—a low-entropy cause, so that evolved intelligences in general are valid causes—but then there would still be the question of what validates the Platonic cause. And so on.

— Eliezer Yudkowsky, Creating Friendly AI

Replies from: Will_Newsome
comment by Will_Newsome · 2013-08-01T10:24:15.464Z · LW(p) · GW(p)

I have an intuition that there is a version of reflective consistency which requires R to code S so that, if R was created by another agent Q, S would make decisions using Q's beliefs even if Q's beliefs were different from R's beliefs (or at least the beliefs that a Bayesian updater would have had in R's position), and even when S or R had uncertainty about which agent Q was. But I don't know how to formulate that intuition to something that could be proven true or false. (But ultimately, S has to be a creator of its own successor states, and S should use the same theory to describe its relation to its past selves as to describe its relation to R or Q. S's decisions should be invariant to the labeling or unlabeling of its past selves as "creators". These sequential creations are all part of the same computational process.)

— Steve Rayhawk, commenting on Wei Dai's "Towards a New Decision Theory"

Replies from: Will_Newsome
comment by Will_Newsome · 2013-08-01T10:30:11.766Z · LW(p) · GW(p)

Yes, any physical system could be subverted with a sufficiently unfavorable environment. You wouldn't want to prove perfection. The thing you would want to prove would be more along the lines of, "will this system become at least somewhere around as capable of recovering from any disturbances, and of going on to achieve a good result, as it would be if its designers had thought specifically about what to do in case of each possible disturbance?". (Ideally, this category of "designers" would also sort of bleed over in a principled way into the category of "moral constituency", as in CEV.) Which, in turn, would require a proof of something along the lines of "the process is highly likely to make it to the point where it knows enough about its designers to be able to mostly duplicate their hypothetical reasoning about what it should do, without anything going terribly wrong".

We don't know what an appropriate formalization of something like that would look like. But there is reason for considerable hope that such a formalization could be found, and that this formalization would be sufficiently simple that an implementation of it could be checked. This is because a few other aspects of decision-making which were previously mysterious, and which could only be discussed qualitatively, have had powerful and simple core mathematical descriptions discovered for cases where simplifying modeling assumptions perfectly apply. Shannon information was discovered for the informal notion of surprise (with the assumption of independent identically distributed symbols from a known distribution). Bayesian decision theory was discovered for the informal notion of rationality (with assumptions like perfect deliberation and side-effect-free cognition). And Solomonoff induction was discovered for the informal notion of Occam's razor (with assumptions like a halting oracle and a taken-for-granted choice of universal machine). These simple conceptual cores can then be used to motivate and evaluate less-simple approximations for situations where the assumptions about the decision-maker don't perfectly apply. For the AI safety problem, the informal notions (for which the mathematical core descriptions would need to be discovered) would be a bit more complex -- like the "how to figure out what my designers would want to do in this case" idea above. Also, you'd have to formalize something like our informal notion of how to generate and evaluate approximations, because approximations are more complex than the ideals they approximate, and you wouldn't want to need to directly verify the safety of any more approximations than you had to. (But note that, for reasons related to Rice's theorem, you can't (and therefore shouldn't want to) lay down universally perfect rules for approximation in any finite system.)

Steve Rayhawk

comment by [deleted] · 2015-12-26T04:13:39.990Z · LW(p) · GW(p)

Beware of self-fulfilling thoughts: thoughts the truth conditions of which are subsets of the existence conditions.

-Luke, Pale Blue Dot

comment by [deleted] · 2015-10-01T01:18:28.143Z · LW(p) · GW(p)

Stirring quotes from this video about the Singularity Institute (now MIRI):

It's very hard to predict when you're going to get a piece of knowledge you don't have now - EY

paraphrase: nanotechnology is about the future of the material world, AI is about the future of the information world - a female SI advisor with nanotech experience - sounded very intelligent

(speaking about SI [now MIRI] and that they are seen as cutting edge/beyond the pale of respectability): "...in my experience it's only by pushing things beyond the pale of respectability that you get things done and push the dial" - Thiel

paraphrase: universities can only have near-term goals (up to 7 years max, usually 3 to 5 years), whereas non-profits can have longer-term goals, greater than 10 years - Thiel

IMO synthetic biology constitutes a third domain of advancement - the future of the living world

Replies from: Fyrius
comment by Fyrius · 2016-04-13T23:32:38.201Z · LW(p) · GW(p)

IMO synthetic biology constitutes a third domain of advancement - the future of the living world

Isn't that a subset of the material world? I imagine nanotechnology is going to play a part in medicine and the like too, eventually.
Of course, more than one thing can be about the future of the somethingsomething world.

Replies from: None
comment by [deleted] · 2016-04-14T00:58:48.324Z · LW(p) · GW(p)

Anything is a subset of another thing in one dimension or another.

comment by [deleted] · 2015-09-17T14:26:46.076Z · LW(p) · GW(p)

21st-century Western males are shocked by the idea of rape because it violates cultural assumptions about gentlemanly conduct and the rules of how men compete among themselves for women; so another possibility I was wondering about is if, indeed, men would simply be more shocked by the whole idea than women. It just wasn't clear from the comments whether this was actually the case, or if my female readers were so offended as to not even bother commenting.

EY - Interlude with the Confessor

EY is right, going by contemporary theories:

Structural inequality encompasses the lower status of women in our community, lower rates of pay, and underrepresentation of women in leadership positions. Societies with greater structural inequality have higher levels of violence against women. Normative inequality refers to attitudes and beliefs that support male dominance and male entitlement. Men who perpetrate violence against women are more likely to hold these attitudes. [2]

Though he's making a very different point, I'd like to point out something else, inspired by this piece, that I don't feel would fit the narrative of the generic thread.

In my opinion, violence against men, or intimate partner violence as a gender-neutral construct, is equally important but more neglected, yet, going by a more neutral piece, just as tractable as violence against women.

To satisfy anyone's curiosity, I identify neither as a feminist, nor as a men's rights activist, nor as a humanist, but as a rationalist.

comment by Regex · 2015-09-08T15:42:55.291Z · LW(p) · GW(p)

If I missed something along the line, I'm really willing to learn.

kamenin on Collapse Postulates

comment by almkglor · 2014-09-07T10:31:41.049Z · LW(p) · GW(p)

Jonvon, there is only one human superpower. It makes us what we are. It is our ability to think. Rationality trains this superpower, like martial arts trains a human body. It is not that some people are born with the power and others are not. Everyone has a brain. Not everyone tries to train it. Not everyone realizes that intelligence is the only superpower they will ever have, and so they seek other magics, spells and sorceries, as if any magic wand could ever be as powerful or as precious or as significant as a brain.

Eliezer Yudkowsky