LW/OB Quotes - Fall 2009
post by thomblake · 2009-09-01T15:11:01.113Z · LW · GW · Legacy · 50 comments
This is a monthly thread for posting any interesting rationality-related quotes you've seen on LW/OB.
- Please post all quotes separately (so that they can be voted up/down separately) unless they are strongly related/ordered.
- Do not quote yourself.
- Do not post quotes that are NOT comments/posts on LW/OB - there is a separate thread for this.
- No more than 5 quotes per person per thread, please.
Comments sorted by top scores.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-01T21:59:32.699Z · LW(p) · GW(p)
Meta: I'm not sure we should actually have these threads. Vote up if you agree.
Replies from: Eliezer_Yudkowsky, thomblake
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-01T21:59:49.048Z · LW(p) · GW(p)
Vote this up if you disagree and think we should keep having OBLW quote threads.
Replies from: Alicorn, taw, Dustin
↑ comment by Alicorn · 2009-09-02T16:09:14.634Z · LW(p) · GW(p)
Perhaps we could have them less frequently than the regular quote threads - every two or three months, instead of monthly.
Replies from: SilasBarta
↑ comment by SilasBarta · 2009-09-02T16:25:58.711Z · LW(p) · GW(p)
I agree with Alicorn: having an OBLW quote thread is good, but there probably isn't enough quotable stuff to justify having it every month. It should be rarer -- how about the start of every season/quarter? Vote her post up if you prefer this alternative.
(If you say that the term "alternative" should only be used when there are two choices, you're not necessarily a terrorist, but you are being excessively pedantic.)
Replies from: thomblake
↑ comment by thomblake · 2009-09-02T16:41:14.055Z · LW(p) · GW(p)
Agreed. Tentatively renamed.
And 'alternative' should be used for any number of choices, provided they're mutually exclusive. I must, however, take issue with your usage of 'terrorist'.
Replies from: Alicorn, SilasBarta, SilasBarta
↑ comment by Alicorn · 2009-09-02T17:54:51.992Z · LW(p) · GW(p)
And 'alternative' should be used for any number of choices, provided they're mutually exclusive.
For maximum pedanticness, an alternative is one of two or more time-identical, agent-identical actions or sets of actions that the agent in question is able to perform. An alternative set is the complete list of all alternatives available to a given agent for a given span of time.
↑ comment by SilasBarta · 2009-09-02T22:02:19.866Z · LW(p) · GW(p)
Btw, weren't you going to quote me on Allais-type paradoxes? :-(
↑ comment by SilasBarta · 2009-09-02T16:51:27.422Z · LW(p) · GW(p)
Terrorist remark rewritten ;-)
↑ comment by taw · 2009-09-01T23:43:47.383Z · LW(p) · GW(p)
There are extremely few quotes that are not Eliezer's, so I guess we shouldn't have those.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-02T06:35:08.373Z · LW(p) · GW(p)
Yeah, I also find that a touch disturbing. I have a rather large collection of OBLW quotes I've built up over the last few years, and needless to say, none of them are from me.
Replies from: Larks, anonym
↑ comment by anonym · 2009-09-03T04:17:59.621Z · LW(p) · GW(p)
That's funny. As if not having collected any quotes from your own writings is good evidence of something other than your having some humility, or that they're already all recorded in your writings, or that you can never really genuinely surprise yourself with an insight (and thus think it eminently quotable) unless you have a really bad memory.
On that topic, have others noticed that people very frequently say the silliest things when trying to be humble, making arguments with huge gaping holes that they could never fail to notice in practically any other context? It's an extremely common failure mode. I think most of us find it endearing, though, which is probably why it's almost never commented on (argued against, perhaps, but not pointed out as stupidity three standard deviations below our norm).
comment by anonym · 2009-09-01T15:25:56.612Z · LW(p) · GW(p)
Making a rational model of the world is not like making a home, where the first step is to build a solid foundation. It is more like trying to patch a hole in a sinking ship, where you don’t have the luxury of starting from scratch.
Peter D. Turney — “No Safe Defense, Not Even Science”
comment by anonym · 2009-09-01T15:22:57.250Z · LW(p) · GW(p)
Part of the rationalist ethos is binding yourself emotionally to an absolutely lawful reductionistic universe — a universe containing no ontologically basic mental things such as souls or magic — and pouring all your hope and all your care into that merely real universe and its possibilities, without disappointment.
Eliezer S. Yudkowsky — “Mundane Magic”
comment by Scott Alexander (Yvain) · 2009-09-05T12:55:43.701Z · LW(p) · GW(p)
In my experience, beginning math students simply expect their nice-sounding procedures to work. For example, they expect to be able to add fractions straight across. When you tell them they can’t, they demand to know why they can’t, as though most nice-sounding theorems are true, and if you want to claim that one isn’t, the burden of proof is on you. It is only after students gain considerable mathematical sophistication (or experience getting burned by expectations that don’t pan out) that they place the burden of proof on the theorems.
-- Anna Salamon
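To make the arithmetic concrete, here is a minimal Python sketch (standard library only) contrasting the nice-sounding "straight across" rule with the correct one:

from fractions import Fraction

a, b = Fraction(1, 2), Fraction(1, 3)

# The nice-sounding rule: add numerators and denominators straight across.
naive = Fraction(a.numerator + b.numerator, a.denominator + b.denominator)
print(naive)  # 2/5 -- wrong

# The actual rule: rewrite over a common denominator, then add numerators.
print(a + b)  # 5/6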
Replies from: SilasBarta
↑ comment by SilasBarta · 2009-09-05T14:58:56.321Z · LW(p) · GW(p)
Um, keep in mind that that was in the context of Salamon evading an answer to a serious problem with her exposition: specifically, the question of what a could/should/would agent is not.
So the full context was more like:
Critic: What other kinds of agents could there be, besides “could”/“would”/“should” agents?
AnnaSalamon: Come now, AI people get it. Your question is like thinking you can just add fractions straight across.
Replies from: wedrifid
↑ comment by wedrifid · 2009-09-15T13:47:24.622Z · LW(p) · GW(p)
Thank you, Silas. I reversed my vote. Since these are quotes from LW, the context does matter (to me).
Replies from: SilasBarta
↑ comment by SilasBarta · 2009-09-15T15:48:22.998Z · LW(p) · GW(p)
And I thank you as well. It's good to know you apply that standard consistently, since you'll probably recall it working against me a few days ago :-)
Replies from: wedrifid
↑ comment by wedrifid · 2009-09-15T16:16:56.474Z · LW(p) · GW(p)
If you're talking about the devil's advocate sub-thread, I actually up-voted your comments in that context. From what I recall, my tongue-in-cheek replies were actually intended to emphasise and expand upon a point you were making.
(But I may well have replied to other comments of yours without taking particular note of the author.)
Replies from: SilasBarta
↑ comment by SilasBarta · 2009-09-15T16:28:21.753Z · LW(p) · GW(p)
Oh. My mistake :-/
comment by anonym · 2009-09-01T15:18:40.847Z · LW(p) · GW(p)
There are no surprising facts, only models that are surprised by facts; and if a model is surprised by the facts, it is no credit to that model.
Eliezer S. Yudkowsky — “Quantum Explanations”
Replies from: wedrifid
↑ comment by wedrifid · 2009-09-15T14:07:28.927Z · LW(p) · GW(p)
I agreed with that quote until I read it again here. That a model could be surprised by the facts is some credit to the model.
Replies from: Cyan
↑ comment by Cyan · 2009-09-15T14:18:54.170Z · LW(p) · GW(p)
Could you expand on that? (Maybe the open thread would be the best place.)
Replies from: Johnicholas
↑ comment by Johnicholas · 2009-09-15T14:48:53.969Z · LW(p) · GW(p)
wedrifid is alluding to Popper's notion of falsifiability.
http://en.wikipedia.org/wiki/Karl_Popper http://en.wikipedia.org/wiki/Falsifiability
Note that wedrifid's phrase "that a model could [hypothetically] be surprised by the facts" corresponds to "falsifiability", but EY's phrase "a model is surprised by the facts" actually corresponds better to "falsified".
Popper would say that falsifiability is a good thing and falsified is a bad thing (for models), so both wedrifid and EY are good Popperians.
Replies from: Cyan, wedrifid
↑ comment by wedrifid · 2009-09-15T16:04:08.282Z · LW(p) · GW(p)
(Roughly) what John said.
It really depends what sort of credits I've been charged with granting.
Replies from: Eliezer_Yudkowsky, Cyan
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-16T00:41:22.203Z · LW(p) · GW(p)
Agree. Being falsified means you're at least not "not even wrong" or "lacking in truth-condition", which is credit of a sort, but not really the sort we should be aspiring toward.
comment by anonym · 2009-09-01T15:20:27.037Z · LW(p) · GW(p)
There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from “impossible” to “obvious”. Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards…
Eliezer S. Yudkowsky — “Staring into the Singularity”
Replies from: DanArmak
↑ comment by DanArmak · 2009-09-14T09:05:46.449Z · LW(p) · GW(p)
There are no hard problems, only problems that are hard to a certain level of intelligence.
What reasons do we have for thinking this?
Some true statements in math have no (finite-length) proof. Some computations are proven to require bigger-than-universe resources to compute (and can't be compressed). Don't these qualify as ultimately hard problems?
Replies from: anonym
↑ comment by anonym · 2009-09-15T03:25:51.227Z · LW(p) · GW(p)
Perhaps the key point is that for any potential "problem", there is some level of intelligence according to which the "problem" is utterly transparent in the sense that it is either obviously answerable, obviously unanswerable in principle, or obviously unanswerable in practice given a finite universe or other constraints. If there is a level of intelligence that effortlessly sees which of these is the case and solves it if it is solvable, then I don't think it makes sense to say the problem is intrinsically hard.
Replies from: Christian_Szegedy
↑ comment by Christian_Szegedy · 2009-09-15T04:05:56.905Z · LW(p) · GW(p)
There are mathematical problems that are non-obviously unanswerable, meaning that in any finite universe (as opposed to just some fixed one), it is unanswerable whether the problem is answerable.
You could assume the existence of infinite intelligences in infinite universes, but then you may run into infinite problems that are necessarily unsolvable.
So I would agree with Dan that the quote is quite wrong, and hollow at best (hollow if finite problems and infinite intelligences are assumed).
I am not surprised Eliezer marked that article "as wrong, obsolete, deprecated by an improved version, or just plain old".
Replies from: anonym
↑ comment by anonym · 2009-09-15T05:24:54.492Z · LW(p) · GW(p)
Does your 2nd sentence hold recursively? And could you explain why that is the case?
I can easily imagine mathematical problems that are non-obviously not answerable for me or for human beings much smarter than I am, just as the Continuum Hypothesis was for human beings before Gödel and Cohen, but I can also easily imagine an intelligence that would, for example, nearly instantly notice Gödel's incompleteness theorems as obvious consequences of the Peano axioms.
That is not literally what the original quote says, but an intelligence that could, for example, 'learn' our next century of discoveries in mathematics and theoretical physics in an afternoon seems to me to justify the weaker position that there are possible intelligences that would regard every problem we have yet solved or shown to be unsolvable as obvious and not hard.
Replies from: Christian_Szegedy, DanArmak
↑ comment by Christian_Szegedy · 2009-09-15T06:12:47.953Z · LW(p) · GW(p)
Assume that for every statement within the Peano axiom system there is either a proof of it, a proof of its negation, or a proof of its independence.
For every program P you could create the statement S(P) = "P eventually halts". Now you could solve the halting problem: for a given P, iterate over every proof and check whether it is a proof of S(P), of not S(P), or of "S(P) is independent". Once the machine finds a proof of not S(P) or of independence, it stops with false; if it finds a proof of S(P), it stops with true. It necessarily stops, by your original assumption.
This works even if you allow arbitrarily long chains of the form "it is not provable that it is not provable that ... that S", since every finite proof has to be checked against only a finite number of such chains (a proof must be longer than the statement it proves).
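A minimal Python sketch of this reduction, assuming the completeness property above; enumerate_proofs and proves are hypothetical stubs (no such procedures can exist in full, which is the point of the argument):

from typing import Iterator

def enumerate_proofs() -> Iterator[str]:
    # Hypothetical: yield every well-formed PA proof, shortest first.
    raise NotImplementedError

def proves(proof: str, statement: str) -> bool:
    # Hypothetical: check whether `proof` is a valid PA proof of `statement`.
    raise NotImplementedError

def halts(program: str) -> bool:
    # Under the opening assumption this loop always terminates, so it would
    # decide the halting problem, which is impossible; hence the assumption fails.
    s = f"program {program!r} eventually halts"
    for proof in enumerate_proofs():
        if proves(proof, s):
            return True   # PA proves the program halts
        if proves(proof, f"not ({s})") or proves(proof, f"({s}) is independent of PA"):
            return False  # proved non-halting, or independent (hence never halts)
    raise AssertionError("unreachable under the opening assumption")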
That is not literally what the original quote says, but an intelligence that could, for example, 'learn' our next century of discoveries in mathematics and theoretical physics in an afternoon seems to me to justify the weaker position that there are possible intelligences that would regard every problem we have yet solved or shown to be unsolvable as obvious and not hard.
That is plausible. But as you mentioned, the original quote, catchy though it is, says something different and is unfortunately (unlike most philosophical statements) formally wrong.
Replies from: anonym, Johnicholas
↑ comment by anonym · 2009-09-21T00:47:06.763Z · LW(p) · GW(p)
Thanks for the concrete example. I do still think, though, that there is something of value in the quote that may be worth salvaging, perhaps by restricting it to solvable problems of the sort we care about most (such as scientific and mathematical advances) and emphasizing not that everything is obvious to some conceivable intelligence, but that most of what we currently find non-obvious and difficult is so not because it is intrinsically hard, but because we are so limited.
The heart of the quote for me is that instead of hardness being a function of one argument (in Haskell notation):
difficulty :: Problem -> PositiveReal
it is a function of two arguments:
difficulty :: Mind -> Problem -> PositiveReal
And that most of the interesting real-world Problem instances that map to very large numbers for us (i.e. the difficult ones that we will eventually solve if we survive long enough) are problems that would be deemed obvious to sufficiently intelligent minds.
And that is a worthwhile insight, even if the function is not defined for all problems, even if there are solvable problems for which no physically possible mind would yield "difficulty m p" (or "difficulty(m, p)" in Python syntax) small enough to fall below the threshold of obviousness, and even if for any possible mind we can find solvable problems of arbitrarily high difficulty.
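A toy Python rendering of that two-argument signature; the mind names, the numbers, and the threshold are all invented purely for illustration:

# All values invented for illustration: the point is only that difficulty
# is keyed on the (mind, problem) pair, not on the problem alone.
DIFFICULTY = {
    ("2009 human", "continuum hypothesis"): 1e9,
    ("far stronger mind", "continuum hypothesis"): 0.5,
}

OBVIOUSNESS_THRESHOLD = 1.0

def difficulty(mind: str, problem: str) -> float:
    return DIFFICULTY[(mind, problem)]

def obvious_to(mind: str, problem: str) -> bool:
    return difficulty(mind, problem) < OBVIOUSNESS_THRESHOLD

print(obvious_to("2009 human", "continuum hypothesis"))         # False
print(obvious_to("far stronger mind", "continuum hypothesis"))  # True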
↑ comment by Johnicholas · 2009-09-15T12:29:10.515Z · LW(p) · GW(p)
There's a flaw in your proof - the step from "The machine will necessarily stop" to "This solves the halting problem" is unjustified.
Despite the flaw, I agree with your general conclusion - there are and will be challenging as well as unsolvable problems.
Replies from: pengvado
↑ comment by pengvado · 2009-09-15T20:10:31.428Z · LW(p) · GW(p)
It's justified. If a machine halts, then there's a proof of that in PA (simply the list of steps it performs before halting). Therefore, "independent of PA" implies "doesn't halt". Therefore, a provability decider is the same as a halting decider.
Replies from: Johnicholas
↑ comment by Johnicholas · 2009-09-15T23:31:13.301Z · LW(p) · GW(p)
With the added steps, yes, there is a proof.
↑ comment by DanArmak · 2009-09-16T20:14:39.931Z · LW(p) · GW(p)
Christian's argument shows there are always problems that can't be solved or shown to be unsolvable. But even if you ignore them and look at finitely solvable problems (or provable theorems in PA), there's no upper bound on the time or processing power needed to solve a randomly chosen problem. Not even for the "smartest" intelligence allowed by the laws of physics (i.e. the one ideally optimized to solve the chosen problem).
Replies from: anonym
↑ comment by anonym · 2009-09-21T00:56:32.315Z · LW(p) · GW(p)
Thanks for the clear explanation. My response below to Christian is relevant here also.
comment by thomblake · 2009-09-03T12:35:54.664Z · LW(p) · GW(p)
If making the "wrong" choice when presented with two high-probability, high-payoff lottery tickets is exploitation, I don't want to be empowered.
Replies from: SilasBarta
↑ comment by SilasBarta · 2009-10-12T20:13:14.326Z · LW(p) · GW(p)
Finally saw this. Thanks for the shout-out! :-)
comment by HughRistik · 2009-09-01T18:17:14.198Z · LW(p) · GW(p)
We at the Church of Google believe the search engine Google is the closest humankind has ever come to directly experiencing an actual God (as typically defined). We believe there is much more evidence in favour of Google's divinity than there is for the divinity of other more traditional gods.
We reject supernatural gods on the notion they are not scientifically provable. Thus, Googlists believe Google should rightfully be given the title of "God", as She exhibits a great many of the characteristics traditionally associated with such Deities in a scientifically provable manner.
Replies from: Rune
↑ comment by Rune · 2009-09-01T20:42:24.271Z · LW(p) · GW(p)
How is this an LW/OB quote?
Replies from: HughRistik, HughRistik
↑ comment by HughRistik · 2009-09-01T22:05:50.101Z · LW(p) · GW(p)
Oops, I thought this was the other thread. Moved.
↑ comment by HughRistik · 2009-09-01T21:56:21.612Z · LW(p) · GW(p)
Thanks for asking, rather than merely down-voting.
Since this work is an obvious parody whose authors do not believe that Google is God, the implicature is that religious reasoning fails to follow through on its own criteria for the existence of deities.
This is rationalist commentary, similar to the Church of the Flying Spaghetti Monster, which presents beliefs as plausible as those of major religions, given the kind of reasoning they involve. Similarly, the Church of Google presents beliefs that are even more plausible than actual religious beliefs according to religion's very own criteria.
Of course, we've all heard this all before. The rationality content of the Church of Google is run-of-the-mill pro-science, pro-rationality, religious skepticism. What made me quote it was how wittily it was presented. Here's some more:
Evidence of Google's existence is abundant. There is more evidence for the existence of Google than any other God worshiped today. Extraordinary claims require extraordinary evidence.
Does everyone get it now, and can I have my karma back? Did anyone get it on the first read?