Rationality Quotes: November 2010
post by jaimeastorga2000 · 2010-11-02T20:41:33.804Z · LW · GW · Legacy · 367 comments
A monthly thread for posting rationality-related quotes you've seen recently (or had stored in your quotesfile for ages).
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.
comment by anonym · 2010-11-03T06:30:42.910Z · LW(p) · GW(p)
If you are in a shipwreck and all the boats are gone, a piano top … that comes along makes a fortuitous life preserver. But this is not to say that the best way to design a life preserver is in the form of a piano top. I think that we are clinging to a great many piano tops in accepting yesterday’s fortuitous contrivings.
Buckminster Fuller
comment by Tesseract · 2010-11-05T20:34:18.614Z · LW(p) · GW(p)
Kołakowski's Law, or The Law of the Infinite Cornucopia:
For any given doctrine that one wants to believe, there is never a shortage of arguments by which to support it.
Leszek Kołakowski
↑ comment by wedrifid · 2010-11-05T20:39:55.461Z · LW(p) · GW(p)
I like it.
↑ comment by stochastic · 2010-11-05T22:26:49.881Z · LW(p) · GW(p)
+1
comment by PeterS · 2010-11-03T05:22:17.134Z · LW(p) · GW(p)
Rule I
We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
To this purpose the philosophers say that Nature does nothing in vain, and more is in vain when less will serve; for Nature is pleased with simplicity, and affects not the pomp of superfluous causes.
Rule II
Therefore to the same natural effects we must, as far as possible, assign the same causes.
As to respiration in a man and in a beast; the descent of stones in Europe and in America; the light of our culinary fire and of the sun; the reflection of light in the earth, and in the planets.
Rule III
The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.
For since the qualities of bodies are only known to us by experiments, we are to hold for universal all such as universally agree with experiments; and such as are not liable to diminution can never be quite taken away. We are certainly not to relinquish the evidence for the sake of dreams and vain fictions of our own devising; nor are we to recede from the analogy of Nature, which is wont to be simple, and always consonant to itself. . .
Rule IV
In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.
This rule we must follow, that the argument of induction may not be evaded by hypotheses.
Isaac Newton, Philosophiae naturalis: Rules of Reasoning in Philosophy
comment by DSimon · 2010-11-04T20:06:19.303Z · LW(p) · GW(p)
Man, I'm amazing! I'm a machine that turns FOOD into IDEAS!
-- T-Rex, Dinosaur Comics #539
↑ comment by ciantic · 2010-11-07T20:18:19.776Z · LW(p) · GW(p)
I know this is well known, but to supplement the T-Rex:
A mathematician is a device for turning coffee into theorems.
-Alfréd Rényi/Paul Erdős
↑ comment by SilasBarta · 2010-11-23T23:03:52.018Z · LW(p) · GW(p)
And a cat is a device for turning kibble into cuddle.
comment by Richard_Kennaway · 2010-11-02T21:43:19.270Z · LW(p) · GW(p)
The fact that I have no remedy for all the sorrows of the world is no reason for my accepting yours. It simply supports the strong probability that yours is a fake.
H.L. Mencken, Minority Report.
↑ comment by [deleted] · 2010-11-02T23:31:11.227Z · LW(p) · GW(p)
I wrote about this.
The idea is: I can criticize a plan that claims wonderful successes, even if I have no corresponding plan of my own. Maybe we don't know how to get wonderful successes at all. Maybe they're impossible. Maybe your reasoning is suspect.
↑ comment by wedrifid · 2010-11-02T21:48:54.918Z · LW(p) · GW(p)
I am not sure I get it.
↑ comment by sketerpot · 2010-11-02T22:16:27.025Z · LW(p) · GW(p)
A more direct paraphrase would be: "Just because I don't have all the answers doesn't mean that your answers are correct."
A concrete example: just because scientists don't currently know everything about how evolution happened, that doesn't mean that Young Earth Creationists are right. Typical YEC debating strategy is to look for gaps (real or imagined) in our current theories, and act as if that proves that God created the world in six days, and from the dust created every creeping thing that creepeth upon the earth, etc.
↑ comment by Paul Crowley (ciphergoth) · 2010-11-02T22:43:00.610Z · LW(p) · GW(p)
No, it speaks of remedy. It's not about beliefs about the world, but about courses of action, and there he's dead wrong - a course of action can only be bad by comparison to a better alternative.
↑ comment by David_Gerard · 2010-11-03T09:49:24.150Z · LW(p) · GW(p)
"We must do something. This is something. Therefore, we must do this." is a fallacy. (The Politician's Syllogism.) Mencken's statement pretty clearly includes the course of action of not taking action; he's stating that any action is not necessarily better than no action, and that taking on any belief is not necessarily better than holding no belief.
↑ comment by Vladimir_M · 2010-11-02T22:55:21.143Z · LW(p) · GW(p)
I don't think either of you are getting it right. I'm not familiar with the context of this particular quote, but knowing it's from Mencken, he's clearly referring to various idealistic busybodies and their grand (and typically disastrously unsound) plans to solve the world's problems. The quote is directed against idealists who assume moral high ground and scoff at those who question their designs.
↑ comment by Paul Crowley (ciphergoth) · 2010-11-02T23:05:59.654Z · LW(p) · GW(p)
Ah, so it's about whether a plan meets some absolute standard, rather than which plan is best, and the moral is that just because I don't know of a plan that meets standard X is no reason to think your plan will - in fact the reverse.
↑ comment by AlanCrowe · 2010-11-02T23:29:02.100Z · LW(p) · GW(p)
I think the absolute standard in question is the status quo. Will the proposed remedy make things worse? Mencken has no remedy of his own. In the first sentence he denies that this lack is evidence in favour of the proposition that somebody else's remedy will be an improvement on leaving things alone.
↑ comment by Vladimir_M · 2010-11-02T23:26:40.019Z · LW(p) · GW(p)
Basically, yes. For instance, the alcohol prohibitionists of Mencken's day were a prime example of the sort of people he targeted with this quote.
↑ comment by BillyOblivion · 2010-11-03T13:58:10.409Z · LW(p) · GW(p)
But sometimes that better alternative is "let's wait and see". And that's what many people aren't willing to do.
comment by MBlume · 2010-11-11T02:12:02.784Z · LW(p) · GW(p)
When I was halfway through my Ph.D. I formulated a hypothesis: The proximate challenge that keeps you from graduating is that you have to write a thesis. But the ultimate challenge to getting your Ph.D. is this: You somehow have to learn to understand, deep down, that all your romantic notions about the Ph.D. are bunk, that you will be exactly the same person on the day after you get it that you were the day before, and that you need to stop waiting for the day when you feel like a god and just write something down and get on with life.
It may take you years to accept this, and it may drive you to drink, but after you get to that point you can graduate.
Only then will you be able to live with the fact that your thesis looks like crap to you. Your thesis will always look like crap to you. Either you will have figured out absolutely everything and your thesis will look incredibly boring to you, because you've moved on, or -- vastly more likely -- your thesis will look woefully incomplete because, geez, there is so much that you couldn't figure out, and you're just so stupid!
Or, most likely of all, you will think both of these things at the same time.
Similarly: Being the world's foremost expert on a particular scientific problem is a lot less exciting in real life than it seems in the movies. In fact, being on the frontier of science feels like being totally, hopelessly lost and confused. Why this came as a surprise to me I'll never know.
--mechanical_fish on Hacker News. Emphasis mine. source
comment by jaimeastorga2000 · 2010-11-02T20:49:16.409Z · LW(p) · GW(p)
From desert cliff and mountaintop we trace the wide design,
Strike-slip fault and overthrust and syn and anticline...
We gaze upon creation where erosion makes it known,
And count the countless aeons in the banding of the stone.
Odd, long-vanished creatures and their tracks & shells are found;
Where truth has left its sketches on the slate below the ground.
The patient stone can speak, if we but listen when it talks.
Humans wrote the Bible; God wrote the rocks.
There are those who name the stars, who watch the sky by night,
Seeking out the darkest place, to better see the light.
Long ago, when torture broke the remnant of his will,
Galileo recanted, but the Earth is moving still.
High above the mountaintops, where only distance bars,
The truth has left its footprints in the dust between the stars.
We may watch and study or may shudder and deny,
Humans wrote the Bible; God wrote the sky.
By stem and root and branch we trace, by feather, fang and fur,
How the living things that are descend from things that were.
The moss, the kelp, the zebrafish, the very mice and flies,
These tiny, humble, wordless things--how shall they tell us lies?
We are kin to beasts; no other answer can we bring.
The truth has left its fingerprints on every living thing.
Remember, should you have to choose between them in the strife,
Humans wrote the Bible; God wrote life.
And we who listen to the stars, or walk the dusty grade,
Or break the very atoms down to see how they are made,
Or study cells, or living things, seek truth with open hand.
The profoundest act of worship is to try to understand.
Deep in flower and in flesh, in star and soil and seed,
The truth has left its living word for anyone to read.
So turn and look where best you think the story is unfurled.
Humans wrote the Bible; God wrote the world.
~Catherine Faber, The Word of God
↑ comment by Jayson_Virissimo · 2010-11-03T18:18:41.494Z · LW(p) · GW(p)
Long ago, when torture broke the remnant of his will, Galileo recanted, but the Earth is moving still.
What evidence is there that Galileo was tortured?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-11-04T18:20:10.760Z · LW(p) · GW(p)
So far as I know, he wasn't, just placed under house arrest. It jumped out at me too; you really have to get these poems exactly right on a factual level or it takes a lot away.
↑ comment by Kaj_Sotala · 2010-11-04T18:38:48.652Z · LW(p) · GW(p)
The modern conception of Galileo as someone harshly persecuted for his beliefs seems rather exaggerated: in reality, he was even explicitly encouraged by the church to write a book on the subject. It was only when he offended the Pope in that book that he was sentenced to house arrest.
In the end, Cardinal Bellarmine, acting on directives from the Inquisition, delivered him an order not to "hold or defend" the idea that the Earth moves and the Sun stands still at the centre. The decree did not prevent Galileo from discussing the heliocentric hypothesis (thus maintaining a facade of separation between science and the Church). For the next several years Galileo stayed well away from the controversy. He revived his project of writing a book on the subject, encouraged by the election of Cardinal Maffeo Barberini as Pope Urban VIII in 1623. Barberini was a friend and admirer of Galileo, and had opposed the condemnation of Galileo in 1616. The book, Dialogue Concerning the Two Chief World Systems, was published in 1632, with formal authorization from the Inquisition and papal permission. [...]
Earlier, Pope Urban VIII had personally asked Galileo to give arguments for and against heliocentrism in the book, and to be careful not to advocate heliocentrism. He made another request, that his own views on the matter be included in Galileo's book. Only the latter of those requests was fulfilled by Galileo. Whether unknowingly or deliberately, Simplicio, the defender of the Aristotelian Geocentric view in Dialogue Concerning the Two Chief World Systems, was often caught in his own errors and sometimes came across as a fool. Indeed, although Galileo states in the preface of his book that the character is named after a famous Aristotelian philosopher (Simplicius in Latin, Simplicio in Italian), the name "Simplicio" in Italian also has the connotation of "simpleton."[48] This portrayal of Simplicio made Dialogue Concerning the Two Chief World Systems appear as an advocacy book: an attack on Aristotelian geocentrism and defence of the Copernican theory. Unfortunately for his relationship with the Pope, Galileo put the words of Urban VIII into the mouth of Simplicio. Most historians agree Galileo did not act out of malice and felt blindsided by the reaction to his book.[49] However, the Pope did not take the suspected public ridicule lightly, nor the Copernican advocacy. Galileo had alienated one of his biggest and most powerful supporters, the Pope, and was called to Rome to defend his writings.
↑ comment by Tyrrell_McAllister · 2010-11-04T20:00:44.425Z · LW(p) · GW(p)
So far as I know, he wasn't, just placed under house arrest.
According to Owen Gingerich's The Great Copernicus Chase, the 1633 decree calling Galileo to be interrogated* read, in part, as follows:
Galileo Galilei ... is to be interrogated concerning the accusation, even threatened with torture, and if he sustains it, proceeding to an abjuration of the vehement [suspicion of heresy] before the full Congregation of the Holy Office, sentenced to imprisonment....
(Emphasis added.) Gingerich goes on to say:
On the next page the results of the interrogation are recorded. In Italian are Galileo's words: 'I do not hold and have not held this opinion of Copernicus since the command was intimated to me that I must abandon it.' Then he was again told to speak the truth under the threat of torture. He responded: 'I am here to submit, and I have not held this opinion since the decision was pronounced, as I have stated.' Finally, there is a notation that nothing further could be done, and this time the document is properly signed in Galileo's hand. Galileo was sent back to his house at Arcetri, outside Florence, where he remained under house arrest until his death in 1642.
(Emphasis added.) These quotes can be seen using Amazon's "Look Inside" feature. This link worked for me. These passages are also excerpted in this pdf.
So, Galileo was explicitly threatened with torture, though he was not actually tortured and may not even have been "shown the instruments of torture" (which is the strongest claim made in reputable sources). As I argue in this thread, I believe that this justifies saying that the Church used torture (as an institutionalized practice) to force Galileo to recant.
* An earlier version of this comment referred here to "the 1633 sentence entered against Galileo" because I misread Gingerich's use of the word "sentence" to refer to a sentence of punishment, but he just meant a grammatical sentence ><.
↑ comment by Jayson_Virissimo · 2010-11-04T18:30:21.526Z · LW(p) · GW(p)
I got burned during a debate because I trusted the history from my physics textbook. After having read several books on the history of science (rather than summaries inside larger works), I am convinced that the Dark Arts are on full display even in natural science coursework.
↑ comment by Tyrrell_McAllister · 2010-11-04T01:51:40.545Z · LW(p) · GW(p)
What evidence is there that Galileo was tortured?
A gun can be used to commit a crime even if it isn't fired.
↑ comment by steven0461 · 2010-11-04T02:36:06.048Z · LW(p) · GW(p)
"Torture" here is analogous to "shooting", not "crime".
↑ comment by Tyrrell_McAllister · 2010-11-04T02:55:00.585Z · LW(p) · GW(p)
I was analogizing "torture" with "gun", not "crime" or "shooting". Torture was a tool that the church had on hand and was prepared to use, and Galileo's knowledge of their threat to use torture was what led him to recant. (It was the forcing of his recanting that was the "crime" in my analogy.)
It might be more precise to say that what the church had on hand was an institutionalized practice of torture, but using "torture" to refer to the practice (rather than a particular act) seems within the bounds of accuracy in poetry.
↑ comment by Emile · 2010-11-04T18:55:47.332Z · LW(p) · GW(p)
That's a bit contrived - imagine if a presidential candidate mentions how his will was broken by torture in Vietnam, and afterward it's revealed that all that happened was that he was told he might be tortured, so he spilled the beans immediately. I wouldn't expect his poll numbers to go up.
↑ comment by Tyrrell_McAllister · 2010-11-04T19:13:38.526Z · LW(p) · GW(p)
imagine if a presidential candidate mentions how his will was broken by torture in Vietnam, and afterward it's revealed that all that happened was that he was told he might be tortured, so he spilled the beans immediately. I wouldn't expect his poll numbers to go up.
I would still say that torture was used to break his will. To say this would be accurate, if not precise (because I'm not specifying whether I mean a particular act or an institutionalized practice). Whether his will proved too easy to break to satisfy the electorate is another matter.
↑ comment by steven0461 · 2010-11-04T02:34:27.206Z · LW(p) · GW(p)
What about a shooting? Can a gun be used to commit a shooting even if it isn't fired?
↑ comment by Mass_Driver · 2010-11-03T07:07:04.998Z · LW(p) · GW(p)
Who is Catherine Faber? Has she made anything public about herself other than this wonderful poem? Google and Wikipedia were not immediately helpful.
↑ comment by Emile · 2010-11-03T08:56:00.213Z · LW(p) · GW(p)
From her website:
This song was inspired when a friend of mine complained to me about a run-in with some Creationists, and asked "what can you say to such people?" The first words that popped out of my mouth were "humans wrote the bible. God wrote the rocks."
From her bio:
Cat Faber is the offspring of a sasquatch and a space alien, which gave her a unique perspective on things like sports and religion (if those can be said to be separate subjects). Her taste in music is likewise unusual, combining a love for the folksong style with an interest in subjects like science and magic. This made her such a natural for filk that it is astonishing she didn't discover it until she was nearly full grown. She sang from babyhood, though her sasquatch parent maintains she was tone-deaf until about the sixth grade. In 1996 she hooked up with Arlene Hills to form the filk duo Echo's Children.
↑ comment by Tiiba · 2010-11-03T15:03:59.832Z · LW(p) · GW(p)
I want to upvote this twice.
↑ comment by byrnema · 2010-11-04T01:08:47.631Z · LW(p) · GW(p)
This comment being upvoted +21 doesn't fit my model of LessWrong voting, because it personifies the natural world with a God-concept, even if it is advocating for science and evolution. Am I missing something?
↑ comment by Perplexed · 2010-11-04T03:01:46.756Z · LW(p) · GW(p)
So should every metaphor be voted down? Or just personifying metaphors? Or just metaphors mentioning deities?
I downvoted it because it perpetuated the myth that Galileo was tortured. Plus, God knows, the poetry was pretty awful.
↑ comment by Risto_Saarelma · 2010-11-04T03:44:13.637Z · LW(p) · GW(p)
So should every metaphor be voted down? Or just personifying metaphors? Or just metaphors mentioning deities?
I figure this particular one strikes some as a bit iffy, since the metaphor is so close to the salient metaphor that actual creationists use and treat as a non-metaphor. Metaphors like "God wrote life" that are closely associated with unsympathetic real-world groups tend to carry a bit of extra baggage. The matter is confused further by the original context, where this was written as a response to creationists.
↑ comment by NancyLebovitz · 2010-11-04T17:49:50.943Z · LW(p) · GW(p)
What details have you got about Galileo? I'd heard that he was shown the instruments of torture, and recanted at that point.
↑ comment by Perplexed · 2010-11-04T20:36:35.096Z · LW(p) · GW(p)
Well, there is some dispute whether he was "shown the instruments". A historian named Gingerich apparently argues that the showing never took place. But, in any case, threat of torture is not torture - or at least it is not what comes to mind when the myth of torture is repeated. The myth is a falsehood, which, if repeated by someone who knows better, is usually referred to as a "lie".
↑ comment by gwern · 2010-11-07T13:41:03.986Z · LW(p) · GW(p)
But, in any case, threat of torture is not torture - or at least it is not what comes to mind when the myth of torture is repeated.
Sounds like mock executions - they're not actually being executed...
↑ comment by Unnamed · 2010-11-04T01:47:12.591Z · LW(p) · GW(p)
It beautifully promotes Joy in the Merely Real, and strongly encourages the pursuit of knowledge.
↑ comment by jaimeastorga2000 · 2010-11-06T19:01:13.704Z · LW(p) · GW(p)
I suspect it may be something similar to what NihilCredo said; rationalist quotes from theist sources are just so much fun.
↑ comment by Apprentice · 2010-11-03T23:57:00.084Z · LW(p) · GW(p)
If you take the Christian Bible and put it out in the wind and the rain, soon the paper on which the words are printed will disintegrate and the words will be gone. Our bible IS the wind and the rain.
-- Something Wiccans like to say. Google gives conflicting advice on the original source.
↑ comment by steven0461 · 2010-11-04T02:19:10.676Z · LW(p) · GW(p)
It seems LW has now sunk to the level of "my holy book can beat up your holy book".
↑ comment by Apprentice · 2010-11-04T07:57:42.405Z · LW(p) · GW(p)
While I don't particularly mind this being downvoted and would normally have expected it to be, I am slightly confused why this pantheistic anti-Bible quote is being downvoted while the pantheistic anti-Bible quote I posted it in reply to is being upvoted so much.
↑ comment by ArisKatsaris · 2010-11-04T15:09:14.268Z · LW(p) · GW(p)
Besides those differences already mentioned by others: The parent quote talks about the continuous discovery of knowledge, yours talks about the obliteration of knowledge ("the words will be gone"), as if the fact that a text can be deleted proves it wrong.
↑ comment by NancyLebovitz · 2010-11-04T12:04:42.315Z · LW(p) · GW(p)
-2 isn't a whole lot.
However, I think your quote is an unfair comparison. Christianity is not identically equal to physical bibles. Wiccans put a mythic overlay on the wind and the rain.
↑ comment by Risto_Saarelma · 2010-11-04T12:27:42.710Z · LW(p) · GW(p)
Well, there is the idea that if you'd wipe out all memory of Christianity, it'd never come back, but if you'd wipe out all memory of direct natural phenomena like wind and rain, people would rediscover them pretty quickly.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-11-04T18:32:20.389Z · LW(p) · GW(p)
But they wouldn't rediscover the mythic overlay, which is what makes the original quote a lie and an attempt to steal credit.
↑ comment by cousin_it · 2010-11-04T18:43:35.689Z · LW(p) · GW(p)
There seems to be an interesting factual question lurking here: how much of the mythic overlay would people reinvent in a similar form, even if they forgot all their language and culture? A quick search turned up the amazing Wikipedia page List of thunder gods. Of course, the major monotheistic religions are also very similar to each other (I'd say about as close as C# was to Java when it first appeared), but they didn't arise in ignorance of each other, as pagan mythologies did.
↑ comment by Risto_Saarelma · 2010-11-05T05:49:44.743Z · LW(p) · GW(p)
I was thinking the same. My understanding is that neopaganism is more about the general process with which people come up with mythic significance for natural phenomena than any specific pagan myth. There certainly seems to be a case for humans doing that spontaneously in a state of nature, though it's hard to tell exactly how wide the variation would be.
The closest the human universals list has are "belief in supernatural/religion" and "weather control (attempts to)". So everyone ends up trying to magic up nature into doing stuff, but they're not necessarily reverent about it like the neopagans would like?
↑ comment by wedrifid · 2010-11-04T18:42:02.668Z · LW(p) · GW(p)
Christianity would not come back. Not with that name and not with those details.
Science would not come back either, not with that name and not with those details. It would actually be fascinating to see how we built up our understanding a second time around. Much of how we carve reality into human-sized pieces is an artifact of how it was discovered, as well as mere chance. Rediscovering the mechanisms behind natural phenomena may well produce systems of knowledge that take considerable effort to understand.
↑ comment by Snowyowl · 2010-11-05T11:26:46.428Z · LW(p) · GW(p)
I think that human-sized pieces will always be human-sized pieces. Important discoveries may be made in a different order, but if we turned back the clock I'm pretty sure we'd rediscover fire, positional numeral systems (though not necessarily base 10), metallurgy, and electromagnetism, assuming humanity doesn't go extinct too fast. On the other hand, achievements like space travel and the nuclear bomb depended heavily on the geopolitics of the time, and I wouldn't expect them to be replicated.
↑ comment by mwaser · 2010-11-04T12:18:39.387Z · LW(p) · GW(p)
I can think of several reasons:
- Your post appears to be a dominance game. Your bible will obliterate their bible.
- While beauty is in the eye of the beholder, I would guess that the initial quote strikes many here as elegant poetry that is well worth sharing (and upvotes effectively equal sharing).
- Your post isn't particularly interesting, so I would guess that it wouldn't attract any upvotes, and point 1 means that it is nearly certain to attract at least two or three downvotes.
↑ comment by Jack · 2010-11-04T08:56:21.983Z · LW(p) · GW(p)
NMDV, but I have no idea what your quote is supposed to mean.
↑ comment by Apprentice · 2010-11-04T14:05:26.221Z · LW(p) · GW(p)
It ties into several pagan themes; this-worldliness, nature-worship, immanence, pantheism, anti-dogmatism and the continuity and durability of these ideas.
↑ comment by Jack · 2010-11-04T15:01:38.274Z · LW(p) · GW(p)
Okay. Well people here tend to like this-worldliness and anti-dogmatism but tend to dislike nature-worship, 'immanence' and pantheism. So that pretty much explains the downvotes.
Compare to the first one which is a poem about how science is way cooler than religion. It's like rationalist catnip. I wouldn't take it personally.
↑ comment by Risto_Saarelma · 2010-11-04T09:50:01.065Z · LW(p) · GW(p)
Dunno either. I liked yours.
Maybe people are associating the Wiccan connection with New Agey woo and the straight-up anti-rationalism it sometimes turns into.
comment by [deleted] · 2010-11-03T05:21:09.815Z · LW(p) · GW(p)
Getting caught up in style and throwing away victory is something for the lower ranks to do. Captains can't even think about doing such a carefree thing. Don't try to be a good guy. It doesn't matter who owes who. From the instant they enter into a war, both sides are evil.
- Shunsui Kyōraku, Bleach
Related to: Politics, Protection
comment by aausch · 2010-11-04T03:17:11.396Z · LW(p) · GW(p)
"The best thing for being sad," replied Merlin, beginning to puff and blow, "is to learn something. That's the only thing that never fails. You may grow old and trembling in your anatomies, you may lie awake at night listening to the disorder of your veins, you may miss your only love, you may see the world about you devastated by evil lunatics, or know your honour trampled in the sewers of baser minds. There is only one thing for it then — to learn. Learn why the world wags and what wags it. That is the only thing which the mind can never exhaust, never alienate, never be tortured by, never fear or distrust, and never dream of regretting. Learning is the only thing for you. Look what a lot of things there are to learn."
— T.H. White (The Once and Future King)
↑ comment by soreff · 2010-11-04T22:57:24.240Z · LW(p) · GW(p)
There are exceptions... When a child first learns that he or she is mortal, I doubt that that is a happy day for him or her. Truths are valuable, but some are rather bitter.
↑ comment by avalot · 2010-11-06T21:47:25.883Z · LW(p) · GW(p)
Yes, and I think this is the one big crucial exception... That is the one bit of knowledge that is truly evil. The one datum that is unbearable torture on the mind.
In that sense, one could define an adult mind as a normal (child) mind poisoned by the knowledge-of-death toxin. The older the mind, the more extensive the damage.
Most of us might see it more as a catalyst than a poison, but I think that's insanity justifying itself. We're all walking around in a state of deep existential panic, and that makes us weaker than children.
↑ comment by rwallace · 2010-11-07T16:39:31.515Z · LW(p) · GW(p)
Well, it's not the knowledge of death that's evil, it's the actual phenomenon -- there's not much point blaming the messenger for the bad news. Especially not now we're at the stage where we're beginning to have a chance to do something about it.
↑ comment by AlanCrowe · 2010-11-06T22:24:54.411Z · LW(p) · GW(p)
Ernest Becker agrees with you, but I always read the one-star reviews first.
For myself, I've lost touch with Becker's ontology. I'm reduced to making the lame suggestion of playing Go in tournaments in order to practice managing a limited stock of time, such as 70 years.
comment by jaimeastorga2000 · 2010-11-02T20:42:37.842Z · LW(p) · GW(p)
For to be possessed of a vigorous mind is not enough; the prime requisite is rightly to apply it. The greatest minds, as they are capable of the highest excellences, are open likewise to the greatest aberrations; and those who travel very slowly may yet make far greater progress, provided they keep always to the straight road, than those who, while they run, forsake it.
~René Descartes, Discourse on the Method
comment by MichaelGR · 2010-11-04T21:10:40.611Z · LW(p) · GW(p)
It is a profound and necessary truth that the deep things in science are not found because they are useful, they are found because it was possible to find them. -J. Robert Oppenheimer.
Replies from: bentarm, wedrifid↑ comment by bentarm · 2010-11-04T22:56:05.875Z · LW(p) · GW(p)
There are two quite different interpretations of this quote: it either says something about scientists, or something about scientific truths, and I'm not sure which is the intention.
The two messages I see are:
Scientists just enjoy seeking truths, you don't need to give them the incentive of practical applications in order for them to do science, so any truths that can be discovered will be, regardless of their usefulness.
There are an awful lot of true things. The ones that we know might not be the most useful, but they are the ones that happen to lie in the (extremely small?) subset of true things that humans are capable of understanding.
To an extent, I guess both of these are true... which one was Oppenheimer aiming at?
Replies from: Perplexed, stochastic↑ comment by Perplexed · 2010-11-05T22:54:44.040Z · LW(p) · GW(p)
[one interpretation of Oppenheimer:] There are an awful lot of true things. The ones that we know might not be the most useful, but they are the ones that happen to lie in the (extremely small?) subset of true things that humans are capable of understanding.
Quibble: Two things you might have missed:
- Oppenheimer was talking about "deep things in science", not about "truths."
- He said "possible to find them", not "possible to understand them".
↑ comment by stochastic · 2010-11-05T22:35:07.962Z · LW(p) · GW(p)
There are an awful lot of true things.

I think that many of the things that are commonly regarded as being "true" are socially constructed fictions, biases, and fallacies. Moreover, science can never attain absolute truth; it can only strive for it.
Replies from: orthonormal, Emile↑ comment by orthonormal · 2010-11-09T00:38:46.764Z · LW(p) · GW(p)
Hi stochastic, and welcome to Less Wrong!
This is actually a really important topic. I agree that there are a lot of cultural and normative claims that don't deserve to be called "true" or "false", despite their common usage as such. I'd be cautious of using the phrase "absolute truth", since it conjures up false expectations compared to the actual process of increasing confidence in models of the world.
Really relevant: The Simple Truth
P.S. Introduce yourself on the welcome page when you have a moment!
comment by Perplexed · 2010-11-03T02:54:38.801Z · LW(p) · GW(p)
David Hume was right to predict that superstition would survive for hundreds of years after his death, but how could he have anticipated that his own work would inspire Kant to invent a whole new package of superstitions? Or that the incoherent system of Marx would move vast populations to engineer their own ruin? Or that the infantile rantings of the author of Mein Kampf would be capable of bringing the whole world to war?
Perhaps we will one day succeed in immunizing our societies against such bouts of collective idiocy by establishing a social contract in which each child is systematically instructed in Humean skepticism. Such a new Emile would learn about the psychological weaknesses to which Homo sapiens is prey, and so would understand the wisdom of treating all authorities - political leaders and social role-models, academics and teachers, philosophers and prophets, poets and pop stars - as so many potential rogues and knaves, each out to exploit the universal human hunger for social status. He would therefore appreciate the necessity of doing all of his own thinking for himself. He would understand why and when to trust his neighbors. Above all, he would waste no time yearning for utopias that are incompatible with human nature.
-- Ken Binmore, in Natural Justice, p56
Replies from: MichaelVassar, PhilGoetz↑ comment by MichaelVassar · 2010-11-03T23:23:13.618Z · LW(p) · GW(p)
Science works by scientists not doing all their thinking for themselves. That's also how it fails. Getting the balance right may be hard, but no-one has really tried very hard, so it may not be. Trying to do that is largely what I see SIAI as being about.
Replies from: Perplexed, simplyeric↑ comment by Perplexed · 2010-11-04T00:10:43.246Z · LW(p) · GW(p)
Hmmm. A mathematician learning a new field thinks for himself, up to a point. Oh, he gets his ideas, theorems, and even proofs from the book, but he is supposed to verify the thinking for himself.
The same kind of thing applies to scientists. They get ideas, formulas, and even empirical data from other scientists, but they are supposed to verify the inferences and even some of the derivations themselves. At least in their own field. A neuroscientist using FMRI doesn't need to know the fine points of the portions of QED dealing with particle spins in a varying magnetic field. Nor the computer science involved in the image processing. But he does appreciate that these tools, whether he understands them in detail himself or not, are not based on tradition or authority, but instead draw their legitimacy from the work of his colleagues in those fields who definitely do think for themselves.
If the balance you seek to strike is the balance that lets you distinguish path-breaking innovation from crackpottery, I would suggest this: It is ok to try doing something that the experts think is impossible if you really understand why they are so pessimistic and you think you might understand why they are wrong.
↑ comment by simplyeric · 2010-11-04T17:35:09.212Z · LW(p) · GW(p)
I'm not sure that's true. The issue isn't what a person "thinks"...it's what a person ultimately concludes. A scientist must think for himself in order to hypothesize, no?
I think science goes wrong when scientists conclude for themselves, in the face of the actual facts on the matter.
I think what is being referenced above is how to separate information from who said it, and how.
↑ comment by PhilGoetz · 2010-11-03T03:31:14.539Z · LW(p) · GW(p)
I like the sentiment, but - instructed in Humean skepticism? Isn't that going overboard in the opposite direction?
Replies from: Perplexed↑ comment by Perplexed · 2010-11-03T04:37:49.137Z · LW(p) · GW(p)
Binmore is on something of a "Hume is God, Kant is Satan" kick in this book. Another quote I like deals with Binmore's efforts to comprehend the "categorical imperative":
It eventually dawned on me that I was reading the work of an emperor who was clothed in nothing more than the obscurity of his prose.
I share much of Binmore's enthusiasm for Hume. I don't think that rationalists have much reason to dislike Hume's skepticism. Hume was a practical man, and his famous argument against induction is far from a counsel of epistemological despair. As for instructing the young to be skeptical of gods - well it may violate the US Constitution, but then so does gun control. ;)
Nonetheless, I suspect that many people here would not care much for this particular quote in its full context - starting a couple paragraphs before my quote and continuing a paragraph further.
comment by XiXiDu · 2010-11-04T12:37:19.768Z · LW(p) · GW(p)
This is a bit long for a rationality quote and isn't really a quote but short enough and worth the read: The most poetic and convincing argument for striving for posthumanity (via aleph.se).
Replies from: Pavitra
comment by MichaelGR · 2010-11-04T21:09:50.305Z · LW(p) · GW(p)
It is still an unending source of surprise for me how a few scribbles on a blackboard or on a piece of paper can change the course of human affairs. -Stanislaw Ulam
Replies from: wedrifid↑ comment by wedrifid · 2010-11-04T21:55:52.734Z · LW(p) · GW(p)
Can they really? I have my doubts. Most of those scribbles on a blackboard were either an inevitable result of outside forces or would have been made on a different blackboard had they not been made there. (Although to be fair the butterfly and mere chance will play their part at least some of the time.)
Replies from: Perplexed, PhilGoetz, MichaelGR↑ comment by Perplexed · 2010-11-04T22:23:06.912Z · LW(p) · GW(p)
Scribbles on maps, particularly in 1815 and 1919, had some largish effects.
Replies from: DanArmak, gwern↑ comment by DanArmak · 2010-11-04T22:53:54.929Z · LW(p) · GW(p)
In 1923, England and France divided between them the previously Turkish territories of what are modern Syria, Lebanon and Israel/Palestine. They drew a pencil line on a map to mark the treaty border.
It turned out that the thickness of the pencil line itself was several hundred meters on the ground. In 1964, Israel fought a battle with Syria over that land.
People were killed because someone neglected to sharpen their pencil. That's "scribbles on a piece of paper" for you.
Ref: a book found by Google. I originally learned about this from an Israeli plaque at the Dan River preserve near the border.
Replies from: wedrifid↑ comment by wedrifid · 2010-11-04T23:02:26.287Z · LW(p) · GW(p)
People were killed because someone neglected to sharpen their pencil. That's "scribbles on a piece of paper" for you.
I suppose it would be in bad taste to find that rather amusing. Or at least to admit it.
Replies from: James_K, Drawbacks↑ comment by Drawbacks · 2010-11-23T22:23:09.074Z · LW(p) · GW(p)
"The 350-mile detour in the Trans-Siberian Railway was caused by the Tsar, who drew the proposed route using a ruler with a notch in it." -- Not 1982
Replies from: Pfft↑ comment by Pfft · 2010-12-19T00:48:27.806Z · LW(p) · GW(p)
What's the source for this? Googling "Not 1982" is not helpful... I did find the following amusing quote though:
His engineers were once consulting [Tsar Nicholas] as to the expediency of taking the line from St Petersburg to Moscow by a slight detour to avoid some very troublesome obstacles. The Tsar took up a ruler and with his pencil drew a straight line from the old metropolis. Handing back the chart he peremptorily said "There, gentlemen, that is to be the route for the line!"
"The Trans-Siberian Railway". In The Living Age, seventh series volume five, 1899
Replies from: Manfred, gwern↑ comment by Manfred · 2010-12-19T00:59:17.877Z · LW(p) · GW(p)
http://en.wikipedia.org/wiki/Not_the_Nine_O%27Clock_News#Books_and_miscellaneous
My google-fu is strong-ish. Still, not a particularly reliable source.
↑ comment by gwern · 2010-12-19T00:59:19.357Z · LW(p) · GW(p)
I wonder if Nicholas was acting in the same spirit as King Canute and likewise has been subsequently misinterpreted. (I've seen the Canute story mentioned as an example of being power-mad.) Nicholas's intention could have been something like 'Gentlemen, you were chosen for your competence in engineering and expertise in dealing with such details; I have made my general wish known to you; kindly implement it and do not bother me with what is your job.'
↑ comment by PhilGoetz · 2010-11-10T23:28:01.322Z · LW(p) · GW(p)
He could have also been thinking about the Declaration of Independence, the Declaration of the Rights of Man, and various other documents. (I'd list the Magna Carta, but it didn't really have the effect it's credited with. It was a few lines in a larger document that was more concerned with the hunting privileges of nobles than with the rights of man, and that was nullified before the year was out.)
↑ comment by MichaelGR · 2010-11-06T20:12:03.518Z · LW(p) · GW(p)
I think he had in mind things like the development of physics in the 20th century that led to the creation of the A and H bombs. I got the quote from Richard Rhodes' history of the making of the atomic bomb.
It doesn't matter exactly which blackboard it was or who wrote what; in the end, a bunch of people making calculations and experiments changed the course of human affairs pretty significantly.
comment by AlanCrowe · 2010-11-02T22:17:29.498Z · LW(p) · GW(p)
If, instead of welcoming inquiry and criticism, the admirers of a great author accept his writings as authoritative, both in their excellences and in their defects, the most serious injury is done to truth. In matters of philosophy and science, authority has ever been the great opponent of truth. A despotic calm is usually the triumph of error. In the republic of the sciences, sedition and even anarchy are beneficial in the long run to the greatest happiness of the greatest number.
William Stanley Jevons, Theory of Political Economy, 1871: p.275-6
comment by NihilCredo · 2010-11-11T05:17:59.311Z · LW(p) · GW(p)
“But for that matter, how do you explain the fact that the statues of Easter Island are megaliths exactly like the Celtic ones? Or that a Polynesian god called Ya is clearly the Yod of the Jews, as is the ancient Hungarian Io-v’, the great and good god? Or that an ancient Mexican manuscript shows the Earth as a square surrounded by sea, and in its center is a pyramid that has on its base the inscription Aztlan, which is close to Atlas and Atlantis? Why are pyramids found on both sides of the Atlantic?”
“Because it’s easier to build pyramids than spheres. Because the wind produces dunes in the shape of pyramids and not in the shape of the Parthenon.”
“I hate the spirit of the Enlightenment.”
Umberto Eco, Foucault's Pendulum
comment by Zetetic · 2010-11-04T21:00:38.616Z · LW(p) · GW(p)
Many a man has cherished for years as his hobby some vague shadow of an idea, too meaningless to be positively false; he has, nevertheless, passionately loved it, has made it his companion by day and by night, and has given to it his strength and his life, leaving all other occupations for its sake, and in short has lived with it and for it, until it has become, as it were, flesh of his flesh and bone of his bone; and then he has waked up some bright morning to find it gone, clean vanished away like the beautiful Melusina of the fable, and the essence of his life gone with it. I have myself known such a man; and who can tell how many histories of circle-squarers, metaphysicians, astrologers, and what not, may not be told in the old German story?
Charles Sanders Peirce
Replies from: Perplexedcomment by Richard_Kennaway · 2010-11-02T21:33:42.467Z · LW(p) · GW(p)
A book is a machine to think with.
I. A. Richards, "Principles of Literary Criticism"
Replies from: satt↑ comment by satt · 2010-11-02T22:39:19.260Z · LW(p) · GW(p)
Reminiscent of Umberto Eco describing the novel as "a machine for generating interpretations".
Replies from: sketerpot↑ comment by sketerpot · 2010-11-04T22:43:29.103Z · LW(p) · GW(p)
Take that idea far enough and you get something like Haibane Renmei, where there is no official interpretation -- everybody has to generate their own idea of what the show's premise is. This was frustrating the first time I watched it, since I didn't know that there wasn't going to be an explanation for everything. The second time, though, I absolutely loved it.
comment by Snowyowl · 2010-11-17T13:46:46.389Z · LW(p) · GW(p)
If it's a stupid idea and it works, then it isn't stupid.
-- French Ninja, Freefall
Puts me in mind of "Rationalists should win".
Replies from: Document↑ comment by Document · 2011-07-08T18:47:27.722Z · LW(p) · GW(p)
Or alternatively, there's something intelligent that works much better.
-- benelliott (edited to attribute)
comment by MichaelGR · 2010-11-06T18:48:52.307Z · LW(p) · GW(p)
If you can't tell whose side someone is on, they are not on yours. -Warren E. Buffett
Replies from: xamdam, Document, Dre↑ comment by Dre · 2010-11-06T21:18:05.392Z · LW(p) · GW(p)
Wouldn't this be a problem for tit for tat players going up against other tit for tat players (but not knowing the strategy of their opponent)?
Replies from: orthonormal↑ comment by orthonormal · 2010-11-09T00:41:10.108Z · LW(p) · GW(p)
Only if it's common knowledge that both players are human.
ETA: Since I got downvoted, maybe I wasn't being clear. I think that the Warren Buffett quote applies to human psychology more than to game theory in general. If outright deception were easy, it would probably become a good strategy to keep your allies in some doubt about your intentions, as a bargaining chip. But we humans don't seem to be good at pulling that off, and so ambivalence is a strong signal of opposition.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-09T01:43:45.678Z · LW(p) · GW(p)
Now that you have clarified, I wish I could downvote a second time.
Tit-for-tat is a good strategy in the iterated prisoner's dilemma regardless of whether the players are human and regardless of whether the other player is "on your side". In fact, it is pretty much taken for granted that there are no sides in the PD. Dre was downvoted by me for a complete misunderstanding of how Tit-for-tat relates to "sides". You were downvoted for continuing the confusion.
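The point can be checked with a minimal iterated-PD simulation (a sketch using the standard illustrative payoffs T=5, R=3, P=1, S=0; the function names are my own, not from the thread):

```python
# Minimal iterated Prisoner's Dilemma: (my score, their score) for each
# pair of moves, using the standard T=5, R=3, P=1, S=0 payoffs.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(s1, s2, rounds=10):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one loss, then matching
```

Tit-for-tat does well here without knowing or caring whose "side" the opponent is on; it responds only to observed moves.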
Replies from: orthonormal↑ comment by orthonormal · 2010-11-09T01:51:56.538Z · LW(p) · GW(p)
Oh, you're right- my response would have made sense talking about players in a one-shot PD with communication beforehand, but it's a non sequitur to Dre's mistaken comment. Don't know how I missed that.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-09T05:12:21.708Z · LW(p) · GW(p)
Upvoted, but even with communication beforehand, the rational move in a one-shot PD is to defect. Unless there is some way to make binding commitments, or unless there is some kind of weird acausal influence connecting the players. Regardless of whether the other player is human and rational, or silicon and dumb as a rock.
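The dominance argument behind defecting in the one-shot game can be spelled out directly from the payoff matrix (a sketch with the standard illustrative payoffs):

```python
# Row player's payoff for (my_move, their_move), standard PD values.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

# Whatever the opponent does, defecting scores strictly higher, so
# pre-play talk alone does not change the one-shot incentive.
for their_move in ('C', 'D'):
    assert PAYOFF[('D', their_move)] > PAYOFF[('C', their_move)]
print("defection strictly dominates cooperation")
```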
Replies from: Vladimir_Nesov, orthonormal, JoshuaZ↑ comment by Vladimir_Nesov · 2010-11-14T10:51:52.287Z · LW(p) · GW(p)
Upvoted, but even with communication beforehand, the rational move in a one-shot PD is to defect.
Taboo "rational".
Unless there is some way to make binding commitments, or unless there is some kind of weird acausal influence connecting the players.
Acausal control is not something additional, it's structure that already exists in a system if you know where to look for it. And typically, it's everywhere, to some extent.
Replies from: shokwave↑ comment by shokwave · 2010-11-14T13:07:36.045Z · LW(p) · GW(p)
Taboo "rational".
Highest-scoring move, adjective applied to the course that maximises fulfillment of desires.
The best move in a one-shot PD is to defect against a cooperator.
With no communication or precommitment, and with the knowledge that it is a one-shot PD, the overwhelming outcome is both defect. Adding communication to the mix creates a non-zero chance you can convince your opponent to cooperate - which increases the utility of defecting.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-11-14T14:03:41.197Z · LW(p) · GW(p)
Adding communication to the mix creates a non-zero chance you can convince your opponent to cooperate - which increases the utility of defecting.
There is a question of what will actually happen, but also more relevant questions of what will happen if you do X, for various values of X. If you convince the opponent to cooperate, it's one thing, not related to the case of convincing your opponent to cooperate if you cooperate.
Replies from: shokwave↑ comment by shokwave · 2010-11-14T14:48:57.741Z · LW(p) · GW(p)
the case of convincing your opponent to cooperate if you cooperate.
Determine what kinds of control influence your opponent, appear to also be influenced by the same, and then defect when they think you are forced into cooperating because they are forced into cooperating?
Is that a legitimate strategy, or am I misunderstanding what you mean by convincing your opponent to cooperate if you cooperate?
Replies from: Vladimir_Nesov, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-11-14T15:21:32.654Z · LW(p) · GW(p)
Determine what kinds of control influence your opponent, appear to also be influenced by the same, and then defect when they think you are forced into cooperating because they are forced into cooperating?
Couldn't parse.
↑ comment by Vladimir_Nesov · 2010-11-14T15:03:40.177Z · LW(p) · GW(p)
[W]hat [do] you mean by convincing your opponent to cooperate if you cooperate?
It's not in general possible to predict what you'll actually do, since if it were possible, you could take such predictions into consideration in deciding what to do, in particular you could decide differently as a result, invalidating the "prediction". Similarly, it's not in general possible to predict what will actually happen, without assuming what you'll decide first. It's better to ask, what is likely to happen if you decide X, than to ask just what is likely to happen. It's more useful too, since it gives you information about (acausal) consequences of your actions that can be used as basis for making decisions.
In the case of Prisoner's Dilemma, it's not very helpful to ask, what will your opponent do. What your opponent will do generally depends on what you'll do, and assuming that it doesn't is a mistake that leads to the classical conclusion that defecting is always the better option (falsified by the case of identical players that always make the same decision, with cooperation the better one). If you ask instead, what will your opponent do (1) if you cooperate, and (2) if you defect, that can sometimes give you interesting answers, such that cooperating suddenly becomes the better option. When you talk to the opponent with the intention of "convincing" them, again you are affecting both predictions about what they'll do, on both sides of your possible decision, and not just the monolithic prediction of what they'll do unconditionally. In particular, you might want to influence the probability of your opponent cooperating with you if you cooperate, without similarly affecting the probability of your opponent cooperating with you if you defect. If you affect both probabilities in the same way, then you are correct, such influence makes the decision of defecting more profitable than before. But if you affect these probabilities to a different degree, then it might turn out that the opposite is true, that the influence in question makes cooperating more profitable.
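The conditional framing can be made concrete with a toy expected-value calculation (a sketch; the payoffs are the standard illustrative ones and the probabilities are invented for illustration):

```python
# Row player's payoff for (my_move, their_move), standard PD values.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def expected_value(my_move, p_they_cooperate):
    # p_they_cooperate is conditional on my_move, per the framing above.
    p = p_they_cooperate
    return p * PAYOFF[(my_move, 'C')] + (1 - p) * PAYOFF[(my_move, 'D')]

# If the opponent's move is independent of mine (same p either way),
# defecting always comes out ahead:
print(expected_value('D', 0.5), expected_value('C', 0.5))  # 3.0 1.5
# But if my cooperating makes their cooperation much likelier (e.g. two
# identical deterministic players), the conditional EVs can reverse:
print(expected_value('C', 0.9), expected_value('D', 0.1))  # 2.7 1.4
```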
Replies from: shokwave↑ comment by shokwave · 2010-11-14T15:58:03.061Z · LW(p) · GW(p)
Ah, I see! I have been butting my head against various ideas that lead to cooperating in one-shot PDs and the like and not making any progress, it was because while I had the idea of splitting my actions into groups conditional on the opponent's action, I didn't have the concept of doing the same for my opponent.
With that in mind, I can no longer parse my previous comment either. I think I meant that I would increase their probability of cooperating if I cooperated, and have them increase my probability of cooperating if they cooperated (thus decreasing both of our probabilities of defecting if the other cooperates), and then when the probabilities have moved far enough to tell us both to cooperate, I would defect, knowing that I would score a defect-against-cooperate. But yeah, it doesn't make any sense at all, because the probabilities tell us both to cooperate.
Thanks for taking the time to explain this concept to me.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-11-14T16:25:13.789Z · LW(p) · GW(p)
(Note that probability of you making a given decision is not knowable, when you are considering it yourself while allowing this consideration to influence the decision.)
↑ comment by orthonormal · 2010-11-09T17:23:54.641Z · LW(p) · GW(p)
Perplexed, have you come across the decision theory posts here yet? You'll find them pretty interesting, I think.
LW Wiki for the Prisoner's Dilemma
LW Wiki for timeless decision theory (start with the posts- Eliezer's PDF is very long and spends more time justifying than explaining).
Essentially, this may be beyond the level of humans to implement, but there are decision theories for an AI which do strictly better than the usual causal decision theory, without being exploitable. Two of these would cooperate with each other on the PD, given a chance to communicate beforehand.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-09T20:09:38.127Z · LW(p) · GW(p)
Perplexed, have you come across the decision theory posts here yet? You'll find them pretty interesting, I think.
Yes, I have read them, and commented on them. Negatively, for the most part. If any of these ideas are ever published in the peer reviewed literature, I will be both surprised and eager to read more.
there are decision theories for an AI which do strictly better than the usual causal decision theory, without being exploitable. Two of these would cooperate with each other on the PD, given a chance to communicate beforehand.
I think that you may have been misled by marketing hype. Even the proponents of those theories admit that they do not do strictly better (or at least as good) on all problems. They do better on some problems, and worse on others. Furthermore, sharing source code only provides a guarantee that the observed source is current if that source code cannot be changed. In other words, an AI that uses this technique to achieve commitment has also forsaken (at least temporarily) the option of learning from experience.
I am intrigued by the analogy between these acausal decision theories and the analysis of Hamilton's rule in evolutionary biology. Nevertheless, I am completely mystified as to the motivation that the SIAI has for pursuing these topics. If the objective is to get two AIs to cooperate with each other there are a plethora of ways to do that already well known in the game theory canon. An exchange of hostages, for example, is one obvious way to achieve mutual enforceable commitment. Why is there this fascination with the bizarre here? Why so little reference to the existing literature?
Replies from: WrongBot, JGWeissman↑ comment by WrongBot · 2010-11-09T21:24:37.183Z · LW(p) · GW(p)
So far as I understand the situation, the SIAI is working on decision theory because they want to be able to create an AI that can be guaranteed not to modify its own decision function.
There are circumstances where CDT agents will self-modify to use a different decision theory (e.g. Parfit's Hitchhiker). If this happens (they believe), it will present a risk of goal-distortion, which is unFriendly.
Put another way: the objective isn't to get two AIs to cooperate, the objective is to make it so that an AI won't need to alter its decision function in order to cooperate with another AI. (Or any other theoretical bargaining partner.)
Does that make any sense? As a disclaimer, I definitely do not understand the issues here as well as the SIAI folks working on them.
Replies from: orthonormal, Perplexed↑ comment by orthonormal · 2010-11-09T21:43:10.882Z · LW(p) · GW(p)
I don't think that's quite right- a sufficiently smart Friendly CDT agent could self-modify into a TDT (or higher decision theory) agent without compromising Friendliness (albeit with the ugly hack of remaining CDT with respect to consequences that happened causally before the change).
As far as I understand SIAI, the idea is that decision theory is the basis of their proposed AI architecture, and they think it's more promising than other AGI approaches and better suited to Friendliness content.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-09T21:50:46.006Z · LW(p) · GW(p)
I don't think that's quite right- a sufficiently smart Friendly CDT agent could self-modify into a TDT (or higher decision theory) agent without compromising Friendliness (albeit with the ugly hack of remaining CDT with respect to consequences that happened causally before the change).
That sounds intriguing also. Again, a reference to something written by someone who understands it better might be helpful so as to make some sense of it.
Replies from: cousin_it, orthonormal↑ comment by cousin_it · 2010-11-09T23:48:12.604Z · LW(p) · GW(p)
Maybe it would be helpful to you to think of self-modifications and alternative decision theories as unrestricted precommitment. If you had the ability to irrevocably precommit to following any decision rule in the future, which rule would you choose? Surely it wouldn't be pure CDT, because you can tractably identify situations where CDT loses.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T00:34:45.002Z · LW(p) · GW(p)
you can tractably identify situations where CDT loses.
"Tractably" is a word that I find a bit unexpected in this context. What do you mean by it?
"Situations where CDT loses." Are we talking about real-world-ish situations here? Situations in which causality applies? Situations in which the agents are free rather than being agents whose decisions have already been made for them by a programmer at some time in the past? What kind of situations do you have in mind?
And what do you mean by "loses"? Loses to who or what? Loses to agents that can foresee their opponent's plays? Agents that have access to information channels not available to the CDT agent? Just what information channels are allowed? Why those, and not others?
ETA: And that "Surely it wouldn't be CDT ... because you can identify ..." construction simply begs for completion with "Surely it would be ... because you can't identify ...". Do you have a candidate? Do you have a proof of "you can't identify situations where it loses". If not, what grounds do you have for criticizing?
Replies from: None, WrongBot, cousin_it↑ comment by [deleted] · 2010-11-10T02:31:30.312Z · LW(p) · GW(p)
CDT still loses to TDT in Newcomb's problem if Omega can predict your actions with better than 50.05% accuracy. You can't get out of this by claiming that Omega has access to unrealistic information channels, because that level of accuracy seems fairly realistic to me.
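The 50.05% figure can be verified with the conventional Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one); exact rational arithmetic avoids floating-point noise at the break-even point:

```python
from fractions import Fraction

def ev_one_box(p):
    # p = probability the predictor is right about your choice
    return p * 1_000_000

def ev_two_box(p):
    return (1 - p) * 1_000_000 + 1_000

p = Fraction(5005, 10000)  # 50.05%
print(ev_one_box(p) == ev_two_box(p))  # True: exactly the break-even accuracy
print(ev_one_box(Fraction(51, 100)) > ev_two_box(Fraction(51, 100)))  # True
```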
↑ comment by WrongBot · 2010-11-10T02:12:08.751Z · LW(p) · GW(p)
Situations in which the agents are free rather than being agents whose decisions have already been made for them by a programmer at some time in the past?
Free from what? Causality? This sounds distressingly like you are relying on some notion of "free will".
(Apologies if I'm misreading you.)
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T02:53:07.602Z · LW(p) · GW(p)
I am relying on a notion of free will.
I understand that every normative decision theory adopts the assumption (convenient fiction if you prefer) that the agent being advised is acting of "his own free will". Otherwise, why bother advising?
Being a compatibilist, as I understand Holy Scripture (i.e. The Sequences) instructs me to be, I see no incompatibility between this "fiction" of free will and the similar fiction of determinism. They model reality at different levels.
For certain purposes, it is convenient to model myself and other "free agents" as totally free in our decisions, but not completely free in carrying out those decisions. For example, my free will ego may decide to quit smoking, but my determined id has some probability of overruling that decision.
Replies from: WrongBot, nshepperd↑ comment by WrongBot · 2010-11-10T03:12:05.547Z · LW(p) · GW(p)
Why the distinction between agents which are free and agents which have had their decisions made for them by a programmer, then? Are you talking about cases in which specific circumstances have hard-coded behavioral responses? Every decision every agent makes is ultimately made for it by the agent's programmer; I suppose I'm wondering where you draw the line.
As a side note, I feel very uncomfortable seeing the sequences referred to as inviolable scripture, even in jest. In my head, it just screams "oh my god how could anyone ever be doing it this wrong arghhhhhh."
I'm still trying to figure out what I think of that reaction, and do not mention it as a criticism. I think.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T04:29:38.586Z · LW(p) · GW(p)
Why the distinction between agents which are free and agents which have had their decisions made for them by a programmer, then? Are you talking about cases in which specific circumstances have hard-coded behavioral responses? Every decision every agent makes is ultimately made for it by the agent's programmer; I suppose I'm wondering where you draw the line.
I make the distinction because the distinction is important. The programmer makes decisions at one point in time, with his own goals and/or utility functions, and his own knowledge of the world. The agent makes decisions at a different point in time, based on different values and different knowledge of the world. A decision theory which advises the programmer is not superior to a decision theory which advises the agent. Those two decision theories are playing different games.
↑ comment by nshepperd · 2010-11-10T03:14:28.319Z · LW(p) · GW(p)
"Totally free" sounds like too free. You're not free to actually decide at time T to "decide X at time T+1" and then actually decide Y at time T+1, since that is against the laws of physics.
It's my understanding that what goes through your head when you actually decide X at time T+1 is (approximately) what we call TDT. Or you can stick to CDT and not be able to make decisions for your future self.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T04:03:34.955Z · LW(p) · GW(p)
I upvoted this because it seems to contain a grain of truth, but I'm nervous that someone before me had downvoted it. I don't know whether that was because it actually is just completely wrong about what TDT is all about, or because you went a bit over the top with "against the laws of physics".
↑ comment by cousin_it · 2010-11-10T00:38:25.470Z · LW(p) · GW(p)
Situations where CDT loses are precisely those situations where credible precommitment helps you, and inability to credibly precommit hurts you. There's no shortage of those in game theory.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T00:54:53.742Z · LW(p) · GW(p)
Ok, those are indeed a reasonable class of decisions to consider. Now, you say that CDT loses. Ok, loses to what? And presumably you don't mean loses to opponents of your preferred decision theory. You mean loses in the sense of doing less well in the same situation. Now, presumably that means that both CDT and your candidate are playing against the same game opponent, right?
I think you see where I am going here, though I can spell it out if you wish. In claiming the superiority of the other decision theory you are changing the game in an unfair way by opening a communication channel that didn't exist in the original game statement and which CDT has no way to make use of.
Replies from: cousin_it↑ comment by cousin_it · 2010-11-10T01:04:52.254Z · LW(p) · GW(p)
Well, yeah, kind of, that's one way to look at it. Reformulate the question like this: what would CDT do if that communication channel were available? What general precommitment for future situations would CDT adopt and publish? That's the question TDT people are trying to solve.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T01:18:11.166Z · LW(p) · GW(p)
what would CDT do if that communication channel were available?
The simplest answer that moves this conversation forward would be "It would pretend to be a TDT agent that keeps its commitments, whenever that act of deception is beneficial to it. It would keep accurate statistics on how often agents claiming to be TDT agents actually are TDT agents, and adjust its priors accordingly."
Now it is your turn to explain why this strategy violates the rules, whereas your invention of a deception-free channel did not.
↑ comment by orthonormal · 2010-11-09T23:18:59.874Z · LW(p) · GW(p)
I'm going to have to refer you to Eliezer's TDT document for that. (If you're OK with starting in medias res, the first mention of this is on pages 22-23, though there it's just specialized to Newcomb's Dilemmas; see pages 50-52 for an example of the limits of this hack. Elsewhere he's argued for the more general nature of the hack.)
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T00:00:48.596Z · LW(p) · GW(p)
Ok thanks.
I'm coming to realize just how much of this stuff derives from Eliezer's insistence on reflective consistency of a decision theory. Given any decision theory, Eliezer will find an Omega to overthrow it.
But doesn't a diagonal argument show that no decision theory can be reflectively consistent over all test data presented by a malicious Omega? Just as there is no enumeration of the reals, isn't there a game which can make any specified rational agent regret its rationality? Omega holds all the cards. He can always make you regret your choice of decision theory.
Replies from: jimrandomh, NihilCredo↑ comment by jimrandomh · 2010-11-10T00:34:41.572Z · LW(p) · GW(p)
Just as there is no enumeration of the reals, isn't there a game which can make any specified rational agent regret its rationality? Omega holds all the cards. He can always make you regret your choice of decision theory.
No. We can ensure that no such problem exists if we assume that (1) only the output decisions are used, not any internals; and (2) every decision is made with access to the full problem statement.
Replies from: bentarm↑ comment by bentarm · 2010-11-10T01:33:53.942Z · LW(p) · GW(p)
I'm not entirely sure what "every decision is made with full access to the problem statement" means, but I can't see how it can possibly get around the diagonalisation argument. Basically, Omega just says "I simulated your decision on problem A, on which your algorithm outputs something different from algorithm X, and give you a shiny black ferrari iff you made the same decision as algorithm X"
As cousin_it pointed out last time I brought this up, Caspian made this argument in response to the very first post on the Counterfactual Mugging. I've yet to see anyone point out a flaw in it as an existence proof.
As far as I can see the only premise needed for this diagonalisation to work is that your decision theory doesn't agree with algorithm X on all possible decisions, so just make algorithm X "whatever happens, recite the Bible backwards 17 times".
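The construction can be sketched in a few lines of purely illustrative Python (all names here are invented for the example, not drawn from any formal treatment):

```python
# For ANY decision procedure f that disagrees with some fixed algorithm x
# on a problem A, Omega can build a game that rewards agreeing with x on A,
# so f loses that game by construction.

def x(problem):
    """The arbitrary benchmark: 'recite the Bible backwards 17 times'."""
    return "recite"

def make_nemesis_game(x, problem_a):
    """Omega's game: payoff 1 iff the agent's answer on problem_a matches x's."""
    def payoff(agent):
        return 1 if agent(problem_a) == x(problem_a) else 0
    return payoff

def sensible_agent(problem):
    """Any otherwise-reasonable agent that happens to disagree with x on A."""
    return "maximize expected utility"

game = make_nemesis_game(x, problem_a="A")
assert game(x) == 1               # the 'silly' benchmark wins the nemesis game
assert game(sensible_agent) == 0  # the reasonable agent loses by construction
```

The only ingredient used is that f and x differ somewhere, which is the premise stated above.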
Replies from: jimrandomh, Sniffnoy, NihilCredo↑ comment by jimrandomh · 2010-11-10T01:37:51.833Z · LW(p) · GW(p)
I'm not entirely sure what "every decision is made with full access to the problem statement" means, but I can't see how it can possibly get around the diagonalisation argument. Basically, Omega just says "I simulated your decision on problem A, on which your algorithm outputs something different from algorithm X, and give you a shiny black ferrari iff you made the same decision as algorithm X"
In that case, your answer to problem A is being used in a context other than problem A. That other context is the real problem statement, and you didn't have it when you chose your answer to A, so it violates the assumption.
↑ comment by Sniffnoy · 2010-11-10T01:37:27.179Z · LW(p) · GW(p)
Yeah, that definitely violates the "every decision is made with full access to the problem statement" condition. The outcome depends on your decision on problem A, but when making your decision on problem A you have no knowledge that your decision will also be used for this purpose.
Replies from: bentarm↑ comment by bentarm · 2010-11-10T01:58:43.735Z · LW(p) · GW(p)
I don't see how this is useful. Let's take a concrete example: in decision problem A, Omega offers you the choice of $1,000,000, or being slapped in the face with a wet fish. Which would you like your decision theory to choose?
Now, No-mega can simulate you, say, 10 minutes before you find out who he is, and give you 3^^^3 utilons iff you chose the fish-slapping. So your algorithm has to include some sort of prior on the existence of "fish-slapping" No-megas.
My algorithm "always get slapped in the face with a wet fish where that's an option", does better than any sensible algorithm on this particular problem, and I don't see how this problem is noticeably less realistic than any others.
In other words, I guess I might be willing to believe that you can get around diagonalisation by posing some stringent limits on what sort of all-powerful Omegas you allow (can anyone point me to a proof of that?) but I don't see how it's interesting.
Replies from: jimrandomh↑ comment by jimrandomh · 2010-11-10T02:09:04.318Z · LW(p) · GW(p)
Now, No-mega can simulate you, say, 10 minutes before you find out who he is, and give you 3^^^3 utilons iff you chose the fish-slapping. So your algorithm has to include some sort of prior on the existence of "fish-slapping" No-megas.
Actually, no, the probability of fish-slapping No-megas is part of the input given to the decision theory, not part of the decision theory itself. And since every decision theory problem statement comes with an implied claim that it contains all relevant information (a completely unavoidable simplifying assumption), this probability is set to zero.
Decision theory is not about determining what sorts of problems are plausible, it's about getting from a fully-specified problem description to an optimal answer. Your diagonalization argument requires that the problem not be fully specified in the first place.
↑ comment by NihilCredo · 2010-11-10T02:55:57.568Z · LW(p) · GW(p)
"I simulated your decision on problem A, on which your algorithm outputs something different from algorithm X, and give you a shiny black ferrari iff you made the same decision as algorithm X"
This is a no-choice scenario. If you say that the Bible-reciter is the one that will "win" here, you are using the verb "to win" with a different meaning from the one used when we say that a particular agent "wins" by making the choice that leads to the best outcome.
↑ comment by NihilCredo · 2010-11-10T00:46:56.934Z · LW(p) · GW(p)
But doesn't a diagonal argument show that no decision theory can be reflectively consistent over all test data presented by a malicious Omega?
With the strong disclaimer that I have no background in decision theory beyond casually reading LW...
I don't think so. The point of simulation (Omega) problems, to me, doesn't seem to be to judo your intelligence against yourself; rather, it is to "throw your DT off the scent", building weird connections between events (weird, but still vaguely possible, at least for AIs), that a particular DT isn't capable of spotting and taking into account.
My human, real-life decision theory can be summarised as "look at as many possible end-result worlds as I can, and at what actions will bring them into being; evaluate how much I like each of them; then figure out which actions are most efficient at leading to the best worlds". But that doesn't exactly fly when you're programming a computer; you need something that can be fully formalised, and that is where those strange Omega scenarios are useful, because your code must get it right "on autopilot", it cannot improvise a smarter approach on the spot - the formula is on paper, and if it can't solve a given problem, but another one can, it means that there is room for improvement.
In short, DT problems are just clever software debugging.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T01:08:01.296Z · LW(p) · GW(p)
I agreed with everything you said after "I don't think so". So I am left confused as to why you don't think so.
You analogize DT problems as test data used to determine whether we should accept or reject a decision theory. I am claiming that our requirements (i.e. "reflective consistency") are so unrealistic that we will always be able to find test data forcing us to reject. Why do you not think so?
Replies from: NihilCredo, Vladimir_Nesov↑ comment by NihilCredo · 2010-11-10T01:19:17.451Z · LW(p) · GW(p)
Because I suspect that there are only so many functionally different types of connections between events (at the very least, I see no hint that there must be infinitely many) and once you've found them all you will have the possibility of writing a DT that can't be led to corner itself into suboptimal outcomes due to blind spots.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T01:36:00.660Z · LW(p) · GW(p)
at the very least, I see no hint that there must be infinite ones
Am I correct in interpreting this as "infinitely many of them"? If so, I am curious as to what you mean by "functionally different types of connections between events". Could you provide an example of some "types of connections between events"? Functionally different ones to be sure.
Presumably, the relevance must be your belief that decision theories differ in just how many of these different kinds of connections they handle correctly. Could you illustrate this by pointing out how the decision theory of your choice handles some types of connections, and why you have confidence that it does so correctly?
Replies from: NihilCredo↑ comment by NihilCredo · 2010-11-10T02:17:51.686Z · LW(p) · GW(p)
Am I correct in interpreting this as "infinitely many of them"?
Oops, yes. Fixed.
If so, I am curious as to what you mean by "functionally different types of connections between events". Could you provide an example of some "types of connections between events"? Functionally different ones to be sure.
CDT can 'see' the classical, everyday causal connections that are marked in formulas with the symbol ">" (and I'd have to spend several hours reading at least the Stanford Encyclopaedia before I could give you a confident definition of that), but it cannot 'see' the connection in Newcomb's problem between the agent's choice of boxes and the content of the opaque box (sometimes called 'retrocausality').
Presumably, the relevance must be your belief that decision theories differ in just how many of these different kinds of connections they handle correctly. Could you illustrate this by pointing out how the decision theory of your choice handles some types of connections, and why you have confidence that it does so correctly?
I don't have a favourite formal decision theory, because I am not sufficiently familiar with the underlying math and with the literature of discriminating scenarios to pick a horse. If you're talking about the human decision "theory" of mine I described above, it doesn't explicitly do that; the key hand-waving passage is "figure out which actions are most efficient at leading to the best worlds", meaning I'll use whatever knowledge I currently possess to estimate how big is the set of Everett branches where I do X and get A, compared to the set of those where I do X and get B. (For example, six months ago I hadn't heard of the concept of acausal connections and didn't account for them at all while plotting the likelihoods of possible futures, whereas now I do - at least technically; in practice, I think that between human agents they are a negligible factor. For another example, suppose that some years from now I became convinced that the complexity of human minds, and the variability between different ones, were much greater than I previously thought; then, given the formulation of Newcomb's problem where Omega isn't explicitly defined as a perfect simulator and all we know is that it has had a 100% success rate so far, I would suitably increase my estimation of the chances of Omega screwing up and making two-boxing profitable.)
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T03:38:25.629Z · LW(p) · GW(p)
CDT can 'see' the classical, everyday causal connections that are marked in formulas with the symbol ">" (and I'd have to spend several hours reading at least the Stanford Encyclopaedia before I could give you a confident definition of that), but it cannot 'see' the connection in Newcomb's problem between the agent's choice of boxes and the content of the opaque box (sometimes called 'retrocausality').
Ok, so if I understand you, there are only some finite number of valid kinds of connections between events and when we have all of them incorporated - when our decision theory can "see" each of them - we are then all done. We have the final, perfect decision theory (FPDT).
But what do you do then when someone - call him Yuri Geller - comes along and points out that we left out one important kind of connection: the "superspooky" connection. And then he provides some very impressive statistical evidence that this connection exists and sets up games in front of large (paying) audiences in which FPDT agents fail to WIN. He then proclaims the need for SSPDT.
Or, if you don't buy that, maybe you will prefer this one. Yuri Geller doesn't really exist. He is a thought experiment. Still the existence of even the possibility of superspooky connections proves that they really do exist and hence that we need to have SADT - Saint Anselm's Decision Theory.
Ok, I've allowed my sarcasm to get the better of me. But the question remains - how are you ever going to know that you have covered all possible kinds of connections between events?
Replies from: NihilCredo↑ comment by NihilCredo · 2010-11-10T04:26:09.057Z · LW(p) · GW(p)
But the question remains - how are you ever going to know that you have covered all possible kinds of connections between events?
You can't, I guess. Within an established mathematical model, it may be possible to prove that a list of possible configurations of event pairs {A, B} is exhaustive. But the model may always prove in need of expansion or refinement - whether because some element gets understood and modelled at a deeper level (eg the nature of 'free' will) or, more worryingly, because of paradigm shifts about physical reality (eg turns out we can time travel).
↑ comment by Vladimir_Nesov · 2010-11-10T01:18:27.571Z · LW(p) · GW(p)
Decision theories should usually be seen as normative, not descriptive. How "realistic" something is, is not very important, especially for thought experiments. Decision theory cashes out where you find a situation that can indeed be analyzed with it, and where you'll secure a better outcome by following theory's advice. For example, noticing acausal control has advantages in many real-world situations (Parfit's Hitchhiker variants). Eliezer's TDT paper discusses this towards the end of Part I.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T02:06:31.036Z · LW(p) · GW(p)
I believe you misinterpreted my "unrealistic requirements". A better choice of words would have been "unachievably stringent requirements". I wasn't complaining that Omega and the like are unrealistic. At least not here.
The version I have of Eliezer's TDT paper doesn't have a "Part I". It is dated "September 2010" and has 112 pages. Is there a better version available?
I don't understand your other comments. Or, perhaps more accurately, I don't understand what they were in response to.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-11-14T10:48:00.031Z · LW(p) · GW(p)
The version I have of Eliezer's TDT paper doesn't have a "Part I". It is dated "September 2010" and has 112 pages. Is there a better version available?
"Part I" is chapters 1-9. (This concept is referred to in the paper itself.)
↑ comment by Perplexed · 2010-11-09T21:48:24.782Z · LW(p) · GW(p)
There are circumstances where CDT agents will self-modify to use a different decision theory (e.g. Parfit's Hitchhiker).
Does that make any sense?
Not to me. But a reference might repair that deficiency on my part.
Replies from: WrongBot↑ comment by WrongBot · 2010-11-09T22:01:02.325Z · LW(p) · GW(p)
See Eliezer's posts on Newcomb's Problem and regret of rationality and TDT problems he can't solve.
(Incidentally, I found those reference in about 30 seconds, starting from the LW Wiki page on Parfit's Hitchhiker.)
Replies from: Perplexed↑ comment by Perplexed · 2010-11-09T23:30:56.629Z · LW(p) · GW(p)
Ah! Thank you. I see now. The circumstance in which a CDT agent will self modify to use a different decision theory are that:
- The agent was programmed by Eliezer Yudkowsky and hence is just looking for an excuse to self-modify.
- The agent is provided with a prior leading it to be open to the possibility of omnicient, yet perverse agents bearing boxes full of money.
- The agent is supplied with (presumably faked) empirical data leading it to believe that all such omniscient agents reward one-boxers.
- Since the agent seeks reflective equilibrium (because programmed by aforesaid Yudkowsky), and since it knows that CDT requires two boxing, and since it has no reason to doubt that causality is important in this world, it makes exactly the change to its decision theory that seems appropriate. It continues to use CDT except on Newcomb problems, where it one boxes. That is, it self-modifies to use a different decision theory, which we can call CDTEONPWIOB.
Well, ok, though I wouldn't have said that these are cases where CDT agents do something weird. These are cases where EYDT agents do something weird.
I apologize if it seems that the target of my sarcasm is you WrongBot. It is not.
EY has deluded himself into thinking that reflective consistency is some kind of gold standard of cognitive stability. And then he uses reflective consistency as a lever by which completely fictitious data can uproot the fundamental algorithms of rationality. Which would be fine, except that he has apparently convinced a lot of smart people here that he knows what he is talking about. Even though he has published nothing on the topic. Even though other smart people like Robin tell him that he is trying to solve an already solved problem.
I would say more but ...
This manuscript was cut off here, but interested readers are suggested to look at these sources for more discussion:

Bibliography

Gibbard, A., and Harper, W. L. (1978), "Counterfactuals and Two Kinds of Expected Utility", in C. A. Hooker, J. J. Leach, and E. F. McClennen (eds.), Foundations and Applications of Decision Theory, vol. 1, Reidel, Dordrecht, pp. 125-162.
Replies from: Sniffnoy, timtyler, WrongBot↑ comment by Sniffnoy · 2010-11-10T01:04:21.326Z · LW(p) · GW(p)
Reflective consistency is not a "gold standard". It is a basic requirement. It should be easy to come up with terrible, perverse decision theories that are reflectively consistent (EY does so, sort of, in his TDT outline, though it's not exactly serious / thorough). The point is not that reflective consistency is a sign you're on the right track, but that a lack of it is a sign that something is really wrong, that your decision theory is perverse. If using your decision theory causes you to abandon that same decision theory, it can't have been a very good decision theory.
Consider it as being something like monotonicity in a voting system; it's a weak requirement for weeding out things that are clearly bad. (Well, perhaps not everyone would agree IRV is "clearly bad", but... it isn't even monotonic!) It just happens that in this case evidently nobody noticed before that this would be a good condition to satisfy and hence didn't try. :)
↑ comment by WrongBot · 2010-11-10T03:21:12.308Z · LW(p) · GW(p)
TDT gets better outcomes than CDT when faced with Newcomb's Problem, Parfit's Hitchhiker, and the True Prisoner's Dilemma.
When does CDT outperform TDT? If the answer is "never", as it currently seems to be, why wouldn't a CDT agent self-modify to use TDT?
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T03:57:00.193Z · LW(p) · GW(p)
why wouldn't a CDT agent self-modify to use TDT?
Because it can't find a write-up that explains how to use it?
Perhaps you can answer the questions that I asked here: what play does TDT make in the game of Chicken? Can you point me to a description of TDT that would allow me to answer that question for myself?
Replies from: WrongBot↑ comment by WrongBot · 2010-11-10T04:26:29.117Z · LW(p) · GW(p)
Suppose I'm an agent implementing TDT. My decision in Chicken depends on how much I know about my opponent.
- If I know my opponent implements the same decision procedure I do (because I have access to its source code, say), and my opponent has this knowledge about me, I swerve. In this case, my opponent and I are in symmetrical positions and its choice is fully determined by mine; my choice is between payoffs of (0,0) and (-10,-10).
- Else, I act identically to a CDT agent.
As Eliezer says here, the one-sentence version of TDT is "Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation."
Replies from: Sniffnoy, Perplexed, Perplexed↑ comment by Sniffnoy · 2010-11-10T04:51:40.483Z · LW(p) · GW(p)
- If I know my opponent implements the same decision procedure I do (because I have access to its source code, say), and my opponent has this knowledge about me, I swerve. In this case, my opponent and I are in symmetrical positions and its choice is fully determined by mine; my choice is between payoffs of (0,0) and (-10,-10).
I'm not sure this is right. Isn't there a correlated equilibrium that does better?
Replies from: WrongBot, Sniffnoy↑ comment by WrongBot · 2010-11-10T05:21:29.185Z · LW(p) · GW(p)
I think we're looking at different payoff matrices. I was using the formulation of Chicken that rewards
  |    C    |    D
C |  +0, +0 |  -1, +1
D |  +1, -1 | -10, -10
which doesn't have a correlated equilibrium that beats (C,C).
Using the payoff matrix Perplexed posted here, there is indeed a correlated equilibrium, which I believe the TDT agents would arrive at (given a source of randomness). My bad for not specifying the exact game I was talking about.
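A quick brute-force check of the claim about the first matrix (an illustrative sketch, not a general correlated-equilibrium solver; it only searches symmetric distributions on a coarse grid):

```python
import itertools

# Row player's payoffs in the Chicken variant above
# (the column player's payoffs are the mirror image):
U = {("C", "C"): 0, ("C", "D"): -1, ("D", "C"): 1, ("D", "D"): -10}

def row_payoff(p_cc, p_cd_pair, p_dd):
    """Expected row payoff under a symmetric correlated distribution:
    p_cd_pair is the total weight split evenly between (C,D) and (D,C)."""
    return (p_cc * U["C", "C"]
            + p_cd_pair * (U["C", "D"] + U["D", "C"]) / 2
            + p_dd * U["D", "D"])

# Grid search over all symmetric distributions, in steps of 1/20:
best = max(
    row_payoff(a / 20, b / 20, c / 20)
    for a, b, c in itertools.product(range(21), repeat=3)
    if a + b + c == 20
)
assert best == 0  # no symmetric correlated mixture beats always playing (C, C)
```

Any weight on (D,D) strictly hurts, and the (C,D)/(D,C) pairs average to the same payoff as (C,C), which is why nothing improves on mutual cooperation in this variant.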
Replies from: Sniffnoy, Perplexed↑ comment by Perplexed · 2010-11-10T06:12:30.908Z · LW(p) · GW(p)
Two questions:
- Why do you believe the TDT agents would find the correlated equilibrium? Your previous statement and Eliezer quote suggested that a pair of TDT agents would always play symmetrically in a symmetric game. No "spontaneous symmetry breaking".
- Even without a shared random source, there is a Nash mixed equilibrium that is also better than symmetric cooperation. Do you believe TDT would play that if there were no shared random input?
↑ comment by WrongBot · 2010-11-10T08:40:21.800Z · LW(p) · GW(p)
In a symmetric game, TDT agents choose symmetric strategies. Without a source of randomness, this entails playing symmetrically as well.
I'm not sure why you're talking about shared random input. If both agents get the same input, they can both be expected to treat it in the same way and make the same decision, regardless of the input's source. Each agent needs an independent source of randomness in order to play the mixed equilibrium; if my strategy is to play C 30% of the time, I need to know whether this iteration is part of that 30%, which I can't do deterministically because my opponent is simulating me.
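A tiny sketch of why deterministic mutual simulators can't mix independently (illustrative Python; the shared PRNG seed stands in for "any input both agents see"):

```python
import random

def play(seed, p_defect=0.3):
    """An agent that 'mixes' by drawing from a PRNG seeded by its inputs.

    An opponent simulating this agent runs the same code on the same inputs,
    so it gets the identical draw: the two plays are perfectly correlated
    rather than independent, and the mixed equilibrium is unreachable.
    """
    rng = random.Random(seed)
    return "D" if rng.random() < p_defect else "C"

# Two mutually-simulating deterministic agents share every input, so they
# always move identically: the intended 30/70 mixture never decorrelates.
rounds = [(play(s), play(s)) for s in range(1000)]
assert all(a == b for a, b in rounds)  # never an asymmetric (C, D) outcome
```

Only a private randomness source, invisible to the opponent's simulation, breaks the correlation.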
↑ comment by Sniffnoy · 2010-11-10T20:38:51.281Z · LW(p) · GW(p)
Yeah, I think any use of correlated equilibrium here is wrong - that requires a shared random source. I think in this case we just get symmetric strategies, i.e., it reduces to superrationality, where they each just get their own private random source.
↑ comment by Perplexed · 2010-11-10T15:33:33.139Z · LW(p) · GW(p)
I'm not sure why you're talking about shared random input.
Sorry if this was unclear. It was a reference to the correlated pair of random variables used in a correlated equilibrium. I was saying that even without such a correlated pair, you may presume the availability of independent random variables which would allow a Nash equilibrium - still better than symmetric play in this game.
↑ comment by Sniffnoy · 2010-11-10T20:15:29.441Z · LW(p) · GW(p)
Gah, wait. I feel dumb. Why would TDT find correlated equilibria? I think I had the "correlated equilibrium" concept confused. A correlated equilibrium would require a public random source, which two TDTers won't have.
Replies from: steven0461↑ comment by steven0461 · 2010-11-10T20:23:29.351Z · LW(p) · GW(p)
Digits of pi are kind of like a public random source.
Replies from: Sniffnoy↑ comment by Sniffnoy · 2010-11-10T20:41:35.136Z · LW(p) · GW(p)
Ignoring the whole pi-is-not-known-to-be-normal thing, how do you determine which digit of pi to use when you can't actually communicate and you have no idea how many digits of pi the other player may already know?
Replies from: steven0461↑ comment by steven0461 · 2010-11-10T20:51:06.528Z · LW(p) · GW(p)
Same way you meet up in New York with someone you've never talked to: something like Schelling points. I'm not sure that answer works in practice.
↑ comment by Perplexed · 2010-11-10T04:51:45.904Z · LW(p) · GW(p)
Thank you. I hope you realize that you have provided an example of a game in which CDT does better than TDT. For example, in the game with the payoff matrix shown below, there is a mixed strategy Nash equilibrium which is better than the symmetric cooperative result.
  |   C  |   D
C | 3, 3 | 2, 7
D | 7, 2 | 0, 0

Replies from: WrongBot
↑ comment by Perplexed · 2010-11-10T06:12:37.321Z · LW(p) · GW(p)
So TDT is different from CDT only in cases where the game is perfectly symmetric? If you are playing a game that is roughly the symmetric PD, except that one guy's payoffs are shifted by a tiny +epsilon, then they should both defect?
Replies from: WrongBot↑ comment by WrongBot · 2010-11-10T08:26:24.312Z · LW(p) · GW(p)
TDT is different from CDT whenever one needs to consider the interaction of multiple decisions made using the same TDT-based decision procedure. This applies both to competitions between agents, as in the case of Chicken, and to cases where an agent needs to make credible precommitments, as in Newcomb's Problem.
In the case of an almost-symmetric PD, the TDT agents should still cooperate. To change that, you'd have to make the PD asymmetrical enough that the agents were no longer evaluating their options in the same way. If a change is small enough that a CDT agent wouldn't change its strategy, TDT agents would also ignore it.
This doesn't strike me as the world's greatest explanation, but I can't think of a better way to formulate it. Please let me know if there's something that's still unclear.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-10T15:56:20.831Z · LW(p) · GW(p)
If a change is small enough that a CDT agent wouldn't change its strategy, TDT agents would also ignore it.
This strikes me as a bit bizarre. You test whether a warped PD is still close enough to symmetric by asking whether a CDT agent still defects in order to decide whether a TDT agent should still cooperate? Are you sure you are not just making up these rules as you go?
Please let me know if there's something that's still unclear.
Much is unclear and very little seems to be coherently written down. What amazes me is that there is so much confidence given to something no one can explain clearly. So far, the only stable thing in your description of TDT is that it is better than CDT.
↑ comment by JGWeissman · 2010-11-09T20:19:56.104Z · LW(p) · GW(p)
They do better on some problems, and worse on others.
Do you have an example of a problem on which CDT or EDT does better than TDT?
Replies from: Perplexed↑ comment by Perplexed · 2010-11-09T20:25:35.226Z · LW(p) · GW(p)
I have yet to see a description of TDT which allows me to calculate what TDT does on an arbitrary problem. But I do know that I have seen long lists from Eliezer of problems that TDT does not solve that he thinks it ought to be improved so as to solve.
Replies from: orthonormal, JGWeissman↑ comment by orthonormal · 2010-11-09T21:47:31.119Z · LW(p) · GW(p)
The world isn't sufficiently formalized for us to meet that standard for any decision theory (though we come closer with CDT and TDT than with EDT, in my opinion). However, cousin_it has a few recent posts on formalized situations where an agent of a more TDT (actually, UDT) type does strictly better than a CDT one in the same situation. I don't know of any formalization (or any fuzzy real-world situation) where the opposite is true.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-09T22:05:53.075Z · LW(p) · GW(p)
I apparently misled you by using that word "arbitrary". I'm not asking for solutions to soft problems that are difficult to formalize. Simply solutions to the standard kinds of games already formalized in game theory. For example, the game of Chicken. Can anyone point me to a description that tells me what play TDT would make in this game? Or what mixed strategy it would use? Both assuming and not assuming the reading of each other's code.
ETA: Slightly more interesting than the payoff matrix shown in the wikipedia article is the case when the payoff for a win is 2 units, with a loss still costing only 1 unit. This means that in the iterated version, the negotiated solution would be to alternate wins. But we are interested in the one-shot case.
Can TDT find a correlated equilibrium? If not, which Nash equilibrium does it pick? Or does it always chicken out? Where can I learn this information?
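For concreteness, here is a small Python sketch of the modified Chicken game described above (win = 2 units, loss = 1 unit). The mutual-crash payoff is not specified in the comment, so the -10 below is an assumed placeholder; the qualitative picture is the same for any sufficiently bad crash value.

```python
from fractions import Fraction

# Payoff matrix for the modified Chicken game: a win pays 2 units, a
# loss costs 1. The mutual-crash payoff is NOT given in the comment, so
# -10 is an assumed placeholder.
SWERVE, STRAIGHT = 0, 1
CRASH = -10
# payoff[row][col] = (row player's payoff, column player's payoff)
payoff = [
    [(0, 0), (-1, 2)],         # row swerves
    [(2, -1), (CRASH, CRASH)]  # row goes straight
]

def pure_nash_equilibria():
    """All pure profiles where neither player gains by deviating alone."""
    eqs = []
    for r in (SWERVE, STRAIGHT):
        for c in (SWERVE, STRAIGHT):
            row_best = all(payoff[r][c][0] >= payoff[r2][c][0] for r2 in (0, 1))
            col_best = all(payoff[r][c][1] >= payoff[r][c2][1] for c2 in (0, 1))
            if row_best and col_best:
                eqs.append((r, c))
    return eqs

def mixed_equilibrium_p_straight():
    """Probability of STRAIGHT that leaves the opponent indifferent:
    solve -p = 2 + (CRASH - 2)*p, giving p = 2 / (1 - CRASH)."""
    return Fraction(2, 1 - CRASH)

print(pure_nash_equilibria())          # the two asymmetric pure equilibria
print(mixed_equilibrium_p_straight())  # 2/11 with CRASH = -10
```

With these numbers the game has two asymmetric pure equilibria plus a symmetric mixed equilibrium that plays Straight with probability 2/11; whether TDT picks one of these, a correlated equilibrium, or something else is exactly the open question being raised.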
↑ comment by JGWeissman · 2010-11-09T20:31:12.438Z · LW(p) · GW(p)
But I do know that I have seen long lists from Eliezer of problems that TDT does not solve that he thinks it ought to be improved so as to solve.
Since CDT and EDT don't solve those problems either, all this justifies saying is that TDT does better on some problems, and the same on others, not "worse on others".
Replies from: timtyler↑ comment by timtyler · 2010-11-11T22:57:42.413Z · LW(p) · GW(p)
For every possible decision theory, there is a "nemesis" environment where it does extremely badly. That is fallout from the no-free-lunch theorems.
Replies from: JGWeissman↑ comment by JGWeissman · 2010-11-11T23:07:53.754Z · LW(p) · GW(p)
A "nemesis" environment that feeds misleading evidence to a decision theory's underlying epistimology does not indicate the sort of problem illustrated by an environment in which a decision theory does something stupid with true information.
Replies from: timtyler↑ comment by timtyler · 2010-11-12T00:03:23.730Z · LW(p) · GW(p)
What you asked for was a case where a decision theory did worse than its rivals.
However, that seems pretty trivial if it behaves differently from them - you just consider an appropriate pathological environment set up to punish that decision theory.
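That point is easy to make concrete: for any two decision rules that ever behave differently, you can build an environment that pays off whatever the targeted rule doesn't do. A toy sketch, with invented one-shot binary "agents":

```python
def nemesis_environment(victim):
    """Build an environment that pays 1 exactly when the agent's choice
    differs from what `victim` would choose, and 0 otherwise."""
    def payout(agent):
        return 1 if agent() != victim() else 0
    return payout

# Two trivial "decision theories" for a one-shot binary choice:
always_a = lambda: "A"
always_b = lambda: "B"

# A pathological environment set up to punish always_a:
env = nemesis_environment(always_a)
assert env(always_a) == 0  # the targeted theory always loses here...
assert env(always_b) == 1  # ...while any theory that differs from it wins
```

This is why such a comparison only says something when the environment is not allowed to condition directly on which decision theory the agent is, which is the distinction JGWeissman is drawing.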
Replies from: JGWeissman↑ comment by JGWeissman · 2010-11-12T00:24:10.212Z · LW(p) · GW(p)
What you asked for was a case where a decision theory did worse than its rivals.
Yes, in the context of Perplexed dismissing examples of TDT doing better than CDT because CDT was being stupid with true information.
↑ comment by JoshuaZ · 2010-11-09T05:19:20.058Z · LW(p) · GW(p)
Not necessarily. Various decision theories can come into play here. It depends precisely on what you mean by the prisoner's paradox. If you are playing a true one-shot where you have no information about the entity in question, then that might be true. But if you are playing a true one-shot where each player has access to the other player's source code before making the decision, then defecting may not be the best solution. Some of the decision theory posts have discussed this. (Note that knowing each other's source code is not nearly as strong an assumption as it might seem, since one common idea in game theory is to look at what happens when the other players know your strategy. I'm oversimplifying some technical details here. I don't fully understand all the issues. I'm not a game theorist. Add any other relevant disclaimers.)
Replies from: Perplexed↑ comment by Perplexed · 2010-11-09T05:45:57.400Z · LW(p) · GW(p)
No one on this thread has mentioned a "prisoner's paradox". We have been discussing the Prisoner's Dilemma, a well known and standard problem in game theory which involves two players who must decide without prior knowledge of the other player's decision.
A different problem in which neither player is actually making a decision, but instead is controlled by a deterministic algorithm, and in which both players, by looking at source, are able to know the other's decision in advance, is certainly an interesting puzzle to consider, but it has next to nothing in common with the Prisoner's Dilemma besides a payoff matrix.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2010-11-09T05:54:06.524Z · LW(p) · GW(p)
No one on this thread has mentioned a "prisoner's paradox". We have been discussing the Prisoner's Dilemma, a well known and standard problem in game theory which involves two players who must decide without prior knowledge of the other player's decision.
Prisoner's paradox is another term for the prisoner's dilemma. See for example this Wikipedia redirect. You may want to reread what I wrote in that light. (Although there's some weird bit of illusion of transparency going on here in that part of me has a lot of trouble understanding how someone wouldn't be able to tell from context that they were the same thing.)
A different problem in which neither player is actually making a decision, but instead is controlled by a deterministic algorithm, and in which both players, by looking at source, are able to know the other's decision in advance, is certainly an interesting puzzle to consider, but it has next to nothing in common with the Prisoner's Dilemma besides a payoff matrix.
No. The problem of what to do is actually closely related when one has systems which are able to understand each other's source code. It is in fact related to the iterated version of the problem.
In general, given no information, the problem still has relevant decision theoretic considerations.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-09T14:53:39.701Z · LW(p) · GW(p)
The problem of what to do is actually closely related when one has systems which are able to understand each other's source code. It is in fact related to the iterated version of the problem.
I'm curious why you assert this. Game theorists have a half dozen or so standard simple one-shot two person games which they use to illustrate principles. PD is one, matching pennies is another, Battle of the Sexes, Chicken, ... the list is not that long.
They also have a handful of standard ways of taking a simple one-shot game and turning it into something else - iteration is one possibility, but you can also add signaling, bargaining with commitment, bargaining without commitment but with a correlated shared signal, evolution of strategies to an ESS, etc. I suppose that sharing source code can be considered yet another of these basic game transformations.
Now we have the assertion that for one (PD is the only one?) of these games, one (iteration is the only one?) of these transformations is closely related to this new code-sharing transformation. Why is this assertion made? Is there some kind of mathematical structure to this claimed relationship? Some kind of proof? Surely there is more evidence for this claimed relationship than just pointing out that both transformations yield the same prescription - "cooperate" - when there are only two possible prescriptions to choose among.
Is the code-sharing version of Chicken also closely related to the iterated version? How about Battle of the Sexes?
Replies from: JoshuaZ↑ comment by JoshuaZ · 2010-11-09T16:39:51.543Z · LW(p) · GW(p)
So I'm going to need to repeat my earlier disclaimer that I'm far from my area of expertise. But the basic idea is that iterating games gives you a probabilistic estimate for what the underlying code looks like (assuming some sort of nice distribution on potential source code such that in general simpler code is more likely than complicated code). Unfortunately, I don't know any details of this approach beyond its existence but it should apply to other games like Chicken also.
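The idea sketched above can be illustrated with a toy posterior over candidate opponent programs in an iterated PD. The candidate set, the "complexity" scores, and the prior are all invented for illustration; the point is only that prior weight ∝ 2^-complexity makes simpler code more likely a priori, and observed play then rules candidates in or out.

```python
# Candidate opponent programs for an iterated PD, with made-up
# complexity scores (in "bits"); prior weight ~ 2**-bits, so simpler
# code is more likely a priori.
def always_cooperate(history):
    return "C"

def always_defect(history):
    return "D"

def tit_for_tat(history):
    # Mirror our previous move; cooperate on the first round.
    return history[-1] if history else "C"

candidates = {
    "AlwaysCooperate": (always_cooperate, 1),
    "AlwaysDefect": (always_defect, 1),
    "TitForTat": (tit_for_tat, 3),
}

def posterior(my_moves, their_moves):
    """Weight each candidate by prior * (1 if it reproduces the observed
    play, else 0), then normalize."""
    weights = {}
    for name, (strategy, bits) in candidates.items():
        consistent = all(
            strategy(my_moves[:i]) == move
            for i, move in enumerate(their_moves)
        )
        weights[name] = 2.0 ** -bits if consistent else 0.0
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# After two rounds of mutual cooperation:
print(posterior(["C", "C"], ["C", "C"]))
# AlwaysCooperate and TitForTat both fit, but the simpler program gets
# most of the posterior (0.8 vs 0.2); AlwaysDefect is ruled out.
```

More rounds of play would discriminate further between the surviving candidates, which is the sense in which iteration gives a probabilistic estimate of the underlying code.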
comment by XiXiDu · 2010-11-06T15:44:52.781Z · LW(p) · GW(p)
We are at the very beginning of time for the human race. It is not unreasonable that we grapple with problems. But there are tens of thousands of years in the future. Our responsibility is to do what we can, learn what we can, improve the solutions, and pass them on.
Richard Feynman
comment by HonoreDB · 2010-12-01T16:39:25.428Z · LW(p) · GW(p)
Farnsworth A: You people and your slight differences disgust me. I'm going home. Where's that blue box with our universe in it?
Farnsworth 1: Oh, you'd like to get back to your evil universe, wouldn't you? And destroy your box with our universe inside it.
Farnsworth A: Nonsense! I would never do such a thing unless you were already having been going to do that!
--Futurama
comment by David_Gerard · 2010-12-01T00:51:25.122Z · LW(p) · GW(p)
The important thing is not to shout at this point, Vimes told himself. Do not ... what do they call it ... go spare? Treat this as a learning exercise. Find out why the world is not as you thought it was. Assemble the facts, digest the information, consider the implications. Then go spare. But with precision.
- Terry Pratchett, Thud!
[I have had cause to apply this one recently. It particularly resonated to see it in the book just now.]
Replies from: Alicorn, gwern↑ comment by Alicorn · 2010-12-01T00:54:17.388Z · LW(p) · GW(p)
Do not what do they call it go spare?
This seems to be missing, at minimum, some punctuation.
Edit: Moot.
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-01T01:14:06.488Z · LW(p) · GW(p)
Ellipses eaten by cut'n'paste. Fixed. Thank you :-)
comment by Richard_Kennaway · 2010-11-02T21:36:22.489Z · LW(p) · GW(p)
Beware of no man more than of yourself; we carry our worst enemies within us.
Charles H. Spurgeon
Replies from: sketerpot, simplyeric↑ comment by sketerpot · 2010-11-02T22:18:42.138Z · LW(p) · GW(p)
Not always. I know someone who narrowly avoided Auschwitz who would beg to differ; her worst enemies were definitely external.
Replies from: AlanCrowe, anonym, MichaelVassar↑ comment by AlanCrowe · 2010-11-03T11:22:17.642Z · LW(p) · GW(p)
Your comment raises a very delicate point and I'm not sure that I am tactful enough to make it clearly.
Zooming out to get a broader view so that we can notice what usually happens, rather than the memorable special case, we notice that most Germans were enthusiastic about Hitler, all the way from 1933 to 1941. It is hard to reconstruct the reasons why. Looking at the broad picture we get a clear sense of people being their own worst enemies, enthusiastically embracing a mad leader who will lead them to destruction.
The message that history is sending to Alan is: if you had been a young man in Germany in 1933 you would have idolized Hitler. There are two ways to respond to this sobering message. One is to picture myself as an innocent victim. There were plenty of innocent victims, so this is easily done, but it dodges the hard question. The other response is to embrace the LessWrong vision and to search for ways to avoid the disasters to which self-deception sentences Man.
Replies from: sketerpot, MartinB, roland↑ comment by sketerpot · 2010-11-03T20:36:16.151Z · LW(p) · GW(p)
You're right, and I think the reason it's so hard to make that point tactfully is just how scary it is. If we go down that line of thought honestly, we can imagine ourselves firing up the ovens, or dragging manacled people into the belly of a slave ship, and feeling good about it. This is not a comfortable idea.
But there's another, more hopeful side to this. As MartinB points out, it's possible to understand how such monstrous acts feel to the people committing them, and train yourself to avoid making the same mistakes. This is a problem we can actually attack, as long as we can accept that our own thoughts are fallible.
(On a lighter note: how many people here regularly catch themselves using fallacious logic, and quickly correct their own thoughts? I would hope that the answer is "everyone", or at least "almost everyone". If you do this, then it shows that you're already being significantly less wrong, and it should give a fair amount of protection against crazy murderous ideologies.)
↑ comment by MartinB · 2010-11-03T11:57:21.762Z · LW(p) · GW(p)
It is hard to reconstruct the reasons why.
I doubt that it is. You find similar idolizations of leaders in many places. The general principles can be understood, and I think are by now. For the special case of nazi-germany you have the added bonus of good documentation and easy availability of contemporary sources.
↑ comment by roland · 2010-11-07T22:03:12.593Z · LW(p) · GW(p)
The other response is to embrace the LessWrong vision and to search for ways to avoid the disasters to which self-deception sentences Man.
I'm a big fan of LessWrong, yet I think it falls short because it lacks any concrete steps taken in the direction of being more rational. Just reading interesting posts won't make you a rationalist.
Replies from: simplicio↑ comment by simplicio · 2010-11-10T21:54:52.617Z · LW(p) · GW(p)
It's true that just reading posts won't make you more rational very fast. But thankfully, that is not the extent of LW - it is also encouraging people to respond to arguments they see, in a social context that rewards improving skills very highly. We are sort of practicing "virtue rationality" here, if you will.
Once you have truly assimilated the core ideas of LW, to the point where they're almost starting to feel like cliches, you simply cannot HELP but to apply them in everyday life.
For example, "notice when you're confused" saved my bacon recently: I was working on a group engineering project (in university) which was more or less done, but there was some niggling detail of interfacing that didn't sit well with me. I didn't know it was wrong; I just had a weird sensation of butterflies and fog every time I thought about that aspect. In the past I have responded to such situations with a shrug. This time, inspired by the above maxim, I decided to really investigate, at which point it became clear that our design had skipped a peripheral but essential component.
I can cite more personal examples if you like. The trouble with noticing such instances is that once a skill is truly digested, it doesn't have a little label that says "that skill came from LessWrong." It just feels like the obviously right thing to do.
↑ comment by anonym · 2010-11-03T06:21:09.086Z · LW(p) · GW(p)
Can't you say "not always" about pretty much any quote? They aren't meant to be taken as universal truths that apply to all people and all circumstances across all of time ;-).
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2010-11-03T08:54:57.303Z · LW(p) · GW(p)
There's nothing worse than nitpicking a hyperbole.
↑ comment by MichaelVassar · 2010-11-03T23:26:00.056Z · LW(p) · GW(p)
True, but barely. For how long do you think she would have had to plan and execute fully rationally in order to prevent Auschwitz? I think that it would have been a lot of work, but not insanely much work if done honestly.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-11-04T00:37:08.846Z · LW(p) · GW(p)
?
Do you mean avoiding getting sent to Auschwitz, or preventing the Holocaust?
Escaping was something of a gamble. It probably wasn't obvious that fleeing to France wasn't good enough.
Replies from: MartinB, MichaelVassar↑ comment by MartinB · 2010-11-04T00:49:47.754Z · LW(p) · GW(p)
I guess when the guys that hate your guts get into power, it is a good time to start packing. But after a decent time in the 20s, and lots of history, and many people of Jewish descent being educated and involved in the society, it was hard to see the signs. Jews had served in the First World War and rightfully, completely saw themselves as Germans. Getting banned from professions came later, then limits on who could marry whom, and so on. It reminds me of the story of the frog that slowly gets heated up in water: each step seems only a little worse than the last, so one thinks it might fade away.
One should also keep in mind that racism and sexism were more widespread in those days. Jews were not particularly welcome in the US or elsewhere.
The horror of Auschwitz was never announced; at each step there was talk of relocation. That includes the officials. No one imagined that a cultured people would be so barbaric.
For a fictional presentation on how to turn up the heat the original V miniseries is pretty good.
Replies from: Pavitra, NancyLebovitz↑ comment by Pavitra · 2010-11-04T19:14:20.663Z · LW(p) · GW(p)
I guess when the guys that hate your guts get into power, is a good time to start packing.
I'm a homosexual atheist living in the United States, and apparently people take the teabaggers seriously enough to vote for them. Should I move?
Considered under the categorical imperative, this strategy seems like it would lead to people clustering themselves into super-fanatical cliques, which strikes me as undesirable. In particular, it would become harder and harder for anyone to change their mind, and thus harder for human knowledge to progress.
Note also that, if the liberal Americans are the first to leave, the trigger-happy neocons get to keep control of the heavily nuclear-armed country.
Replies from: MartinB↑ comment by MartinB · 2010-11-04T20:04:36.960Z · LW(p) · GW(p)
Should I move?
I will tell you in hindsight.
The move-or-stay decision is an interesting one. For German Jews it was obviously better to leave. I would guess that many dissidents in Islamic countries are also better off being alive in exile. Edit: Formatting
↑ comment by NancyLebovitz · 2010-11-04T11:09:20.783Z · LW(p) · GW(p)
As I understand it, a good many German Jews had the amount of warning and the resources to get out. Polish Jews were caught more by surprise and (I think) were generally less well off, and most of the Holocaust happened there.
Perhaps we should have a discussion about making high-stakes urgent decisions under conditions of great uncertainty.
Replies from: MartinB, MartinB↑ comment by MartinB · 2010-11-04T11:19:57.848Z · LW(p) · GW(p)
I happen to be German, currently live in Nuremberg, and finally got around to visiting Auschwitz last year. But I do not know the ratio of people who fled to people who stayed. Fleeing also involved the ability to pay for the ticket. I probably read some about that, but forgot. It is true that the killings mostly happened in the east. But quite many were deported there just for this purpose.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-11-04T11:53:40.494Z · LW(p) · GW(p)
Wikipedia: Over 90% of Polish Jews were killed, and about 75% of German Jews.
Until I checked, I didn't realize that the proportion of German Jews who were killed was that high. I didn't have a specific number in mind, I think I was just giving more attention to the idea of those who'd escaped.
Replies from: MartinB, MartinB↑ comment by MartinB · 2010-11-04T12:32:33.535Z · LW(p) · GW(p)
Oh, and note the difference between the number of people who fled at a suitable time and the number of people who survived. The latter includes people who were deported but not killed, so the former is even lower. If you haven't read it yet, I found the comic by Art Spiegelman called 'Maus' pretty intense and interesting.
↑ comment by MartinB · 2010-11-04T12:29:51.893Z · LW(p) · GW(p)
Yes. Not having been there limits imagination. Pre-WW2, Jews were as common in Germany as they are now in the US (or maybe more so). Now you will not find that many. All people of Jewish descent I know are not from Germany.
In the last year I stumbled over genocides. This was the most unexpected evil I found.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-11-04T12:49:01.464Z · LW(p) · GW(p)
This is the one that surprised me.
Replies from: MartinB↑ comment by MartinB · 2010-11-04T20:05:43.807Z · LW(p) · GW(p)
Why?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-11-04T20:16:22.901Z · LW(p) · GW(p)
Failure to think about the British record in Ireland, perhaps. Not thinking about the mere size of India, so that if things go wrong, huge numbers of people die.
So why did the Canadian atrocities against First Peoples surprise you?
Replies from: MartinB↑ comment by MartinB · 2010-11-04T20:38:16.523Z · LW(p) · GW(p)
I am not so much surprised with the atrocities against natives. Those are common. It is a bit surprising to learn it about Canada, because the country has a good reputation and its history is not that well known. But:
Overcrowding, poor sanitation, and a lack of medical care led to high rates of tuberculosis, and death rates of up to 69 percent.
Those were schools! Schools with a death rate are so much against anything that I consider a school to be about. It's just wrong. It just looks like a relabeled death camp, and that defeats the point of education. That amount of ignorance is just mindblowing.
↑ comment by MartinB · 2010-11-04T11:21:46.370Z · LW(p) · GW(p)
Perhaps we should have a discussion about making high-stakes urgent decisions under conditions of great uncertainty
Depends on your level of paranoia. They might still be after you. Let's have that discussion.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-11-04T11:54:28.036Z · LW(p) · GW(p)
It's not just them being after you-- sometimes medical decisions fall into the same category.
Replies from: MartinB↑ comment by MartinB · 2010-11-04T12:24:21.626Z · LW(p) · GW(p)
Maybe we should make a collection of realistic situations.
From a little thought I guess one tries to minimize damage, or maximize expected welfare. Both strategies need some calibration for realism.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-11-04T12:44:11.208Z · LW(p) · GW(p)
Yes, on the calibration-- in particular, how do you maintain focus to do the best you can with the information you've got? What sorts of information do you need?
I'm haunted by a quote from a holocaust survivor who said that he would have done things differently (presumably fled early) if he'd "had the soul of a poet". Hindsight is 20/20, but predicting from group emotional trends is sometimes part of what's necessary.
↑ comment by MichaelVassar · 2010-11-13T03:19:16.728Z · LW(p) · GW(p)
Prevent the Holocaust.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-11-13T03:23:58.166Z · LW(p) · GW(p)
How do you think that could have been done?
Replies from: MichaelVassar↑ comment by MichaelVassar · 2010-11-14T13:08:38.990Z · LW(p) · GW(p)
General principles. Doing things isn't ever that difficult relative to the psychological capabilities we casually assume ourselves to possess. We then fail to update correctly and conclude that goals are difficult, rather than concluding that over long time horizons we don't work the way we very casually seem to over periods of a few minutes.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-11-14T14:26:53.418Z · LW(p) · GW(p)
On the other hand, the universe doesn't guarantee that apocalypse is scaled to your abilities.
It's plausible that the Holocaust could have been averted if people had done more to optimize their efforts against it, but by no means guaranteed.
↑ comment by simplyeric · 2010-11-04T18:58:10.509Z · LW(p) · GW(p)
"And when they came for me, there was no one left to speak out for me." —Martin Niemoeller
I think that quote speaks a little to the worst enemies within us: in purely clinical terms, what's in the best interest of those with whom you don't necessarily explicitly associate yourself may also be in your own best interest.
The thing to keep in mind about the Jewish Holocaust is that it wasn't particularly unusual. It was unusual mostly in its location: it was rare to carry out such large-scale atrocities in Europe. Exterminations had been carried out by various states upon people in every other part of the world. Some were absolute, and entire races were exterminated. Hitler had great admiration for how the United States dealt with its native population. Sweden slaughtered whole groups in Africa. The list is not as short as we'd like it to be.
An interesting (and depressing) book: Exterminate All the Brutes by Sven Lindqvist
What I took from this book is that the enemy that produces holocaust-type situations is within us. The Jewish Holocaust was (unfortunately) not an outlier, but rather was and is in our culture or genes or humanity (I'm not sure which, although I lean towards the genetics).
Replies from: NancyLebovitz, smdaniel2↑ comment by NancyLebovitz · 2010-11-04T19:26:15.556Z · LW(p) · GW(p)
What is unusual (I think) about the Jewish Holocaust is that it wasn't part of a conquest. Jews were very well integrated into German society, and had never been at war with it. Any other similar cases?
Replies from: simplyeric, Jayson_Virissimo↑ comment by simplyeric · 2010-11-04T23:04:25.878Z · LW(p) · GW(p)
Maybe a more salient example than my integrated Native Americans: Protestants v. Catholics.
In certain circumstances it was simply war and/or strife.
("simply")
But, in situations where both groups were fully native, there were situations where one group would try to eliminate the other through legislation, deportation, and also extermination.
↑ comment by Jayson_Virissimo · 2010-11-04T19:33:58.240Z · LW(p) · GW(p)
Ukrainians and Poles in the U.S.S.R.? I guess it would depend on your definition of "conquest".
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-11-04T19:47:20.295Z · LW(p) · GW(p)
And possibly also "integrated"-- my impression is that Jews in Germany were less geographically concentrated, but even if true, that might be reaching for an argument.
Replies from: simplyeric↑ comment by simplyeric · 2010-11-04T22:42:12.773Z · LW(p) · GW(p)
I think that within the subset of United States's aggression against the Native American population, there were many instances where fully integrated people were subsequently persecuted and eliminated. Some of it was at the "frontiers", yes. But some of it was shopowners, millers, brewers...people who had fully adapted and in fact thrived within the europeanized colonies and later states.
This was still happening in the 1950's and 60's as well, with the flooding of native lands in the Dakotas, etc, where fully "Americanized" communities were eliminated through forced relocation.
↑ comment by smdaniel2 · 2010-11-08T05:46:37.740Z · LW(p) · GW(p)
godwin's law 101
Replies from: ata↑ comment by ata · 2010-11-08T05:54:28.094Z · LW(p) · GW(p)
The existence of Godwin's Law doesn't mean that nobody on the internet is allowed to mention the Holocaust, and it's not an automatic counterargument to any claim involving the Holocaust.
(Wikipedia: "The rule does not make any statement about whether any particular reference or comparison to Adolf Hitler or the Nazis might be appropriate, but only asserts that the likelihood of such a reference or comparison arising increases as the discussion progresses. It is precisely because such a comparison or reference may sometimes be appropriate, Godwin has argued that overuse of Nazi and Hitler comparisons should be avoided, because it robs the valid comparisons of their impact.")
Replies from: smdaniel2↑ comment by smdaniel2 · 2010-11-08T06:45:23.388Z · LW(p) · GW(p)
It was not meant to be a counterargument... just an observational comment.
I just think it's nifty how it is a recognized phenomenon that people tend to refer back to the same historical event to make strong points for a number of varying arguments.
comment by ata · 2010-11-09T05:00:29.164Z · LW(p) · GW(p)
Rationality quotes: very many from @BadDalaiLama on Twitter.
(Edit: there's also this handy archive.)
Replies from: cupholder, free_rip↑ comment by cupholder · 2010-11-10T12:56:26.512Z · LW(p) · GW(p)
This one felt quite LW-relevant:
If $1 million makes you happy, that doesn't mean $10 million will make you 10 times as happy.
It's good to be reminded now and then that dollars are not, in fact, utilons.
Replies from: shokwave↑ comment by shokwave · 2010-11-10T14:26:11.504Z · LW(p) · GW(p)
It's good to be reminded now and then that dollars are not, in fact, utilons.
The natural logarithm of dollars is a pretty good approximation of utilons, assuming you like candy-bars.
Replies from: NihilCredo, Unnamed, Psy-Kosh↑ comment by NihilCredo · 2010-11-10T20:32:51.783Z · LW(p) · GW(p)
With some constraints, of course.
"Here, have a penny."
"You bastard!"
↑ comment by Unnamed · 2010-11-11T20:59:13.952Z · LW(p) · GW(p)
Here's some evidence from Stevenson & Wolfers that happiness/life satisfaction is proportional to the log of income: blog post, pdf article.
↑ comment by Psy-Kosh · 2010-11-10T23:40:05.492Z · LW(p) · GW(p)
How does ln(dollars) approximate utilons? It's obvious that utilons are generally not fully linear in dollars, and they're certainly not equivalent, but how does the log of dollars, specifically, approximate utility?
Replies from: shokwave↑ comment by shokwave · 2010-11-11T06:18:11.427Z · LW(p) · GW(p)
If there is some mathematical reason why, I would love to know. I was going off the observation that the natural logarithm approximates the kind of diminishing returns that economists generally agree applies to the utility of wealth. This means that, very roughly, the logarithm of dollars is the 'revealed preference' utility.
It was actually more of a joke about that assumption, because it suggests that a 50 dollar meal is preferred four times as much as a 3 dollar candy bar - a bit odd, but perfectly natural if you like candy bars.
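Checking the arithmetic behind that joke, using just the dollar figures from the comment (the "four times" is only approximate):

```python
import math

meal, candy_bar = 50, 3  # the dollar figures from the comment
log_ratio = math.log(meal) / math.log(candy_bar)
print(round(log_ratio, 2))  # 3.56: "roughly four times as much"

# For comparison, a square-root utility function (another function with
# diminishing returns) happens to give almost exactly 4 here:
sqrt_ratio = math.sqrt(meal) / math.sqrt(candy_bar)
print(round(sqrt_ratio, 2))  # 4.08
```

So on log utility the meal is worth about 3.6 candy bars' worth of utilons, which rounds to the "four times" in the joke.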
Replies from: JoshuaZ, Psy-Kosh↑ comment by JoshuaZ · 2010-11-11T06:26:09.148Z · LW(p) · GW(p)
Well, log does that. But so does square root. Lots of functions have diminishing marginal returns.
Replies from: b1shop, shokwave↑ comment by b1shop · 2010-11-11T06:48:38.003Z · LW(p) · GW(p)
I can think of two good reasons to model diminishing returns with the natural log.
Logs produce nice units in the regression coefficients. A log-lin function (that is, a logged dependent and a linear independent variable) says that a unit increase in X results in a percent increase in Y. Similar statements are true for lin-log and log-log, the latter of which produces elasticities.
y=ln(x) and y=sqrt(x) will both fit data in a similar manner, so it makes sense to go with the one that makes for easy interpretation.
Additionally, the natural log frequently shows up in financial economics, most prominently in continuous interest but also notably in returns, which seem to follow the log-normal distribution.
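A small numeric illustration of those coefficient interpretations; the coefficient values below are invented purely for the example:

```python
import math

# Invented log-lin coefficient: log(Y) = a + b*X.
b = 0.05

# A one-unit increase in X multiplies Y by e**b, i.e. roughly a
# 100*b percent increase when b is small.
pct_per_unit_x = (math.exp(b) - 1) * 100
print(round(pct_per_unit_x, 2))  # 5.13 percent per unit of X

# Invented log-log coefficient (an elasticity): log(Y) = a + e*log(X).
# A 1% increase in X gives approximately an e% increase in Y.
elasticity = 0.8
pct_per_pct_x = (1.01 ** elasticity - 1) * 100
print(round(pct_per_pct_x, 2))  # ~0.8 percent per 1% of X
```

The "roughly 100*b percent" reading is why small log-coefficients are so convenient: for small b, e**b - 1 ≈ b.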
Replies from: Manfred↑ comment by shokwave · 2010-11-11T06:40:48.380Z · LW(p) · GW(p)
Hmm. If we grab some study data on wealth's mathematical relationship with utility, we might be able to decide what function best approximates it. As it is, yeah, there is no reason to prefer log to square root to any other function.
↑ comment by Psy-Kosh · 2010-11-11T18:46:27.784Z · LW(p) · GW(p)
Oooh, okay. Diminishing returns, certainly. Just not obvious that it would be "log" or near that.
It was actually more of a joke about that assumption, because it suggests that a 50 dollar meal is preferred four times as much as a 3 dollar candy bar - a bit odd, but perfectly natural if you like candy bars.
:)
comment by PeterS · 2010-11-03T21:51:17.833Z · LW(p) · GW(p)
Isaac Newton's argument for intelligent design:
Were all the planets as swift as Mercury or as slow as Saturn or his satellites; or were the several velocities otherwise much greater or less than they are (as they might have been had they arose from any other cause than their gravities); or had the distances from the centers about which they move been greater or less than they are (as they might have been had they arose from any other cause than their gravities); or had the quantity of matter in the sun or in Saturn, Jupiter, and the earth (and by consequence their gravitating power) been greater or less than it is; the primary planets could not have revolved about the sun nor the secondary ones about Saturn, Jupiter, and the earth, in concentric circles as they do, but would have moved in hyperbolas or parabolas or in ellipses very eccentric. To make this system, therefore, with all its motions, required a cause which understood and compared together the quantities of matter in the several bodies of the sun and planets and the gravitating powers resulting from thence.... And to compare and adjust all these things together in so great a variety of bodies, argues that cause to be, not blind and fortuitous, but very well skilled in mechanics and geometry.
-- Letter to Richard Bentley
Replies from: Tyrrell_McAllister, wedrifid↑ comment by Tyrrell_McAllister · 2010-11-03T23:26:44.548Z · LW(p) · GW(p)
Here's another Newton ID quote. This one complements PeterS's because the true naturalistic explanation requires physics that was not implicit in Newton's mechanics.
But how the matter should divide itself into two sorts, and that part of it, which is fit to compose a shining body, should fall down into one mass, and make a sun, and the rest, which is fit to compose an opaque body, should coalesce, not into one great body, like the shining matter, but into many little ones; or, if the sun, at first, were an opaque body, like the planets, or the planets lucid bodies, like the sun, how he alone should be changed into a shining body, whilst all they continue opaque, or all they be changed into opaque ones, whilst he remains unchanged, I do not think more explicable by mere natural causes, but am forced to ascribe it to the counsel and contrivance of a voluntary agent.
—Isaac Newton, Four Letters From Sir Isaac Newton To Doctor Bentley Containing Some Arguments In Proof Of A Deity.
↑ comment by wedrifid · 2010-11-04T18:15:43.232Z · LW(p) · GW(p)
Elements of this argument make an error related to numberplates. I'm surprised this was received so (+4) positively.
Replies from: Perplexed, magfrump, PeterS↑ comment by Perplexed · 2010-11-05T04:44:25.281Z · LW(p) · GW(p)
I voted up both Newton quotes because they show how a very smart man can make a very plausible argument which is nevertheless very wrong.
And the reason Newton failed to guess the rather simple explanation is that he observed a solar system that was stable and unchanging and assumed that it must always have been stable and unchanging since the creation. His "biases" just didn't allow him to imagine an evolutionary model of planet formation by accretion from a more-or-less random initial state.
Nowadays of course, we tend to invent evolutionary or historical explanations for everything. We don't even limit ourselves to explaining the origins. We go on to predict how things will likely come to a contingent historical end ... or should I refer to it as our next great adventure?
Replies from: None↑ comment by [deleted] · 2010-11-09T21:54:29.846Z · LW(p) · GW(p)
Second to this. The planets that remain in stable, orderly orbits are the ones that survived the system's chaotic early history without being ejected or destroyed. Their presence represents them making it through the pachinko machine of amalgamated physical parameters, not intentional design.
Newton's inferences were like assuming a gold tooth has mystical properties because I've put you through a woodchipper and it's the only thing that came out the other end. There is so much to understand about the internals of the machine before you make any solid judgments about the inputs and outputs.
↑ comment by magfrump · 2010-11-05T17:23:48.596Z · LW(p) · GW(p)
I thought it was obviously ironic, since planets do actually move in ellipses and general conic sections; Newton makes a falsifiable claim in favor of ID and it is clearly false.
Replies from: Sniffnoy↑ comment by Sniffnoy · 2010-11-05T21:29:18.715Z · LW(p) · GW(p)
Wait, something seems wrong here. Newton knew the planets moved in ellipses. Probable conclusion: He was just referring to the low eccentricity of these ellipses?
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2010-11-15T21:49:47.986Z · LW(p) · GW(p)
I think that the issue was the number of planets. If you had just one planet orbiting the sun, that orbit would be a nice stable one. But if you have multiple bodies orbiting the sun, their paths will interfere chaotically. I think that Newton expected that, in general, you would get wildly erratic orbits, with some planets being thrown clear of the system altogether. As I understand it, he expected such catastrophes to be inevitable, unless you started with a very carefully-selected initial state. God was then necessary to explain how the solar system started out in such an improbable state. But in fact Newton just lacked the mathematical sophistication to see that, according to his own theory, typical initial arrangements could result in systems that are stable for billions of years.
↑ comment by PeterS · 2010-11-05T04:13:58.902Z · LW(p) · GW(p)
Numberplates?
Replies from: wedrifid↑ comment by wedrifid · 2010-11-05T04:30:23.336Z · LW(p) · GW(p)
"The chance that the numberplate of my first car was EIT411 is one in a whole lot. Wow! It happened! There must be a God!" (crudely speaking.)
This seems to be relevant to, for example, yabbering on about the exact speeds of Saturn et al. The Saturns that were going the wrong speed all fell into the sun (or cleared off into space.)
Replies from: PeterS↑ comment by PeterS · 2010-11-05T10:27:48.423Z · LW(p) · GW(p)
Oh... I in no way endorse the above argument! Pierre-Simon Laplace's, a century or so after Newton, gave a naturalistic model of how the Solar System could have developed. "Rationality quotes" is not only about sharing words of wisdom, but also words of folly.
Replies from: wedrifid
comment by Tesseract · 2010-11-03T10:24:21.011Z · LW(p) · GW(p)
If oxen and horses and lions had hands and were able to draw with their hands and do the same things as men, horses would draw the shapes of gods to look like horses and oxen to look like oxen, and each would make the gods’ bodies have the same shape as they themselves had.
Xenophanes
Replies from: JoshuaZ, shokwave, Larks, simplyeric, majus↑ comment by JoshuaZ · 2010-11-03T18:50:47.547Z · LW(p) · GW(p)
I'm not sure this makes sense. Empirically many human cultures have deities that are shaped like animals.
Replies from: fortyeridania↑ comment by fortyeridania · 2010-11-08T15:49:08.484Z · LW(p) · GW(p)
Voted up. My quibble is that gods are often anthropomorphic in mind, if not in body.
↑ comment by simplyeric · 2010-11-04T17:43:45.628Z · LW(p) · GW(p)
There might be a strong chance that horses and other animals would draw their gods as having human form. Humans tend to portray their gods as being either equal or higher than humanity. Animist gods are portrayed as having characteristics that surpass humans: speed, wisdom, patience, etc., based on the characteristics of that animal. Alternately, sun gods, storm gods, etc.: higher powers.
Some wild horses would have horse gods or weather gods or wolf gods. Some might have human gods, depending on their interaction with humanity.
I'd imagine that domesticated horses would have human gods, some benevolent and some malignant, or both. And some domesticated horses would go "through the looking glass" and develop a horse-god of redemption, with prophecies of freeing them from the toil and slavery of domestication, based on some original downfall of horse-dom that led to them being subservient to humans.
Or something like that.
Replies from: nazgulnarsil↑ comment by nazgulnarsil · 2010-11-06T13:09:59.059Z · LW(p) · GW(p)
this should at the very least be turned into a short story.
comment by shokwave · 2010-11-18T16:14:27.797Z · LW(p) · GW(p)
From Brandon Sanderson's Mistborn series:
"Certainly, my situation is unique," Sazed said. "I would say that I arrived at it because of belief."
"Belief?" Vin asked.
"Yes," Sazed said. "Tell me, Mistress, what is it that you believe?"
Vin frowned. "What kind of question is that?"
"The most important kind, I think."
Vin thought, then shrugged. "I don't know what I believe."
"People often say that, but I find it is rarely true."
comment by shokwave · 2010-11-04T15:47:46.485Z · LW(p) · GW(p)
The course of human progress staggers like a drunk; its steps are quick and heavy but its mind is slow and blunt
-Jesse Michaels of Operation Ivy
Posted because it's a useful and evocative metaphor: the drunk feels himself leaning or falling in one direction, and puts his foot down in that direction to steady himself. If he doesn't step far enough, he is still leaning in the same direction, and he steps again. In this way we can make fantastic progress in directions we don't like while getting further away from the ways we did want to go.
comment by XiXiDu · 2010-11-04T12:59:29.521Z · LW(p) · GW(p)
I just came across this and thought it was a pretty funny dialogue: "Reality is that which does not go away upon reprogramming." (Check the first 4 comments here: Chatbot Debates Climate Change Deniers on Twitter so You Don’t Have to)
This is of course a paraphrase borrowed from Philip K. Dick's famous statement:
Reality is that which, when you stop believing in it, doesn't go away.
Replies from: NihilCredo
↑ comment by NihilCredo · 2010-11-04T21:08:04.005Z · LW(p) · GW(p)
I shared this on another website and got this comment:
Heh, that's one way to pass the Turing Test. Don't make your bot smarter, make it seek out dumb people.
Replies from: Tuna-Fish
↑ comment by Tuna-Fish · 2010-11-05T12:33:17.523Z · LW(p) · GW(p)
This has been done for a while. A few years ago there was some noise about a Russian chatbot which impersonated a good-looking girl and tried to scam people into giving personal information and/or money.
Every time it succeeded, it passed the Turing test.
comment by Richard_Kennaway · 2010-11-02T21:30:40.341Z · LW(p) · GW(p)
'Tis with our Judgments as our Watches, none
Go just alike, yet each believes his own.
Pope, Essay on Criticism
comment by Zetetic · 2010-11-08T21:37:48.518Z · LW(p) · GW(p)
Research must contrive to do business at a profit, by which I mean it must produce more effective scientific inquiry than it expends. No doubt it already does so. But it would do well to become conscious of its economic position and contrive ways of living upon it. -- C. S. Peirce
comment by Mass_Driver · 2010-11-08T19:13:12.251Z · LW(p) · GW(p)
And should you ask yourselves, "How can we know that the oracle was not spoken by the Lord?" -- if the prophet speaks in the name of the Lord and the oracle does not come true, that oracle was not spoken by the Lord; the prophet has uttered it presumptuously: do not stand in dread of him.
Deuteronomy 18:20-22
Replies from: PhilGoetz↑ comment by PhilGoetz · 2010-11-10T23:22:42.707Z · LW(p) · GW(p)
(NIV Matthew 10:23) When you are persecuted in one place, flee to another. I tell you the truth, you will not finish going through the cities of Israel before the Son of Man comes.
(NIV Matthew 16:27-28) For the Son of Man is going to come in his Father’s glory with his angels, and then he will reward each person according to what he has done. I tell you the truth, some who are standing here will not taste death before they see the Son of Man coming in his kingdom.
(NIV Matthew 24:34) I tell you the truth, this generation will certainly not pass away until all these things [the end times] have happened.
comment by joschu · 2010-11-03T07:05:45.862Z · LW(p) · GW(p)
Out-of-sample error equals in-sample error plus a penalty for model complexity
Y.S. Abu-Mostafa, in explaining the VC inequality of PAC learning.
Replies from: Daniel_Burfoot↑ comment by Daniel_Burfoot · 2010-11-03T12:05:05.101Z · LW(p) · GW(p)
This quote leaves out the 'P' part of "PAC"! It should really read,
Out-of-sample error equals in-sample error plus a penalty for model complexity... probably.
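For reference, one standard form of the bound being paraphrased is the VC generalization bound. In the notation of Abu-Mostafa's *Learning From Data* (where $N$ is the sample size, $m_{\mathcal{H}}$ the growth function of the hypothesis set, and $g$ the learned hypothesis), it holds with probability at least $1 - \delta$:

```latex
E_{\text{out}}(g) \;\le\; E_{\text{in}}(g) \;+\; \sqrt{\frac{8}{N}\,\ln\frac{4\,m_{\mathcal{H}}(2N)}{\delta}}
```

The square-root term is the "penalty for model complexity", and the "$1 - \delta$" is the "probably".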
comment by sixes_and_sevens · 2010-11-02T21:06:51.040Z · LW(p) · GW(p)
There are times I almost think
Nobody sure of what he absolutely know
Everybody find confusion
In conclusion he concluded long ago
And it puzzle me to learn
That tho' a man may be in doubt of what he know,
Very quickly he will fight...
He'll fight to prove that what he does not know is so!
comment by XiXiDu · 2010-11-21T13:56:49.689Z · LW(p) · GW(p)
Scooping the Loop Snooper
an elementary proof of the undecidability of the halting problem
by Geoffrey Pullum
No program can say what another will do.
Now, I won't just assert that, I'll prove it to you:
I will prove that although you might work till you drop,
you can't predict whether a program will stop.

Imagine we have a procedure called P
that will snoop in the source code of programs to see
there aren't infinite loops that go round and around;
and P prints the word "Fine!" if no looping is found.

You feed in your code, and the input it needs,
and then P takes them both and it studies and reads
and computes whether things will all end as they should
(as opposed to going loopy the way that they could).

Well, the truth is that P cannot possibly be,
because if you wrote it and gave it to me,
I could use it to set up a logical bind
that would shatter your reason and scramble your mind.

Here's the trick I would use—and it's simple to do.
I'd define a procedure—we'll name the thing Q—
that would take any program and call P (of course!)
to tell if it looped, by reading the source;

And if so, Q would simply print "Loop!" and then stop;
but if no, Q would go right back up to the top,
and start off again, looping endlessly back,
till the universe dies and is frozen and black.

And this program called Q wouldn't stay on the shelf;
I would run it, and (fiendishly) feed it itself.
What behavior results when I do this with Q?
When it reads its own source code, just what will it do?

If P warns of loops, Q will print "Loop!" and quit;
yet P is supposed to speak truly of it.
So if Q's going to quit, then P should say, "Fine!"—
which will make Q go back to its very first line!

No matter what P would have done, Q will scoop it:
Q uses P's output to make P look stupid.
If P gets things right then it lies in its tooth;
and if it speaks falsely, it's telling the truth!

I've created a paradox, neat as can be—
and simply by using your putative P.
When you assumed P you stepped into a snare;
Your assumptions have led you right into my lair.

So, how to escape from this logical mess?
I don't have to tell you; I'm sure you can guess.
By reductio, there cannot possibly be
a procedure that acts like the mythical P.

You can never discover mechanical means
for predicting the acts of computing machines.
It's something that cannot be done. So we users
must find our own bugs; our computers are losers!
I came across this yesterday. The blog might also be worth a look, see for example 'A Brief History of Grammar'.
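The poem's construction translates almost directly into code. A minimal sketch (the names `make_q` and `pessimistic_p` are mine, not the poem's): given any candidate halting checker `p(program, data)`, build the Q that defeats it by doing the opposite of P's verdict.

```python
def make_q(p):
    """Build the poem's Q from any candidate halting checker
    p(program, data) -> bool; Q does the opposite of P's verdict."""
    def q(program):
        if p(program, program):  # P says "Fine!" (it halts)...
            while True:          # ...so Q loops forever.
                pass
        return "Loop!"           # P says it loops, so Q halts.
    return q

# No concrete P can be right about Q fed to itself. For instance, a P
# that always answers "it loops" is contradicted immediately:
def pessimistic_p(program, data):
    return False

q = make_q(pessimistic_p)
print(q(q))  # prints "Loop!" -- q(q) halted, though P claimed it loops
```

A P that always answered `True` would be wrong the other way: its Q would loop forever on itself despite P's claim that it halts. Since every candidate P yields a Q it misjudges, no correct P exists.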
comment by Mass_Driver · 2010-11-08T19:13:41.470Z · LW(p) · GW(p)
The Universe behaves according to its own laws.
Talmud, Avoda Zara 54b
comment by XiXiDu · 2010-11-29T10:13:54.977Z · LW(p) · GW(p)
Most of the founding Zetas members–the original 40–were trained elite soldiers who received instruction in radio communications, counter-insurgency and drug-interdiction. But the Army forgot to add a few ethics lessons into the education mix. And that was a big fucking mistake.
comment by gwern · 2010-11-26T20:02:40.874Z · LW(p) · GW(p)
"...for we judge ourselves by what we feel capable of doing, while others judge us by what we have already done."
--Henry Wadsworth Longfellow, Kavanagh ch. 1
Replies from: Document↑ comment by Document · 2011-07-08T18:49:27.570Z · LW(p) · GW(p)
Related: correspondence bias.
comment by oliverbeatson · 2010-11-11T18:37:36.815Z · LW(p) · GW(p)
I chose and my world was shaken, so what? The choice may have been mistaken; the choosing was not.
Sunday in the Park with George, by Stephen Sondheim
comment by ata · 2010-11-10T05:04:36.284Z · LW(p) · GW(p)
A universe that needed someone to observe it in order to collapse it into existence would be a pretty sorry universe indeed.
— Randall Munroe, xkcd – Mutual
Replies from: Spurlock↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-11-03T04:55:57.159Z · LW(p) · GW(p)
Don't feed the trolls, people, this is not reddit.
comment by NancyLebovitz · 2010-11-17T17:53:12.060Z · LW(p) · GW(p)
"For the young who want to" by Marge Piercy
The poem is mostly about not being recognized as having a magical ability to do things until after you've succeeded. I'm just posting the link because it's more trouble than it's worth to make the line breaks show up properly.
comment by gwern · 2010-11-16T00:09:24.790Z · LW(p) · GW(p)
"Mortal danger is an effective antidote for fixed ideas."
Erwin Rommel, The Rommel Papers (1982) edited by Basil Henry Liddell Hart http://en.wikiquote.org/wiki/Erwin_Rommel#Sourced
Replies from: jaimeastorga2000↑ comment by jaimeastorga2000 · 2010-11-16T03:02:50.146Z · LW(p) · GW(p)
This reminds me of the phrase "nobody learns faster than someone who is being shot at". Considering all the technological research done in war time, there seems to be a good point about motivation.
comment by realitygrill · 2010-11-09T16:19:41.288Z · LW(p) · GW(p)
"But building your life's explanations around science isn't a profession. It is, at its core, an emotional contract, an agreement to only derive comfort from rationality."
-Robert Sapolsky, in an essay reply to "Does science make belief in God obsolete?"
comment by phaedrus · 2010-11-16T00:03:27.756Z · LW(p) · GW(p)
"All of us, grave or light, get our thoughts entangled in metaphors, and act fatally on the strength of them."
---George Eliot, "Middlemarch"
Replies from: Craig_Heldreth↑ comment by Craig_Heldreth · 2010-11-16T00:25:09.995Z · LW(p) · GW(p)
Somebody else read the comments section in Sapolsky's New York Times op ed today.
His column gave a rough explanation of human oddities as evolutionary adaptations.
(If you sort the comments by largest approval rating there are several interesting ones.)
comment by free_rip · 2010-11-08T09:17:44.102Z · LW(p) · GW(p)
When one person suffers from a delusion it is called insanity; when many people suffer from a delusion it is called religion.
~ Robert M. Pirsig
Now the actual quote's out of the way, here's my version: when one person suffers from a delusion it is called insanity; when many people suffer from a delusion it is called society.
Replies from: MartinB
comment by anonym · 2010-11-03T06:27:08.675Z · LW(p) · GW(p)
Truth will sooner come out from error than from confusion.
Francis Bacon
Replies from: Richard_Kennaway, simplyeric↑ comment by Richard_Kennaway · 2010-11-03T11:01:35.473Z · LW(p) · GW(p)
Replies from: Mass_Driver, anonym↑ comment by Mass_Driver · 2010-11-08T19:15:59.845Z · LW(p) · GW(p)
Wait, is there a convenient way to check and see which ones have been posted so far? You know, in case you don't have an encyclopedic memory of all the quotes posted over the last 18 months?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2010-11-08T19:25:07.899Z · LW(p) · GW(p)
Use the search box to check any quotation you're thinking of posting.
↑ comment by simplyeric · 2010-11-04T19:01:08.457Z · LW(p) · GW(p)
I'm of the mind that politically, in the US at least, we don't seem to learn from this. The truth is, indeed, revealed... but the confusion remains and the errors continue.
There are many who disagree with me about that...
but that's because they're confused AND in error.
(ok ok I kid on that last part...)
comment by [deleted] · 2010-11-02T22:07:31.386Z · LW(p) · GW(p)
Not a quote about rationalism, but probably relevant to Less Wrong:
Pure Death
We looked, we loved, and therewith instantly
Death became terrible to you and me.
By love we disenthralled our natural terror
From every comfortable philosopher
Or tall, grey doctor of divinity:
Death stood at last in his true rank and order.
It happened soon, so wild of heart were we,
Exchange of gifts grew to a malady:
Their worth rose always higher on each side
Till there seemed nothing but ungivable pride
That yet remained ungiven, and this degree
Called a conclusion not to be denied.
Then we at last bethought ourselves, made shift
And simultaneously this final gift
Gave: each with shaking hands unlocks
The sinister, long, brass-bound coffin-box,
Unwraps pure death, with such bewilderment
As greeted our love's first accomplishment.
--Robert Graves
Replies from: gwern↑ comment by gwern · 2010-11-26T20:16:11.255Z · LW(p) · GW(p)
I read this one last week:
"When human beings found out about death
They sent the dog to Chukwu with a message:
They wanted to be let back to the house of life.
They didn't want to end up lost forever
Like burnt wood disappearing into smoke
And ashes that get blown away to nothing.
Instead, they saw their souls in a flock at twilight
Cawing and headed back for the same old roosts
(The dog was meant to tell all this to Chukwu)..."
--"A Dog Was Crying Tonight in Wicklow Also", Seamus Heaney, pg 66, The Spirit Level (1996)
comment by SK2 (lunchbox) · 2010-11-04T07:13:54.149Z · LW(p) · GW(p)
All my confidence comes from knowing God's laws.
-- Talib Kweli (substitute "nature" for "God")
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2010-11-04T09:56:31.875Z · LW(p) · GW(p)
-- Talib Kweli (substitute "nature" for "God")
I don't think it would be a good idea to take a Carl Sagan quote and add a 'substitute "God" for "nature"' postscript. I don't think this is a good idea either.
Replies from: lunchbox↑ comment by SK2 (lunchbox) · 2010-11-04T15:09:49.431Z · LW(p) · GW(p)
Talib Kweli is nonreligious, so I'm not changing the meaning of the quotation. "God" is often used poetically. Example:
"Subtle is the Lord, but malicious He is not."
Albert Einstein
Even if Kweli were religious the point would not be to put words in his mouth, but to reapply a beautiful quotation to another context where it is meaningful.
Replies from: smdaniel2↑ comment by smdaniel2 · 2010-11-08T06:09:04.951Z · LW(p) · GW(p)
Reapplying it to another context changes the meaning. Because of Einstein's explicitly stated opinions on the meaning of God (and the Lord), we can understand his meaning to be synonymous with that of nature and its order.
"I believe in Spinoza's God who reveals himself in the orderly harmony of what exists, not in a God who concerns himself with the fates and actions of human beings."
"I do not believe in a personal God and I have never denied this but have expressed it clearly. If something is in me which can be called religious then it is the unbounded admiration for the structure of the world so far as our science can reveal it. " - 1936
Talib Kweli, on the other hand, hasn't given us a clear opinion of his thoughts on the term God. There is no evidence for us to assume that the meaning he gives to the term God would fit in the context of this quote.
comment by neq1 · 2010-11-03T01:20:47.857Z · LW(p) · GW(p)
Justice is an artefact of custom. Where customs are unsettled its dictates soon become dated. Ideas of justice are as timeless as fashions in hats.
-John Gray, Straw Dogs
Replies from: jfm, Jayson_Virissimo↑ comment by jfm · 2010-11-03T14:38:59.530Z · LW(p) · GW(p)
Natural justice is a pledge of reciprocal benefit, to prevent one man from harming or being harmed by another.
Those animals which are incapable of making binding agreements with one another not to inflict nor suffer harm are without either justice or injustice; and likewise for those peoples who either could not or would not form binding agreements not to inflict nor suffer harm.
There never was such a thing as absolute justice, but only agreements made in mutual dealings among men in whatever places at various times providing against the infliction or suffering of harm.
~ Epicurus, Principal Doctrines
↑ comment by Jayson_Virissimo · 2010-11-03T02:18:15.508Z · LW(p) · GW(p)
...justice is rooted in a system of conventions. They arise spontaneously as behavioural equilibria that bring mutual advantage to those adopting them. They protect life, limb, property and the pursuit of peaceful purposes, and require the fulfilment of reciprocal promises.
-Anthony de Jasay, Inspecting the Foundations of Liberalism
Conventions against torts like murder and theft are older than civilization. I think it is a safe bet they will still be around in a thousand years.
comment by neq1 · 2010-11-03T01:18:30.061Z · LW(p) · GW(p)
Who has not experienced the chilling memory of the better things? How it creeps over the spirit of one's current dreams! Like the specter at the banquet it stands, its substanceless eyes viewing with a sad philosophy the make-shift feast.
-Theodore Dreiser, The Titan
comment by sixes_and_sevens · 2010-11-02T21:05:19.536Z · LW(p) · GW(p)
There are times I almost think, Nobody sure of what he absolutely know. Everybody find confusion, In conclusion he concluded long ago. And it puzzle me to learn , That tho' a man may be in doubt of what he know, Very quickly he will fight... He'll fight to prove that what he does not know is so!
comment by sixes_and_sevens · 2010-11-02T21:00:14.865Z · LW(p) · GW(p)
There are times I almost think Nobody sure of what he absolutely know Everybody find confusion In conclusion he concluded long ago And it puzzle me to learn That tho' a man may be in doubt of what he know, Very quickly he will fight... He'll fight to prove that what he does not know is so!
comment by yoj1mbo · 2010-11-04T21:36:31.937Z · LW(p) · GW(p)
"However insistently the blind may deny the existence of the sun, they cannot annihilate it. " - D. T. Suzuki
Replies from: wedrifid, MartinB↑ comment by wedrifid · 2010-11-04T21:44:34.936Z · LW(p) · GW(p)
"However insistently the blind may deny the existence of the sun, they cannot annihilate it. " - D. T. Suzuki
Want to bet?
(At stakes of a few thousand galaxies worth of energy and negentropy. It's not going to be cheap to win this bet! I'm not too comfortable with the whole making myself blind thing either but I guess I can rectify that once I finish deploying the antimatter disruptor beam.)
Replies from: ciphergoth, wedrifid, yoj1mbo↑ comment by Paul Crowley (ciphergoth) · 2010-11-05T12:13:21.729Z · LW(p) · GW(p)
"Since the beginning of time, man has yearned to destroy the Sun."
Replies from: David_Gerard↑ comment by David_Gerard · 2010-11-18T21:51:51.070Z · LW(p) · GW(p)
"Since the beginning of semester, man has yearned to destroy the Sun."
↑ comment by wedrifid · 2010-11-04T21:51:13.486Z · LW(p) · GW(p)
I just noticed that I implicitly assumed that it would have to be me that blinded myself. What sort of nefarious sun-destroying intergalactic mastermind would I be if I did foist that role upon a henchman?
Replies from: Snowyowl↑ comment by Snowyowl · 2010-11-05T11:51:44.253Z · LW(p) · GW(p)
You're going to have trouble destroying the Sun if you don't believe it exists.
Replies from: Larks↑ comment by Larks · 2010-11-05T14:53:05.397Z · LW(p) · GW(p)
He only has to deny that it exists.
Alternatively, he could lock himself onto a sun-destroying path, and then forcibly do an unBayesian update away from the existence of the sun.
Alternately, he could interpret the sentence literally, note that 'not at all' is a level of insistence, deny the existence of the sun not at all, and then destroy it.
↑ comment by yoj1mbo · 2010-11-07T01:34:40.787Z · LW(p) · GW(p)
You are mistaking the map for the territory. It doesn't matter if it's a quark pair or a hyper-colossal cosmic structure gravitationally influencing everything in the universe; if a condition is present, then it has effect.
Replies from: wedrifid↑ comment by MartinB · 2010-11-08T10:46:07.045Z · LW(p) · GW(p)
Humans are blind to all kinds of things. Radiation for one. But it still can be detected and controlled. A civilization of blind people would eventually build detecting equipment and learn to control the real world.
Replies from: Document
comment by gwern · 2010-11-10T23:33:36.957Z · LW(p) · GW(p)
"Normal humans don't interest me. If anyone here is an alien, a time traveler, slider, or an esper, then come find me! That is all."
--The Melancholy of Haruhi Suzumiya, vol 1
Replies from: NihilCredo↑ comment by NihilCredo · 2010-11-10T23:59:40.287Z · LW(p) · GW(p)
Rationality?