David Deutsch on How To Think About The Future

post by curi · 2011-04-11T07:08:42.530Z · LW · GW · Legacy · 199 comments


http://vimeo.com/22099396

What do people think of this, from a Bayesian perspective?

It is a talk given to the Oxford Transhumanists. Their previous speaker was Eliezer Yudkowsky. Audio version and past talks here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks

199 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2011-04-09T20:14:40.539Z · LW(p) · GW(p)

Deutsch argues that the future is fundamentally unpredictable, that for example expected utility considerations can't be applied to the future, because we are ignorant of the possible outcomes and intermediate steps leading to those outcomes, and the options that will be available; and there is no way to get around this. The very use of the concept of probability in this context, Deutsch says, is invalid.

As illustration, among other things, he lists some failed predictions made by smart people in the past, attributing the failures to the unavailability of the ideas relevant to those predictions, ideas that would only be discovered much later.

[Science can't] predict any phenomenon whose course is going to be affected by the growth of knowledge, by the creation of new ideas. This is the fundamental limitation on the reach of scientific explanation and prediction.

[Predictions that are serious attempts to extract unknowable answers from existing knowledge] are going to be biased towards bad outcomes.

(If it's unknowable, how can we know that a certain prediction strategy is going to be systematically biased in a known direction? Biased with respect to what knowable standard?)

Deutsch explains:

And the basic reason for that is that, as I said, the growth of knowledge is good, so that kind of prophesy, which can't imagine it, is going to be biased against prophesying good.

Reason and science are the means to progress. They are not means to prophesy.

On a more constructive if not clearly argued note:

Merely pulling the trigger less often doesn't change the inevitability of doom. [...] One of the most important uses of technology is to counteract disasters and to recover from disasters, both from foreseen and unforeseen evil. Therefore, the speed of progress itself is one of the things that is a defense against catastrophe.

The speed of progress is one of the things that gives the good guys the edge over the bad guys, because good guys make faster progress.

(Possibly an example of the halo effect: the good guys are good, the progress is good, so the good guys will make faster progress than the bad guys. Quite probably, there was better reasoning behind this argument, but Deutsch doesn't give it, and doesn't hint at its existence, probably because he considers the conclusion obvious, which is in any case a flaw of the talk.)

For the next 10 minutes or so he argues for the possibility of essentially open-ended technological progress.

The amount of knowledge in an environment of rational thought that allows it to grow, grows exponentially relative to the speed of computation.

[...] It's a mistake to think of the so-called singularity as being a shock, where we find that we can't cope with life, because iPhone updates are coming [...] every second. That's a mistake, because when progress reaches that speed, our technologically enhanced speed of thinking will have increased in proportion, and so subjectively again we will experience mere exponential growth.
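One way to make the "subjectively still exponential" claim concrete (an illustrative formalization, not anything Deutsch states in the talk): suppose knowledge K grows hyperbolically in physical time t, while subjective time τ advances in proportion to our technologically enhanced speed of thought, itself taken proportional to K. Then, roughly:

```latex
\frac{dK}{dt} = cK^{2}, \qquad \frac{d\tau}{dt} = K
\quad\Longrightarrow\quad
\frac{dK}{d\tau} = \frac{dK/dt}{d\tau/dt} = cK,
\qquad\text{so}\quad K(\tau) = K(0)\,e^{c\tau}.
```

A finite-time blow-up in physical time (K(t) = K(0)/(1 - cK(0)t) diverges at t = 1/(cK(0))) is then experienced from the inside as ordinary exponential growth.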

Here, Deutsch seemingly makes the same mistake he discussed at the beginning of the talk: making detailed predictions about future technology that depend on the set of technology-defining ideas presently available (which, by his own argument, can lead to underestimation of progress).

The conclusion is basically a better version of Kurzweil's view of Singularity, that ordinary technological progress is going to continue indefinitely (Deutsch's progress is exponential in subjective time, not in physical time). Yudkowsky wrote in 2002:

I've come to the conclusion that what Kurzweil calls the "Singularity" is what we would call "the ordinary progress of technology." In Kurzweil's world, the Grinding Gears of Industry churn out AI, superhuman AI, uploading, brain-computer interfaces and so on, but these developments do not affect the nature of technological progress except insofar as they help to maintain Kurzweil's curves exactly on track.

Deutsch considers Popper's views on the process of development of knowledge, pointing out that there are no reliable sources of knowledge, and so instead we should turn to finding and correcting errors. From this he concludes:

Optimism demands that we not try to extract prophesies of everything that could go wrong in order to forestall it from our existing scanty and misconception-laden existing knowledge. Instead, we need policies and institutions that are capable of correcting mistakes and recovering from disasters when they happen. When, not if.

(This doesn't help much with existential risks. Also, this optimism thing seems to be one magically reliable source of knowledge, strong enough to ignore whatever best conclusions it is possible to draw using the best tools currently available, however poor they seem on the great cosmic scale.)

The way to prevent that nightmare of rogue AI apocalypse is not try to enslave our AIs, because if the AIs are creating new knowledge (and that's a definition of AI), then successfully enslaving them would require foretelling (prophesying) the ideas that they could have, and the consequences of those ideas, which is impossible.

This was addressed in Knowability of Friendly AI and in many of Yudkowsky's later writings, most recently in his joint paper with Bostrom. Basically, you can't predict the moves of a good chess AI, otherwise you'd be at least that good a chess player yourself, and yet you know it's going to win the game.
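A toy illustration of that asymmetry (a sketch for illustration only, not anything from those writings): in the simulation below an observer cannot predict any individual move, since each one is random, yet can predict the overall outcome with high confidence just from knowing the skill gap.

```python
import random

def play_game(skill_a: float, skill_b: float, turns: int = 40) -> str:
    """Toy game: each turn, each player independently gains ground with
    probability equal to its skill. Any single turn is unpredictable to an
    observer, but the aggregate outcome tracks the skill gap."""
    score = 0
    for _ in range(turns):
        if random.random() < skill_a:
            score += 1  # a "move" the observer could not have predicted
        if random.random() < skill_b:
            score -= 1
    return "A" if score > 0 else ("B" if score < 0 else "draw")

# Without predicting a single move, we can still predict who wins.
results = [play_game(0.7, 0.5) for _ in range(1000)]
print(results.count("A") / len(results))  # typically around 0.95 or higher
```

Move-level unpredictability and outcome-level predictability coexist, which is the point of the chess analogy.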

Deutsch continues:

So instead, just as for our fellow humans, and for the same reason, we must allow AIs to integrate into the institutions of our open society.

(Or, presumably, so Optimism demands, since the AIs are unpredictable, and technology.)

The only moral values that permit sustained progress are the objective values of an open society and more broadly of the enlightenment. No doubt, the [extraterrestrials'] morality would not be the same as ours, but nor will it be the same as that of 16th century conquistadors. It will be better than ours.

Finally, Deutsch summarizes the meaning of the overarching notion of "optimism" he has been using throughout the talk:

Optimism in this sense that I have argued for is not a feeling, is not a bias or spin that we put on facts, like, you know, half-full instead of half-empty, nor on predictions, it's not hope for the best, nor blind expectation of the best (in some sense it's quite the contrary, we expect errors). It is a cold, hard, far-reaching implication of rejecting irrationality, nothing else. Thank you for listening.

(No good questions in the quite long Q&A session. No LWers in the audience, I guess, or only the shy ones.)

Replies from: timtyler, XiXiDu, XiXiDu, curi, timtyler, curi, curi, curi, curi
comment by timtyler · 2011-04-15T14:31:58.828Z · LW(p) · GW(p)

Deutsch argues that the future is fundamentally unpredictable, that for example expected utility considerations can't be applied to the future, because we are ignorant of the possible outcomes and intermediate steps leading to those outcomes, and the options that will be available; and there is no way to get around this. The very use of the concept of probability in this context, Deutsch says, is invalid.

This bit starts about 12 minutes in. It is complete nonsense - Deutsch does not have a clue about the subject matter he is talking about :-(

comment by XiXiDu · 2011-04-10T11:02:18.891Z · LW(p) · GW(p)

Basically, you can't predict the moves of a good chess AI, otherwise you'd be at least that good a chess player yourself, and yet you know it's going to win the game.

This is a really good point. When I read it I first thought I would have to disagree; after all, we've designed the chess AI and therefore do understand it. But since I am currently reading Daniel Dennett's 'Darwin's Dangerous Idea', my next thought was that the urge to disagree stems from a general bias: assuming that a design is always inferior to its designer. But it should be obvious that our machines are faster and stronger than us, so why not better thinkers too?

Unlike the blind idiot God, we can pinpoint our own flaws and devise solutions, but we are also unable to apply them to ourselves effectively; that will be achieved by the next level of self-redesigning things. But even now our designs can be superior to us, as they mirror our own improved-upon capabilities: our skills minus our flaws. We are still able to understand our machines but unable to mimic their capabilities, because we've recreated some of our skills in them but haven't been able to benefit from the improvements we devised. We know that steel is tougher than bone; beware "steel" that knows this fact as well.

comment by XiXiDu · 2011-04-10T11:17:27.274Z · LW(p) · GW(p)

Basically, you can't predict the moves of a good chess AI, otherwise you'd be at least that good a chess player yourself, and yet you know it's going to win the game.

I just realized you tried to make a different point here. That one can prove the behavior of computationally unpredictable systems. Reminds me of the following:

6) Disproving mathematical proofs within the terms of their own definitions. This falls within the realm of self-contradiction. No transapient has disproved the Pythagorean Theorem for Euclidean spaces as defined by classical Greek mathematicians, for instance, or disproved Godel's Incompleteness Theorem on its own terms. (Encyclopedia Galactica - Limits of Transapient Power)

Sounds reasonable, but I have no idea to what extent one could prove "friendliness" while retaining a degree of freedom that would allow a seed AI to recursively self-improve towards superhuman intelligence quickly. Intuitively it seems to me that the level of abstraction of a definition of "friendliness" will be somehow correlated with the capability of an AGI.

comment by curi · 2011-04-09T23:24:59.061Z · LW(p) · GW(p)

(Presumably, since the AIs are unpredictable, and technology, Optimism demands that we all live happily ever after.)

No. Deutsch's "principle of optimism" states:

All evils are caused by insufficient knowledge.

optimism demands that they can live happily ever after if they learn how. it does not predict that they will.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-04-09T23:57:26.786Z · LW(p) · GW(p)

Agreed. The "we all live happily ever after" inference does contradict Deutsch's idea, which I noticed a little after writing this, and so corrected the wording (before seeing your comment) thusly:

(Or, presumably, so Optimism demands, since the AIs are unpredictable, and technology.)

comment by timtyler · 2011-04-15T18:29:56.896Z · LW(p) · GW(p)

Possibly an example of the halo effect: the good guys are good, the progress is good, so the good guys will make faster progress than the bad guys.

This is surely a real effect. The government is usually stronger than the mafia. The army is stronger than the terrorists. The cops usually beat the robbers, etc.

comment by curi · 2011-04-09T22:12:45.494Z · LW(p) · GW(p)

(Possibly an example of the halo effect: the good guys are good, the progress is good, so the good guys will make faster progress than the bad guys. Quite probably, there was better reasoning behind this argument, but Deutsch doesn't give it, and doesn't hint at its existence, probably because he considers the conclusion obvious, which is in any case a flaw of the talk.)

He doesn't consider it obvious. He considers nothing obvious in general (in a serious, not vacuous way). This in particular he has thought about, not because it is obvious but because it isn't.

The basic reason "good guys" make progress faster than "bad guys" (in the sense of: immoral guys, like prone to violence) is that they have more stable, peaceful, cooperative societies that are better suited to making progress. It's because good values are more effective in real life.

There's discussion of this stuff in his book The Beginning of Infinity.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-09T22:40:46.273Z · LW(p) · GW(p)

The basic reason "good guys" make progress faster than "bad guys" (in the sense of: immoral guys, like prone to violence) is that they have more stable, peaceful, cooperative societies that are better suited to making progress. It's because good values are more effective in real life.

This sort of claim seems to run into historical problems. A lot of major expansionist, violent empires have done quite well for themselves. In modern times, some of the most "bad" groups have also done well. The Nazis in many ways had much better technology than the Allies. If they hadn't been ruled by an insane dictator they would have done much better. Similarly, if they had expanded just as much but waited to start the serious discrimination and genocide until after they had already won, they would likely have won. Similarly, in WW2, Japan did quite well for itself, and if a handful of major battles had gone slightly differently, the outcome would have been very different.

Or to use a different, but potentially more controversial example, in North America and in Australia, the European colonizers won outright, despite having extremely violent, expansionist policies. In North America, you actually had multiple different European groups fighting amongst themselves as well and yet they still won.

Overall, this is a pleasant, optimistic claim that seems to be depressingly difficult to reconcile with actual history.

Replies from: Randaly, Vladimir_M, curi
comment by Randaly · 2011-04-10T01:10:46.381Z · LW(p) · GW(p)

It's worth noting that most of the Nazi superiority in technology wasn't actually due to Nazi efforts, but rather due to a previous focus on technological and scientific development; for example, Germans won 14 of the first 31 Nobel Prizes in Chemistry, the vast majority of initial research into quantum mechanics was done by Germans, etc. But Nazi policies actually did actively slow down progress, by e.g. causing the emigration of free-thinking scientists like John von Neumann, Hans Bethe, Leo Szilard, Max Born, Erwin Schrodinger, and Albert Einstein, and by replacing empirically based science with inaccurate political ideology. (Hitler personally believed that the stars were balls of ice, tried to avoid harmful "earth-rays" mapped out for him with a dowsing rod, and drank a toxic gun-cleaning fluid for its supposed health benefits, not to mention his bizarre racial theories.) Membership in the Society of German Natural Researchers and Physicians shrank by nearly half between 1929 and 1937; during World War II, nearly half of German artillery came from its conquered neighbors, its supply system relied in part on 700,000-2,800,000 horses, its tanks and aircraft were in many ways technologically inferior to those of many of its neighbors, etc.

"If they hadn't been ruled by an insane dictator they would have done much better. Similarly, if they had expanded just as much but waited to start the serious discrimination and genocide until after they already had won they would have likely won."

But that's Deutsch's entire point: that that's what the "bad guys" do, what makes them the "bad guys". Sure, if Hitler hadn't been Hitler, or somehow not been human, German science wouldn't have been at a massive disadvantage. But I don't see much evidence that the "bad guys" have an advantage; at best, if you assume best-case conditions and that the "bad guys" don't act like humans, you get an equal playing field.

(And we see similar things among the other "bad guys" of history: Lysenkoism, the Great Leap Forward, etc.)

"Or to use a different, but potentially more controversial example, in North America and in Australia, the European colonizers won outright, despite having extremely violent, expansionist policies."

The conditions of that era no longer hold: nations are no longer isolated, the ideas of science/democracy/capitalism are fairly generally known, etc. It's also worth noting that the colonizers have since generally been transformed into "good guys".

Replies from: Vladimir_M, JoshuaZ, Desrtopa
comment by Vladimir_M · 2011-04-12T03:18:11.618Z · LW(p) · GW(p)

during World War II, nearly half of German artillery came from its conquered neighbors, its supply system relied in part on 7,000 horses,

According to this article published by the German Federal Archives, 2.8 million horses served in the German armed forces in WW2. The article also notes how successfully the German wartime propaganda portrayed the Wehrmacht as a high-tech motorized army, an image widely held in the public to this day, while in reality horses were its main means of transport.

comment by JoshuaZ · 2011-04-10T01:15:29.557Z · LW(p) · GW(p)

You make a very strong case that the Nazi example does go in the other direction. I withdraw that example. If anything it goes strongly in favor of Deutsch's point.

I'm not convinced by the relevancy of your point about the historical state during the colonization of North America. The point is not whether or not someone eventually transformed, the point is that violent, expansionist groups can win over less expansionist groups.

Replies from: curi
comment by curi · 2011-04-10T01:23:50.682Z · LW(p) · GW(p)

Deutsch's definition of "the bad guys" is not the most expansionist groups.

He would regard the colonizers as the good guys (well, better guys) because their society was less static, more open to improvement, more tolerant of non-conformist people, more tolerant of new ideas, more free, and so on. There's a reason the natives had worse technology and their culture remained static for so long: they had a society that squashes innovation.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-10T01:27:26.614Z · LW(p) · GW(p)

You'd have to convince me that they were more open to non-conformists. A major cause of the European colonization was flight of non-conforming groups (such as the Puritans) to North America where they then proceeded to persecute everyone who disagreed with them.

There's a reason the natives had worse technology and their culture remained static for so long: they had a society that squashes innovation.

I'm curious what you think of "Guns, Germs, and Steel" or similar works. What causes one society or another to adopt or even make innovations can be quite complicated.

Replies from: Randaly, curi
comment by Randaly · 2011-04-10T21:09:15.801Z · LW(p) · GW(p)

The Renaissance/much of modern science originated in Italy, not in England (thus, e.g. Galileo, da Vinci, etc.) And the Italian city-states of the time were fairly free: Pisa, Milan, Arezzo, Lucca, Bologna, Siena, Florence, and Venice were all at some point governed by elected officials. They were also remarkably meritocratic: as the influential Neapolitan defender of atomism Francesco D'Andrea put it, describing Naples:

There is no city in the world where merit is more recognized and where a man who has no other asset than his own worth can rise to high office and great wealth.

(Even if he's only boasting about his own city-state, it's significant that meritocracy was considered worth boasting about.)

Similarly, merchants, not priests, politicians, etc. were considered the highest status group: nobles up to and including national leaders (e.g. the Doge of Venice) dressed like merchants.

(Incidentally, the other factors you mentioned below also played a role: competition between city-states and the influence of outside science from Byzantium and the Islamic world showing what could be done. Nevertheless, Italian freedoms were also necessary: e.g. Galileo was only able to publish his ideas because he lived in the free Republic of Venice, where Jesuits were banned and open inquiry encouraged; he was persecuted and forced to recant his theories when he moved to Tuscany.)

comment by curi · 2011-04-10T01:46:29.558Z · LW(p) · GW(p)

read The Beginning of Infinity by Deutsch. It discusses that Diamond book and other similar works.

Yes European society was not favorable to non-conformists. One period I've studied, which is later (so, i think, better in this regard) is around 1790 ish. At that time, to take one example, the philosopher william godwin's wife died in childbirth and he published memoirs and people got really pissed off because she had had sex out of wedlock and stuff along those lines. when godwin's daughter ran off with shelley there were rumors he had sold her. meanwhile, for example, there was lots of discrimination against irish catholics. i know some stuff about how biased and intolerant people can be.

but what i also know is a bit about static societies (again, see the book for more details, or at least check out my website, e.g. http://fallibleideas.com/tradition).

when a society doesn't change for thousands of years that means it's even harsher than the european society i was talking about. preventing change for such a long period is hard. stuff is done to prevent it. the non-conformists don't even get off the ground. everyone's spirits are squashed in childhood -- thoroughly -- and so the adults don't rebel at all. if there were adults who were eccentric then the society simply wouldn't stay the same so long. european society was already getting fairly near fairly rapid changes (e.g. industrial revolution) when it started colonizing the new world.

Replies from: JoshuaZ, JoshuaZ
comment by JoshuaZ · 2011-04-10T02:01:12.046Z · LW(p) · GW(p)

when a society doesn't change for thousands of years that means it's even harsher than the european society i was talking about.

This doesn't follow. (Incidentally, I don't know why you sometimes drop back to failing to capitalize but it makes what you write much harder to read.) For example, if one doesn't have good nutrition then people won't be as smart and so won't innovate. Similarly, if one doesn't have free time people won't innovate. Some technologies and cultural norms also reinforce innovation. For example, having a written language allows a much larger body of ideas, and having market economies gives market incentives to coming up with new technologies.

Moreover, innovation can occur directly through competition. When you are convinced that your religion or tribe is the best and that you need to beat the others by any means necessary you'll do a lot better at innovating.

There's also a self-reinforcing spiral: the more you innovate the more people think that innovation is possible. If your society hasn't changed much then there's no reason to think that new technologies are easy to find.

There's no reason to think that Native American populations were systematically preventing change. There's a very large difference between having infrastructural and systemic issues that make the development of new technologies unlikely and the claim that "everyone's spirits are squashed in childhood -- thoroughly".

Replies from: curi
comment by curi · 2011-04-10T04:12:26.067Z · LW(p) · GW(p)

(Incidentally, I don't know why you sometimes drop back to failing to capitalize but it makes what you write much harder to read.)

I don't know either. I have noticed that I will often stop using capitals in parentheses, even if they contain multiple sentences or words that are supposed to be capitalized like "I". (you can see in the first parenthetical, and this one, missing capitalization, even though that first parenthetical in my previous comment is in a section of text where, otherwise, i was capitalizing.) I don't really care. I can capitalize when I want to impress people. Here I do not wish to impress. I want to filter people. If they can't look past some capitalization -- if they are shallow -- then let them dislike me and we'll go our separate ways quickly. You can, btw, looking through my history see that I've asked people tangential questions sometimes which might be taken as rude or aggressive. It's again for filtering purposes. I don't regard offending a portion of the people here as a bad thing, but a good thing. Then when a few people like me better and keep talking with me, my tone changes somewhat, and I'll write stuff like this which is more open, cooperative and non-confrontational. Then one thing that will happen is other people, who I didn't write this for, will jump in and find it arrogant, condescending, and so on. But I think you (JoshuaZ) might appreciate these remarks. No guarantees, but worth a try.

For example, if one doesn't have good nutrition then people won't be as smart and so won't innovate. Similarly, if one doesn't have free time people won't innovate.

Where does free time come from? Where does better nutrition come from? Ideas.

Here's an example from BoI: llamas. South America had llamas. Why didn't they spread? Why didn't they get sold to distant towns, and bred to have more, and used to save tons of labor and create more free time? It's not for lack of suitable animals that people were doing more hand labor in some places than others. It's for lack of ideas.

Some technologies and cultural norms also reinforce innovation. For example, having a written language allows a much larger body of ideas, and having market economies gives market incentives to coming up with new technologies.

Yes, that's just my point. Things like written languages, technological ideas, and pro-progress cultural norms aren't natural resources provided by Nature. They are ideas people have. And they make all the difference.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-10T04:15:33.811Z · LW(p) · GW(p)

You can, btw, looking through my history see that I've asked people tangential questions sometimes which might be taken as rude or aggressive. It's again for filtering purposes. I don't regard offending a portion of the people here as a bad thing, but a good thing. Then when a few people like me better and keep talking with me, my tone changes somewhat, and I'll write stuff like this which is more open, cooperative and non-confrontational. Then one thing that will happen is other people, who I didn't write this for, will jump in and find it arrogant, condescending, and so on.

I find this attitude very surprising. Can you explain what it is that works for you about posting this way?

Replies from: curi
comment by curi · 2011-04-10T04:33:33.028Z · LW(p) · GW(p)

Gets rid of people I won't get along with quickly instead of slowly. Filters people.

It's similar to my attitude to small talk. Small talk conventions are designed, roughly, so that people can hold polite conversations no matter how much they disagree! That's not what I want at all. I want to find out if we disagree, find out if you are interested in cooperating with the real me, and sort through many people to find the ones who can do things like respond well under pressure rather than respond well to easy smalltalk, who can deal with disagreement well or agree with me, and so on.

There were a few people I inspired to flame me. I know I provoked them. I didn't actually do anything that deserves being flamed. But I broke etiquette some. It's not a surprising result. Flaming me for some of the things I did is pretty normal. (Btw a few of the flames were deleted or edited a bit after being posted.) Some people would regard that as a disaster. I regard it as a success: I stopped speaking to those people. If I'd been super polite they might have pretended to have a civil discussion with me for longer while having rather irrational thoughts going through their head. The more they hide emotional reactions (for example), while actually having them, the more discussion can go wrong for unstated reasons.

edit: maybe i should add that i think exceptional individuals are more worthwhile to talk to than mediocre ones. i'd rather have one person with some exceptional traits (even if he also has some exceptionally bad traits, btw. even if his average quality isn't good) than 20 average people who don't have much variance. one really good idea matters more than all the rest.

Replies from: Desrtopa, Swimmer963
comment by Desrtopa · 2011-04-10T14:33:22.581Z · LW(p) · GW(p)

There were a few people I inspired to flame me. I know I provoked them. I didn't actually do anything that deserves being flamed. But I broke etiquette some. It's not a surprising result. Flaming me for some of the things I did is pretty normal. (Btw a few of the flames were deleted or edited a bit after being posted.)

This reliably decreases your chance of changing minds and having your own mind changed. It creates an adversarial Us vs. Them mentality which limits the degree to which either of you is open to the other's arguments. Perhaps it doesn't feel to you like you're closing yourself off and making yourself less inclined to change your mind, but this happens to people quite reliably, and you strongly appear to be exhibiting it in your debates. You try to kick holes in the arguments of others, and not just reject the arguments but behave insultingly towards others for making them, when you could be asking "is there any reasonable way I could modify this argument so that it would retain the same point and not have this flaw?"

This behavior will tend to drive away people who're concerned with civility for its own sake, and people who're interested in fruitful debates that share meaningful ideas and change people's minds.

I have a record of online debates of comparable magnitude to your own, and one flaw that I have had to address in myself is the tendency to persist in hammering disagreements out ad nauseam. If you had visited the forum I frequented four years ago, the debate could have dragged on for days, and would almost certainly have been wasted, because we would both have walked away convinced that we won the argument, having not changed our minds at all. The point of arguments is not to convince yourself you argued better; it's to see to it that people learn something and someone changes their mind, and if this doesn't happen, everyone involved loses. I have learned that arguing with people who demonstrate your conduct overwhelmingly tends to be a waste, which is why I'm no longer going to bother discussing Popper with you, but I am going to suggest that if you want to engage in fruitful debates, you should reconsider this approach.

Replies from: curi, curi
comment by curi · 2011-04-10T19:46:56.043Z · LW(p) · GW(p)

Two people I know, who do not write in the same style I used here lately, also have 0 karma.

http://lesswrong.com/user/mlionson/

http://lesswrong.com/user/brianScurfield/

Style complaints are a red herring; they are a way to complain and criticize independent of what the issues actually are. Downvotes happen across multiple styles. Respect the evidence.

Replies from: Desrtopa
comment by Desrtopa · 2011-04-10T19:55:36.877Z · LW(p) · GW(p)

I haven't followed mlionson's comments, but Brian Scurfield was similarly downvoted for making erroneous arguments and not following up on requests to inform himself so he would be equipped to meaningfully participate in the discussions, and for unnecessarily promoting an Us vs. Them mentality, which has been explicitly noted in the responses to his comments as well as yours. There are other ways than rudeness to be downvoted, but this does not mean that rudeness does not encourage downvotes.

I and others have been quite willing to criticize your contributions on the basis of content, but your conduct has been such that people are increasingly deciding that it's not worthwhile. If you want your content to be addressed, signal that you are prepared to participate in a fruitful conversation.

Replies from: curi
comment by curi · 2011-04-10T20:20:23.160Z · LW(p) · GW(p)

Do you know of any published work by a Bayesian criticizing Popper, which you think is correct?

No one here posted any rigorous criticisms of Popper. They just complained about my summaries, being unaware of the published details they didn't yet understand. And I know how much you guys claim to like rigor, so there should be one, right?

comment by curi · 2011-04-10T19:15:31.616Z · LW(p) · GW(p)

FWIW I already knew everything you said here.

And yet I acted as I did anyway. For what I deem to be rational reasons. Which I knew in advance, and did not create afterwards as an excuse. And I also knew in advance that I could use other styles if I wanted -- I have done so and am in fact currently doing so at other places.

I wonder, how do you explain that? Do you think I might know something you don't? Do you think you might be wrong about some aspect of this?

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-10T04:44:46.187Z · LW(p) · GW(p)

I suppose that makes sense if that's the way you view things. I happen to enjoy small talk, now that I'm good at it. I really value the ability to have conversations with people I disagree with, because the last thing I want to do at this point in my life is shut my opinions off from change. (This might have to do with my age: I am neither old enough, nor smart or experienced enough to be right all the time, or even most of the time, and I know that.)

And yeah, if I think the other person is wrong, I want them to change their mind...but being amenable to changing my own mind in response to their arguments (if valid) works better than upfront confrontation. (I try not to make this true of myself...I don't want to miss out on learning about someone else's worldview just because they're more confrontational than I am.)

If I'd been super polite they might have pretended to have a civil discussion with me for longer while having rather irrational thoughts going through their head.

Agreed. I guess a lot of the time, I want to have a civil conversation for longer because a) I enjoy civil conversation for its own sake, and b) eventually I'll notice that they're having irrational thoughts and emotional reactions, and if I want to I can ease that into the conversation without necessarily provoking a confrontation. (I am fairly good at this face-to-face, but the subtle emotional cues don't carry through to online posting so much, which might be why "discussion can go wrong for unstated reasons." Face-to-face, the unstated reason would still be noticeable, if you're looking for it, long before the actual confrontation.)

I'm not saying there's necessarily anything wrong with your way of doing things...just that it wouldn't work for me, because I hate confrontation and I would regard being flamed as a disaster...I have this annoying tendency to care about anything that anyone says to or about me.

Replies from: curi
comment by curi · 2011-04-10T04:57:24.267Z · LW(p) · GW(p)

I happen to enjoy small talk, now that I'm good at it.

A cultural bias.

Well, sort of. It's genuinely useful for accessing some things our culture restricts access to. Like friends, good conversations (often people won't talk to you seriously, in person, at first, until they feel more comfortable with you. internet forums often do a good job of circumventing this though) or sex. It's a lot easier to get sex if you are good at small talk. And if you genuinely enjoy it, that helps even more. People like genuine conformists because they do a better job of conforming! (Usually. Faking it is so much harder, and takes way more skill.)

I really value the ability to have conversations with people I disagree with, because the last thing I want to do at this point in my life is shut my opinions off from change.

I'm not trying to filter by disagreement. I like to find people who agree because I could use more of those, and I do have enough access to people who disagree (it's no trouble at all to come here, or many other forums, and find people to disagree with me).

Talking to people I disagree with isn't so hard. I spend a lot of time debating with people who don't agree with me. And I can even be non-confrontational if I want to. Sometimes I go to new groups and just listen for a while to see what they are like without being disturbed. But I've been familiar with Less Wrong culture since before the Less Wrong website existed, so I'm not missing anything but interfering with the normal culture here (besides, if I want to know the normal culture, I can just go read the Sequences and other static content. or just stop posting and lurk on new threads.)

if I want to I can ease that into the conversation without necessarily provoking a confrontation

Too much work to help one person, who probably doesn't want your help, and won't appreciate it, IMO.

I am fairly good at this face-to-face, but the subtle emotional cues don't carry through to online posting so much

I'm actually better at picking them up in text than IRL. It's a different skill. I practice it in text a lot. I've been known to, when I get bored with low quality content from people, start replying with little but psychological analysis of their posting. They'll usually reply a few times before they stop speaking to me, and I can get good feedback about how much of my initial guesses were correct.

I hate confrontation

You could change this. It's not human nature. It's not your genes. It's a cultural bias. A very common one. And it's important because criticism is the main tool by which we learn. When all criticism has to be made subtle, indirect, formal, filled with equivocation about whether the person stating it really means it, or various other things, then it slows down learning a lot.

I have this annoying tendency to care about anything that anyone says to or about me.

You know, Feynman had this problem. He got over it. Maybe reading his books would help you. One of them is titled like "What do you care what other people think?"

Not caring isn't just his advice. The title is something his first wife often said to him, because he had a problem with it. She kept reminding him. He got better at it eventually. It wasn't easy but he did it.

but being amenable to changing my own mind in response to their arguments (if valid) works better than upfront confrontation.

I am open. One thing is you're seeing me after 10 straight years of online debate. It's gotten to the point that I rarely am told any argument I don't already know by a stranger. Early on I changed my mind a ton. It got gradually less frequent. I like to be wrong, I like to concede debates. I enjoy conceding. I'm tired of not losing debates; it's dull and I learn less. It's so much fun to be like, "Oh I get it now! That's even better than what I used to think!" But, well, there's no easy solution to getting more of that.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-10T13:11:54.903Z · LW(p) · GW(p)

I like to find people who agree because I could use more of those.

What is it about your beliefs that so many people disagree with? I haven't seen anything particularly extreme so far.

I'm tired of not losing debates; it's dull and I learn less. It's so much fun to be like, "Oh I get it now! That's even better than what I used to think!" But, well, there's no easy solution to getting more of that.

I have many years to go before I run up against this problem...and I probably never will entirely, since I suspect much of the LW community is genuinely smarter than me. I agree that the feeling of suddenly grasping a new solution is awesome and it sucks not to have it, but I can't suggest anything other than reading a ton of books on stuff you don't know yet...which might be hard to find if your general knowledge is already at a high level.

You could change this. It's not human nature. It's not your genes. It's a cultural bias.

Considering that I was raised in pretty much the same environment as my brother and sister, I think there's got to be some genetic influence on why my personality is so drastically different. On the Big Five standardized personality test, I score high on Agreeableness and Conscientiousness and low on Extroversion (whether or not that means anything is another question...) and I doubt I can change that.

In a lot of ways I'm a non-conformist; I stick to my own routine even if it makes me stand out from the crowd of people my age. For example: I don't wear makeup, I don't shave my armpits, I buy my clothes at Value Village, etc. People do make comments about these things, and I really and honestly don't care what they think. I do care if I hurt people's feelings. Considering that I'm studying to be a nurse, a field where empathy is essential, I don't want to change that. Another thing I know about myself is that I have trouble acting differently in different circumstances, partly out of a stubborn belief that I shouldn't have to. I don't want to train myself to be less sensitive only to find that I treat my patients insensitively. And I don't have problems online anyway...I do frequently disagree with people, and my agreeableness instinct just kicks in and helps me phrase it in a way that isn't going to antagonize the person before they even get to my point. I really hope there are people on LW who are mature enough to look past the way something is phrased, but I don't know so I don't take the risk.

I'm actually better at picking them up in text than IRL. It's a different skill. I practice it in text a lot. I've been known to, when I get bored with low quality content from people, start replying with little but psychological analysis of their posting.

What's your psychological analysis of my comments??? I'm serious, I'm actually really curious. This is valuable knowledge about myself that I want. And yeah, I can see that 10 years of online debate would make you really good at seeing through to the emotions behind the text.

Replies from: curi
comment by curi · 2011-04-10T18:22:21.703Z · LW(p) · GW(p)

What is it about your beliefs that so many people disagree with? I haven't seen anything particularly extreme so far.

Oh there's various things, but the main issue is people just plain don't already know stuff (like Popper's philosophy) and learning a lot of material is a big challenge most people won't approach. Not knowing stuff leads to many disagreements with all the ideas they don't know.

It's not exactly their fault not to already know a lot. I don't usually expect to find people who already do (though someone who had already read, say, all of Popper's books would certainly be possible to run into). The key issue for me is their attitude to changing this. Learning a lot is a big project. One has to have patience and tolerance for disagreeing. For example, one has to react rationally to new ideas that he misunderstands or misreads rather badly. He needs to get the misunderstanding sorted out instead of getting offended. If he doesn't, he's going to misread something sooner or later and give up.

One thing I've noticed is a lot of people refuse to ask questions. They don't know what I mean, and they won't ask, they just argue with a (pretty silly) misconception of what I mean (usually based on what many people in our culture would mean, and ignoring that there's a few contradictions between what I literally said and their interpretation). Conversations without questions usually don't go anywhere good.

On the other hand, a lot of people react badly to questions. I'll often not know quite what someone meant, or think there is some ambiguity, and ask them to clarify, or say more. Lots of people don't like that and won't give good answers -- like, often they will just start talking but not directly engage with specifically what question you asked. Another common reaction, once conversations have been going a while, is "i already answered that" with no quote or link. Some people think my questions are hair splitting and won't answer -- they don't have an attitude of wanting to improve one small step at a time (Popperian piecemeal, gradual improvement). Another common result of asking questions is people are in the mindset of arguing (not explaining) and so they will keep trying to argue with me. And since I'm asking questions, not expressing a position, they will have to make rather wild guesses and assumptions about what my position is and argue with that...

When people don't agree with me on issues like the right attitude to questions, and in general what a rational discussion consists of, and how much time and effort one should put into learning over a long period, and what are good criteria for giving up on someone and losing patience, then it's hard.

What's your psychological analysis of my comments?

You seem pretty culturally normal so far, except without saying anything ridiculously dumb in the first 5 minutes (which is perhaps more common. so, maybe you're better than average. for the self-selected group of non-lurkers on public internet discussion places. and the non-lurker group is already better than average, i think). Nothing much jumped out at me.

I could say something like you have good empathy skills since you were thinking about what I was saying and why, which most people here haven't really done. Maybe that would sound like convincing psychological analysis. But I don't really know if it's true. The same behavior could be explained by good rationality skills. Or by getting lucky -- maybe you have a bunch of buttons to push but happened not to read my comments that would have annoyed you.

My psychological knowledge is more focussed on what I actually use: noticing stuff relevant to some argument. It's not exactly personality analysis in the way those personality tests do it. You seem pretty calm so far, no big danger signs, though it's hard to tell if you'll continue replying much. It's hard to explain why I have some doubt there. A lot of agreeable people don't like to push issues into too much depth to the point of bringing out disagreements and then discussing them.

Just checked your karma though. With that much you must discuss a fair amount, unless your account is really old or you're good at writing popular top level posts that get 10 points per vote. That's something I have less experience with. Usually it's the confrontational people who get in arguments and post a ton.

One of my least favorite things about most of my friends is they don't reply very much to stuff they agree with. If you post something dumb most of them can argue with it. They can talk with idiots quite well. But post something high quality and many usually don't discuss in any way at all. I figure they should have options. Too advanced for them? Ask a question. Too simple? Post a further implication I left out. Exactly on their level? Elaborate on a tangent, or explain it in their own words to get a better grasp on it. When I try explaining this issue itself, I get few to no replies.

Do you know anything about that issue?

There was your comment:

suspect much of the LW community is genuinely smarter than me

This kind of humility can be a virtue. But, if this and your other comments about wanting to learn and be open minded are representative, it easily puts you in the top 20%, especially counting lurkers. Maybe far higher.

There's some dangers here. I think it's literally a false statement (though it could be the case that you have less math knowledge than the average person here, or something. But less pre-existing knowledge is different than being less smart which is more about attitudes to learning and some non-subject-specific stuff.) When people say false things, it can be revealing. Do they want to believe it? Are they under pressure to believe it? Maybe you think that kind of statement makes you a good person. Maybe you have the common psychological attitude where people think "I'm no one special. Not very important. My arguments can be sloppy since I'm no expert and not expected to be. I won't and don't have to meet world class standards. I won't pursue a project of trying to get to the top since that's not me." I'm not especially suggesting this is accurate. I don't see enough evidence to rule out other possibilities. With a lot of people guessing very culturally normal flaws is really reliable. But since you're reacting to me somewhat better than most people, so far, and haven't said a bunch of false stuff, I'm less inclined to assume a bunch of flaws.

Considering that I was raised in pretty much the same environment as my brother and sister, I think there's got to be some genetic influence on why my personality is so drastically different.

This is not a precise statement. You were not raised in "pretty much the same environment". You were raised in an environment sharing some common features at a high level. There were also many, many subtle differences. As William Godwin pointed out, if you go to a meadow with your sibling, you'll be standing in different places and thus get different visual input. Another factor is that parents in our culture often have different attitudes to first children vs later children.

You may be making an assumption like, "small differences in environment probably don't matter much". But they can snowball if they start at a very young age. There can be feedback loops. A small difference in environment creates a small difference in you. That small difference in you inspires a small difference in your parent's parenting behavior. That small difference in parenting behavior causes another small difference in you. Which causes another small change in parenting behavior. And so on.

I doubt I can change that

I think this kind of thing (combined with your attitude to genetic traits) is a common attitude here. But having investigated the field, basically none of the science for it is correct. Most is blatantly irrelevant: not capable of reaching the conclusions it purports to reach based on the evidence it purports to be using. Would you be interested in discussing that? If so I would suggest either you post what you think is a good argument (be it a cite of a study, or something else). Or if you prefer, you read and comment on this: http://cscs.umich.edu/~crshalizi/weblog/520.html

For example: I don't wear makeup

Oh you're a girl? I hadn't noticed lol (I was reading your comment partially out of order and just got here). I wonder if there was any evidence in your previous discussion with me that should have tipped me off. Girls in our culture are under pressure to be less ambitious and not too smart, and more non-confrontational. And to have more empathy. Maybe I should have taken those as evidence, but they're all pretty common with men too.

I really hope there are people on LW who are mature enough to look past the way something is phrased, but I don't know so I don't take the risk.

I tested that some. Results not promising. But anyway this reminds me of an important issue. Most conformists conform more than necessary. If you really want to get to the very top of a social hierarchy, over achieving can be good. But if you want to do enough to fit in, but would also like the maximum risk-free freedom, then it's important. The reason they do more than necessary is they never test where the borderline is. If they found it's 200 units away, then maybe they could go 100 units closer with plenty of margin for error. You have to sometimes offend people to find out where the limits are (or watch someone else test it).

Replies from: Yvain, Swimmer963
comment by Scott Alexander (Yvain) · 2011-04-10T22:40:22.261Z · LW(p) · GW(p)

Oh there's various things, but the main issue is people just plain don't already know stuff (like Popper's philosophy) and learning a lot of material is a big challenge most people won't approach. Not knowing stuff leads to many disagreements with all the ideas they don't know.

Have you considered writing posts about it? So far most of your posts have been about why Popper's philosophy is great, not about exactly what it is. A good introduction to Popperian philosophy would be less controversial and more useful.

Replies from: prase, curi
comment by prase · 2011-04-11T00:19:32.510Z · LW(p) · GW(p)

I am afraid it wouldn't work, at least for me: first because I am probably already biased against curi, and perhaps even against Popper, due to the style of the recent debates, and second because I don't believe that curi represents Popper's philosophy accurately. Still, I would like to read a post written by someone who understands Popper explaining what his Critical Rationalism is in detail. If curi wants to write it, he'd better wait some time until emotions evaporate, create a new account for the occasion, and completely change his attitude to discussion. This is not likely to happen, though, at least if curi's statement about being rude on purpose to filter out people he "can't use" is to be taken seriously.

But someone knowledgeable of Popper should definitely write about it to settle this thing for good.

Replies from: Desrtopa
comment by Desrtopa · 2011-04-11T00:21:59.110Z · LW(p) · GW(p)

Is lukeprog familiar with Popper? I think he's the most likely here to have the background for it, but expect that whatever plans he's already got lined up are more productive.

comment by curi · 2011-04-10T23:02:11.771Z · LW(p) · GW(p)

See:

http://fallibleideas.com/

Popper's books.

Bryan Magee's short introductory book on Popper.

David Deutsch's books.

My blog: http://curi.us/

Of course I've considered writing more things. I plan to.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-10T22:26:52.240Z · LW(p) · GW(p)

Oh there's various things, but the main issue is people just plain don't already know stuff (like Popper's philosophy) and learning a lot of material is a big challenge most people won't approach.

Guilty as charged. I couldn't tell you a single fact about Popperian philosophy, other than it being a controversy on LessWrong. In general, I find philosophy dense and difficult to understand (maybe because I think more concretely than abstractly) but if you could recommend a book or webpage that presents the ideas clearly with some concrete examples, I would love to check that out.

I think this kind of thing (combined with your attitude to genetic traits) is a common attitude here. But having investigated the field, basically none of the science for it is correct. Most is blatantly irrelevant: not capable of reaching the conclusions it purports to reach based on the evidence it purports to be using. Would you be interested in discussing that?

I would love to discuss that once I've had time to do the research...I'm at work right now and my break only lasts another 5 minutes, so I'll get back to you sometime tomorrow or the next day. This isn't a good week, I have 3 exams and a paper due, but I'll find time.

But, if this and your other comments about wanting to learn and be open minded are representative, it easily puts you in the top 20%, especially counting lurkers.

Open-mindedness and curiosity are one thing. Raw native intelligence is something different. I might be above average on the first two, but I expect I have less of the second than the average LWer. For example, I would love to understand the math of quantum mechanics, but it's hard for me and really learning it, if I decided to, would likely be a multi-year endeavour. Same with computer programming...I would love to actually be able to do it, but it doesn't come super easily.

Got to go...I have to go teach first aid to 13-year-olds! I'll reply to the rest of your comment later.

Replies from: curi, curi
comment by curi · 2011-04-11T19:58:35.529Z · LW(p) · GW(p)

I find philosophy dense and difficult to understand ... if you could recommend a book or webpage

In that case I'd suggest starting with:

http://fallibleideas.com/

(try it and see if the style/approach appeals to you, if not no worries) or

http://www.amazon.com/Popper-Modern-masters-Bryan-Magee/dp/0670019674

(This summary book on Popper is only 115 pages. The easiest to read book option.)

Open-mindedness and curiosity are one thing. Raw native intelligence is something different. I might be above average on the first two, but I expect I have less of the second than the average LWer. For example, I would love to understand the math of quantum mechanics, but it's hard for me and really learning it, if I decided to, would likely be a multi-year endeavour. Same with computer programming...I would love to actually be able to do it, but it doesn't come super easily.

I think you're mistaking subject specific skills for raw native intelligence. Being good at math and programming isn't what intelligence is about. They are specific skills.

BTW I believe most educational material is quite bad and makes stuff far harder and more confusing than necessary. And for quantum physics in particular the situation is pretty terrible (if you want to learn it in depth; there's OK popular science books for a lower level of detail). The situation with programming is better: there's way more self taught programmers and more non-academic efforts to try to create material to help people learn programming, which I think are often more successful than the stuff schools put out.

I would equate intelligence with basically how good one is at learning in general, without giving priority to some fields. I think open mindedness and curiosity are crucial traits for that. A lot of people aren't much good at learning in general, but have a specific field or two where they do OK. They can be impressive because in the area where they are rational they gain a lot of expertise and detailed knowledge. But I don't regard them as more intelligent than broader people.

You find math hard to learn. But most mathematicians find various things hard to learn too, such as (commonly) social skills. Most people are more impressed by math knowledge than social knowledge because it's more common. Most people learn social skills; it's nothing special. Yet that doesn't really imply math is harder. More people try hard to learn social skills. And more people -- girls especially -- are alienated from learning math at a young age by their teachers.

Whatever topics one is bad at learning, I don't think it's normally caused by intelligence itself. I think raw native intelligence is itself a misconception and that the hardware capabilities of people's brains don't vary a lot and the variance doesn't have much practical consequence. Rather, I think what people call "intelligence" is actually a matter of their philosophical theories and rationality, especially either general purpose ideas (which allow one to be good at many things) or ideas in specific fields people are impressed by (e.g. math).

What I think causes people to have trouble with math, or social skills, or other things, besides the inherent difficulty of the subjects, is irrationalities, caused largely by external pressure and cruelty. Those people who have trouble learning social skills were teased as children, or had trouble finding friends, or something. They did not try to learn to interact with others in an environment where everyone was nice to them, and they could fail a bunch of times with no harm coming to them, and keep trying new things until they got it. With math, people are forced to do things they don't want to like unpleasant math homework and math tests. They don't get to learn at their own pace for their own intrinsic motivations. This commonly alienates people from the subject. Causes like these are cultural.

Replies from: Jonathan_Graehl, Swimmer963
comment by Jonathan_Graehl · 2011-04-12T00:47:21.098Z · LW(p) · GW(p)

Rather, I think what people call "intelligence" is actually a matter of their philosophical theories and rationality, especially either general purpose ideas (which allow one to be good at many things) or ideas in specific fields people are impressed by (e.g. math).

Have you looked at the evidence that this is false? Or is your belief not falsifiable? :)

Replies from: curi
comment by curi · 2011-04-12T01:05:16.389Z · LW(p) · GW(p)

It is primarily a philosophical belief. It can be falsified by criticism. It could in theory be falsified using scientific tests about how brains work, but technology isn't there yet. It could also in theory be falsified if, say, people were dramatically different than they are. But I'm not relying on any special evidence in that regard, just basic facts of the world around us we're all aware of. (For example, people commonly hold conversations with each other and partially understand each other. And then learn new languages. And children learn a first language. And so on. These things contradict some views of the mind, but they also allow for many including mine.)

BTW Popper never said all ideas should be (empirically) falsifiable. That's a myth (which you didn't say, but perhaps hinted at, so worth mentioning). He said that if they can't be then they aren't science, but he did not intend that as an insult, and he himself engaged in a lot of non-science.

In some special cases, saying something is non-science is a good criticism. Those cases are when something claims to be science as part of its argument for why it's right, and as part of its way of presenting itself. If it claims to be science, but isn't, that's a problem. Popper's favorite examples of this were the ideas of Marx, Freud and Adler, which made specious claims to scientific status.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-04-12T02:57:25.071Z · LW(p) · GW(p)

You're right - I was only teasing, except that I think there is plenty of suggestive evidence for a meaningful innate G (even though it's a sum of various types of health, and not only genetic, much less the sum of just a few SNPs). I was thinking of falsifiability because it seems to me that in response to any study that seems to segregate people by G and measure their outcomes later, you'd just say "they were already on the path toward having a sane+rational set of beliefs+practices".

I've held a tentative version of your view (that nearly anyone could in principle learn to be smart) in the past. I've moved away from it as I've read more, but I still think there's a great deal of difference in ability to observe or judge truth, at equal native mental talent, between someone with a workable set of beliefs and skills, and someone who's tied to enough screwed-up beliefs and practices. (probably everyone sees this)

Your unusual behavior at first made me underestimate your competence. My heuristics usually save me a great deal of time, so I won't apologize for them, but it was diverting having them tested.

I've read a single book of Popper's (something like Open Society + its Enemies) and took away from it that he was smart and disliked Plato. So I don't think I understand what it is you like about him, or why it would be useful for me to know more of what he wrote.

Replies from: curi
comment by curi · 2011-04-12T03:26:27.776Z · LW(p) · GW(p)

I would also say that measuring outcomes is a hard issue -- e.g. you have to decide what is a good outcome. And all sorts of stuff interferes. Some people are too smart -- in a sense -- which can lead to boredom and alienation because they are different from their peers. There may be a sweet spot a little above average but not too far. Sometimes really exceptional people have exceptional outcomes, but sometimes not. I wouldn't predict in advance that the smartest people will have the most successful outcomes, by many normal measures of good outcomes.

There's a saying: The B students work for the C students. The A students teach.

The first thing I'd want to know about any potential study is basically: what are you going to do and why will it work? They need philosophical sophistication to avoid all kinds of mistakes. Which is just what the Conjunction Fallacy papers lack, as well as, e.g., many heritability papers.

I've read a single book of Popper's (something like Open Society + its Enemies) and took away from it that he was smart and disliked Plato.

That must have been volume 1 only. Volume 2 criticizes Marx and Hegel.

Popper's biggest strength is his epistemology. He solved the problem of induction, identified and criticized the justificationist tradition (which most people have been unconsciously taking for granted since Aristotle), and presented a fallibilist and objective epistemology, which is neither authoritarian nor skeptical, and which works both in theory and practice. His epistemology also integrates well with other fields -- there are interesting connections to physics, evolution, and computation (as discussed in Deutsch's book The Fabric of Reality), and also to politics, education, human relationships (in the broadest sense; ways people interact, cooperate, communicate, etc) and morality.

A good place to start reading Popper is his book Conjectures and Refutations. It is a collection of essays, the first of which is long and covers a lot of epistemology.

Another good place to start is Bryan Magee's short book on Popper. And another is David Deutsch's books which explain epistemology and many other things.

My heuristics usually save me a great deal of time, so I won't apologize for them

Yes I know what you mean. I'm sure I dismiss some people who are worthwhile (though I use rather different heuristics than you, and I also tend to give people a lot of chances. One result of giving lots of chances is I can silently judge people but then see if my judgment was wrong on the second or third chance). I think the important things are that you have some ability to recognize when they may not be working well, and that after they fail in some respect you look for a way to change them so they don't make the same mistake again. Changing them not to repeat a mistake, while still saving lots of time, can be hard, but it's also important.

One thing about G is that it's extremely difficult to disentangle parenting factors. When you intelligence test people at age 8, or 12, or 20, they've already had years and years of exposure to parenting, and often some school too. That stuff changes people, for better or worse. So how are you to know what was innate, and what wasn't? This is a hard problem. I don't think any experimental social scientists have solved it. I do think philosophy can address a lot of it, but not every detail.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-04-12T08:05:42.228Z · LW(p) · GW(p)

One thing about G is that it's extremely difficult to disentangle parenting factors

Right. Thus the obsession with twin studies.

As for your complaint about lack of (philosophical) rigor on the part of psychologists and other scientists, I'm often shocked at the conclusions drawn (by motivated paper authors and hurried readers) from the data. In theory I can just update slightly on the actual evidence while not grasping the associated unproven stories, but in practice I'm not sure I've built a faithful voting body of facts in my brain.

Thanks for the Popper+Deutsch recommendations.

Replies from: curi
comment by curi · 2011-04-12T08:55:18.095Z · LW(p) · GW(p)

Thus the obsession with twin studies.

But they do not solve the problem. They only seem to at low precision, without much rigor. They are simplistic.

For example, they basically just gloss over and ignore the entire issue of gene-meme interactions, even though, in a technical and very literal sense, most stuff falls under that heading.

What basically happens -- my view -- is genes code for simple traits and parents in our culture react to those different traits. The children react to those reactions. The parents react to that new behavior. The children react to that. The parents react to that. And so on. Genetic traits -- and also trivial and, for all intents and purposes, random details -- set these things off. And culture does the rest. And twin studies do not rule this out, yet reach other conclusions. They don't rule out my view with evidence, nor argument, yet somehow conclude something else. It's silly.

Sometimes one gets the impression they've decided that if proper science is too hard, they are justified in doing improper science. They have a right to do research in the field! Or something.

Disagree? Try explaining how they work, and how you think they rule out the various possibilities other than genetic control over traits straight through to adulthood and independent of culture.

There are other severe methodological errors too. You can read some here: http://cscs.umich.edu/~crshalizi/weblog/520.html

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-12T02:56:13.215Z · LW(p) · GW(p)

I would equate intelligence with basically how good one is at learning in general, without giving priority to some fields. I think open mindedness and curiosity are crucial traits for that.

Maybe I was above average in, say, my high school graduating class, but I doubt I'm above average within the Less Wrong community. People wouldn't be here if they lacked that degree of open-mindedness and curiosity.

They don't get to learn at their own pace for their own intrinsic motivations. This commonly alienates people from the subject. Causes like these are cultural.

Would you like to comment on how non-Western cultures view math differently? Or offer a suggestion as to why I was the only white girl in my high school calculus and vectors class? (I like math a lot...it's just that most people who like math like it because they're good at it, so the only people who want to talk to me about math and how awesome/fascinating it is are usually massively better at it than I am, which may be why I perceive myself as not being good at it.)

I do have stubbornness, which can be an advantage to learning new things (I spent 8 years teaching myself to sing, and went from complete tone-deafness to composing my own piano and vocal pieces and performing moderately difficult solos.) I am also stubbornly loyal to prior commitments, which basically means that once I start doing something I never stop...after a while this limits my ability to start new things. (I can't teach myself quantum mechanics while I'm working 2 jobs, singing in a church choir, and going to school full-time.)

And for quantum physics in particular the situation is pretty terrible (if you want to learn it in depth; there's OK popular science books for a lower level of detail).

Agreed! I ran into exactly this problem; I've read enough pop science books that I no longer learn anything new from them, but when I took a textbook out of the university library, I took one look at the first page and was lost. Eliezer's intro to quantum mechanics would probably help, if I made the commitment to go through it entirely and practice all the math, but again, not something I can do very easily on my breaks at work.

Replies from: curi
comment by curi · 2011-04-12T04:07:30.257Z · LW(p) · GW(p)

People wouldn't be here if they lacked that degree of open-mindedness and curiosity.

Some might. Joining might make them feel good about themselves, and help them feel open minded.

Would you like to comment on how non-Western cultures view math differently?

I don't know a lot. Asian cultures value school highly, and value math and science highly, and pressure children a lot. Well, actually I only know much about Japan, South Korea and China. The school pressure on children in Japan itself is much worse than the well-known pressure on Asian children in the US, btw.

Or offer a suggestion as to why I was the only white girl in my high school calculus and vectors class?

Culture. Beyond that, I don't know exactly.

it's just that most people who like math like it because they're good at it

I think cause and effect goes the other way. Initially, some people are more interested in math (sometimes due to parental encouragement or pressure). Consequently, they learn more of it and get a lead on their peers. This can snowball: they do well at it relative to their peers, so they like it more. And the teacher aims the material at the 20th percentile student, or something (not 50th percentile because then it's too hard for too many people). Result: math class is pretty hard for people in percentiles 5-90, who might not be very far apart in skill. And they don't like it. A few fail and hate it. And the ones with the early lead never have the experience, at least until college, of math being hard.

I do have stubbornness, which can be an advantage to learning new things (I spent 8 years teaching myself to sing, and went from complete tone-deafness to composing my own piano and vocal pieces and performing moderately difficult solos.)

Perhaps this persistence and patience is a way in which you are smarter than many Less Wrongers.

I am also stubbornly loyal to prior commitments

Be careful with this. I'm not entirely sure what you mean by a commitment, but for example I think it's important to be willing to stop reading a book in the middle if you don't like it. If it's not working, and there's no particular reason you need to know the contents of this book, just move on! Some people have trouble with that. There's also the sunk cost fallacy that some people have trouble with.

I've read enough pop science books that I no longer learn anything new from them

David Deutsch says there is no very good way to learn quantum mechanics, currently. Also that it's one of the simpler and more important areas of physics, when presented correctly.

I believe the best serious physics books are Feynman's lectures (that's physics in general. I think there's quantum stuff towards the end which I haven't read yet.). But they are hard and will require supplementary material. If one finds them too hard then they're probably not best for that person.

For pop science books, you might take a look at Deutsch's books because I believe they offer some unique ideas about physics not found in other popular science books. By focussing on the Many Worlds Interpretation, he's already different than many books, and then he goes further by offering his unique perspective on it, including concepts like fungibility. And he relates the ideas to philosophy in very interesting ways, as well as explaining Popperian philosophy too (he is the best living Popperian).

I like Feynman's pop science books a lot too, and he does go into quantum physics in some. I don't know how unique those are, though.

I glanced at Eliezer's physics posts. Looks strongly pro-Many Worlds Interpretation which is a good sign.

I tried reading the Uncertainty Principle essay. It looks confusing and not very helpful to me. Which is a bad sign since I already know stuff about that topic in advance, so it should be easier for me to follow. It appears to be going into a bunch of details when there's a simpler way to both explain and prove it. Maybe he's following in the (bad) tradition of some physics book he read about it.

It's hard to tell because it kind of meanders around a bunch, and certainly some specific statements are correct, but I don't think Eliezer understands the uncertainty principle very well. e.g. he wants to rename it:

Heisenberg Certainty Principle

But that doesn't make sense to me. It's a logical deduction from the laws of physics about how when some observables are sharp, others must not be sharp (math proves this). Sharp means "the same in all universes".
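
For reference, here's the standard textbook form of that deduction -- conventional Hilbert-space notation rather than Deutsch's multiverse terminology; this is my gloss, not a quote from Deutsch or Eliezer:

```latex
% Robertson uncertainty relation: for any state and any pair of observables A, B,
\sigma_A \,\sigma_B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr| ,
\qquad [A,B] = AB - BA .
% For position and momentum, [x,p] = i\hbar, so \sigma_x \sigma_p \ge \hbar/2:
% the spreads cannot both be zero, i.e. x and p cannot both be sharp.
```

It's a theorem forcing certain spreads to be nonzero, which lines up with the "some attributes must be diverse" phrasing in the quote below.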

Here's a quote from The Beginning of Infinity by David Deutsch, terminology section:

Uncertainty principle: The (badly misnamed) implication of quantum theory that for any fungible collection of instances of a physical object, some of their attributes must be diverse.

This is hard to understand out of context, but it basically means if you consider all the versions of something in different universes, say a cup of coffee, and you consider the observable attributes of them (like temperature of the coffee), some observables are different in different universes. They can't all be the same in all universes.

How you get from there to a certainty principle I don't know.

Eliezer uses difficult language like "Amplitude distributions in configuration space evolve over time" which I don't think is necessary. For one thing, in my understanding, the wave function is a function over configuration space and that's the Schrödinger picture. But it's easier to understand quantum physics using the Heisenberg Picture instead which focusses on observables.

comment by curi · 2011-04-10T22:38:19.744Z · LW(p) · GW(p)

This isn't a good week, I have 3 exams and a paper due, but I'll find time.

There's no hurry. I might stop checking this website, but I'll be happy to continue the discussion any time if you email me curi@curi.us

It would be fine if you were busy and delayed for a month, or whatever. No big deal.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-11T15:22:58.692Z · LW(p) · GW(p)

You seem pretty calm so far, no big danger signs, though it's hard to tell if you'll continue replying much. It's hard to explain why I have some doubt there. A lot of agreeable people don't like to push issues into too much depth to the point of bringing out disagreements and then discussing them.

I would like to continue this conversation. It's awfully nice to be discussing with someone and have them post a comment the length of a short story full of points that, while I might not agree with them, are well-thought-out. And nothing you said has really annoyed me. Some of the things you say I wouldn't say, because a) that's just not my attitude to life, and b) I have no particular reason (yet) to try to filter who I talk to. However, I think I understand why you take that attitude, and it doesn't seem to have any negative consequences for your emotions, assuming you don't care that people comment on your reputation. (Wish I could cite that comment but I don't think I can find it again...)

Just checked your karma though. With that much you must discuss a fair amount, unless your account is really old or you're good at writing popular top level posts that get 10 points per vote.

My account is about 3 months old. I would need to add it up properly, but for sure more than 3/4 of my karma comes from my top-level posts. I have a few (Being a Teacher and Ability to React, neither of them very controversial) that were upvoted more than I think they deserved (46 and 68 upvotes respectively, or around that) and the rest are between 10 and 20. My post Positive Thinking probably has the most comments of anything I've written...it's about the benefits of religious communities, which makes it fairly controversial here. I'm not good at writing controversial stuff (or writing non-fiction at all, really) but it's a nice feeling when you're 19 and feel kind of powerless in the world-at-large to see people replying to and discussing your ideas.

Replies from: curi
comment by curi · 2011-04-11T19:40:09.000Z · LW(p) · GW(p)

assuming you don't care that people comment on your reputation. (Wish I could cite that comment but I don't think I can find it again...)

I saw several comments like that. I vaguely recall replying to one asking what my reputation is, since he hadn't specified.

comment by JoshuaZ · 2011-04-10T02:16:33.128Z · LW(p) · GW(p)

Minor remark: Your essay about tradition is much more readable than a lot of the other material on your site. I'm not sure why but if you took a different approach to writing/thinking about it, you might want to apply that approach elsewhere.

Replies from: curi
comment by curi · 2011-04-10T03:59:28.487Z · LW(p) · GW(p)

I think the difference is you. I wrote that entire site in a short time period. I regard it as all being broadly similar in style and quality. I attempted to use the same general approach to the whole site; I didn't change my mind about something midway. I think it's a subject you understand better than epistemology directly (it is about epistemology, indirectly: traditions are long-lived knowledge). The response I've had from other readers has varied a lot, and hasn't matched your response.

I do know how to write in a variety of different styles, and have tried each in various places. The one I've used here in the last week is not the best in various senses. But it serves my purpose.

comment by Desrtopa · 2011-04-10T16:05:05.605Z · LW(p) · GW(p)

The first example that comes to mind for me is the collapse of the Roman empire. The Romans might have been "bad", being aggressive and expansionist, but the people they fell to were markedly worse from the perspective of truth seeking and pursuit of enlightenment, the standard Deutsch and curi are applying, and their replacements ushered in the Dark Ages.

Replies from: Randaly
comment by Randaly · 2011-04-10T20:25:58.665Z · LW(p) · GW(p)

But different conditions hold today. The Gothic armies were virtually identical to the armies of the earlier Celts/Gauls who the Romans had crushed; even the Magyars (~900s CE) used more or less the same tactics and organization as the Cimmerians (~700 BCE), though they did have stirrups, solid saddle trees, and stiff-tipped composite bows. Similarly, IIRC, the Roman armies didn't make use of any major recent technological innovations. This no longer holds today; the idea of an army using technology hundreds of years old being a serious military threat to any modern nation is frankly ludicrous. Technological and scientific development has become much, much more important than it was during Roman times.

(And, btw, it's not really accurate to say that, in practice, the barbarians were all that much worse than the Romans in terms of development and innovation; technological development in Europe didn't really slow down all that much during the Dark Ages and the Romans had very few scientific (as opposed to engineering) advances anyway -- most of their scientific knowledge (not to mention their mythology, art, architecture, etc.) was borrowed from the Greeks.)

Replies from: Desrtopa
comment by Desrtopa · 2011-04-10T20:31:54.874Z · LW(p) · GW(p)

Yes, but the culture of enlightenment and innovation within Greek and Roman culture had already been falling apart from within. The culture of Classical Antiquity was outcompeted by less enlightened memes.

Replies from: Randaly
comment by Randaly · 2011-04-10T21:27:11.783Z · LW(p) · GW(p)

How so? I'm not sure when, specifically, you're talking about, but the post-expansion Roman Empire still produced such noted philosophers as Marcus Aurelius, Apuleius, Boethius, St. Augustine, etc.

Replies from: Desrtopa
comment by Desrtopa · 2011-04-10T22:46:10.104Z · LW(p) · GW(p)

I'm thinking of the decline of Hellenist philosophy, especially the mathematical and empirical outlooks propounded by those such as Hypatia.

Replies from: Jayson_Virissimo, Randaly, Vladimir_M
comment by Jayson_Virissimo · 2011-04-11T19:13:01.940Z · LW(p) · GW(p)

I'm thinking of the decline of Hellenist philosophy, especially the mathematical and empirical outlooks propounded by those such as Hypatia.

As far as I know, Hypatia was a Neoplatonist like Saint Augustine. What evidence do you know of that she had an empirical outlook?

Replies from: Desrtopa
comment by Desrtopa · 2011-04-11T19:37:25.958Z · LW(p) · GW(p)

That was a position she had attributed to her in a book in which I first read about her; I no longer remember the details and may have been mistaken.

In any case, the development of new technology and naturalistic knowledge based on empirical investigation and mathematics declined in the Dark ages. Whether I was mistaken about Hypatia's position in particular or not doesn't change the issue of whether an inferior tradition of intellectual investigation replaced a superior one.

Replies from: Vladimir_M, Jayson_Virissimo
comment by Vladimir_M · 2011-04-12T02:40:53.467Z · LW(p) · GW(p)

[An empirical outlook] was a position she [Hypatia] had attributed to her in a book in which I first read about her; I no longer remember the details and may have been mistaken.

Was it by any chance Cosmos by Carl Sagan? His treatment of the topic is complete nonsense. (I understand Sagan is held in some respect by many people here, but he definitely wasn't above twisting facts and perpetuating myths to advance his agenda.) A good debunking of the whole "Hypatia as a rationalist martyr" myth can be found on Armarium Magnum.

Replies from: Desrtopa
comment by Desrtopa · 2011-04-12T03:37:54.642Z · LW(p) · GW(p)

I'm pretty sure I've never read Cosmos, so no, I don't think so. If it's a myth, he's not the only one perpetuating it.

Replies from: None
comment by [deleted] · 2011-04-12T03:41:51.468Z · LW(p) · GW(p)

Read Cosmos? Once again I feel antiquated.

comment by Jayson_Virissimo · 2011-04-11T20:25:26.049Z · LW(p) · GW(p)

That was a position she had attributed to her in a book in which I first read about her; I no longer remember the details and may have been mistaken.

In that case, I won't update my beliefs. Was that from a blurb in a science textbook by chance? I too have been the victim of false history from my science textbooks.

In any case, the development of new technology and naturalistic knowledge based on empirical investigation and mathematics declined in the Dark ages.

What time period are you referring to when you use the term Dark Ages? If you are referring to the Middle Ages, then I disagree that it is an example of a time when a superior intellectual tradition was replaced by an inferior one (at least in terms of natural philosophy/science).

Replies from: Desrtopa
comment by Desrtopa · 2011-04-11T20:40:14.816Z · LW(p) · GW(p)

It was a history book (popular, not academic) and it's certainly possible that it was mistaken.

The limits of the Dark Ages are a matter of historical dispute, but for the purposes of this discussion, I suppose we could say about 5th to 11th century CE in Europe.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2011-04-12T00:31:56.597Z · LW(p) · GW(p)

The limits of the Dark Ages are a matter of historical dispute, but for the purposes of this discussion, I suppose we could say about 5th to 11th century CE in Europe.

I agree that the Dark Ages had an intellectual tradition inferior to that of the Hellenistic Period, since the dates you stipulated would exclude Aquinas, Ockham, and Scotus. On the other hand, I am at a loss trying to think of 11th century technologies that weren't equal or superior to their 4th century counterparts.

comment by Randaly · 2011-04-11T16:34:50.757Z · LW(p) · GW(p)

Well of course the previously dominant branch of philosophy declined -- that happens all the time in philosophy. But I don't think that there are grounds for proclaiming Hellenist philosophy to be significantly better than its successors: it was hardly empirical (Hypatia herself was an anti-empirical Platonist) and typically more concerned with e.g. confused explanations of the world in terms of a single property (all is fire! no, water!) or confusion regarding words (e.g. the Sorites paradox) than any kind of research valuable/relevant today.

And the group which continued the legacy of Hellenist/Roman thought, the Islamic world, did in fact continue and, IMHO, vastly augment the level of empirical thought; for example, it's widely believed that the inventor of the Scientific Method was an Arab scientist, Alhazen. Even though Europe saw a drop in learning due to the collapse of the unsustainable centralized Roman economy and the resulting wars and deurbanization, all that occurred was that its knowledge was passed on to new civilizations large/wealthy/secure enough to support science/math/philosophy. (Specifically, Persia and Byzantium, and later the Caliphates.)

Replies from: Desrtopa
comment by Desrtopa · 2011-04-11T16:45:58.633Z · LW(p) · GW(p)

The technological and empirical tradition of Islam pretty much died out due to the success of The Incoherence of the Philosophers though. My point is that innovative and empirical traditions have given way in the past to memetically stronger anti-innovative traditions. That doesn't mean that the same will happen to present-day scientific culture -- I highly doubt that would happen without some sort of catastrophic Black Swan event -- but innovative traditions have not historically consistently beaten out non-innovative ones.

Replies from: Randaly, Eugine_Nier
comment by Randaly · 2011-04-12T05:41:17.688Z · LW(p) · GW(p)

But there were still significant Islamic achievements in science after The Incoherence of the Philosophers was published -- e.g. Ibn Zuhr's experimental scientific surgery, Ibn al-Nafis's discovery of pulmonary circulation, etc. And The Incoherence of the Philosophers probably didn't have much of an impact, at least immediately, on Islamic science -- Al-Ghazali only critiqued Avicenna's philosophy, while expressing support for science.

I think a more persuasive reason for the decline of Islamic science is the repeated invasions by outsiders (Crusaders, Mongols, Bedouins, and the Reconquista, plus the Black Plague), which pretty much ended the golden age of Islamic civilization. But, as I said earlier, there are no powerful yet unknown barbarian hordes around today.

(Though yes, I agree wrt Black Swans like the Black Plague.)

comment by Eugine_Nier · 2011-04-12T02:10:09.981Z · LW(p) · GW(p)

I think this is caused by the fact that innovative societies are that way because they're more open to new ideas. But being open to new ideas means that your memetic defenses are by definition weaker.

Notice also that innovative societies generally aren't defeated until they stop innovating.

comment by Vladimir_M · 2011-04-12T02:20:21.086Z · LW(p) · GW(p)

The "Hypatia as a rationalist hero" trope is one of those awful historical myths that just refuse to die out. Armarium Magnum has a detailed debunking of the story.

comment by Vladimir_M · 2011-04-12T01:38:54.021Z · LW(p) · GW(p)

Similarly, in WW2, Japan did quite well for itself, and if a handful of major battles had gone slightly differently, the outcome would have been very different.

You are wrong about this. Even if every single American ship magically got sunk at some point in 1941 or 1942, and if every single American soldier stationed outside of the U.S. mainland magically dropped dead at the same time, it would only have taken a few years longer for the U.S. to defeat Japan. Once the American war production was up and running, the U.S. could outproduce Japan by at least two orders of magnitude and soon overwhelm the Japanese navy and air force no matter what their initial advantage. Starting the war was a suicidal move for the Japanese leadership, and even the sane people among them knew it.

I think you're also overestimating the chances Germans had, and underestimating how well Hitler did given the circumstances, though that's more controversial. Also, Germany lost the technological race in pretty much all theaters of war where technology was decisive -- submarine warfare, cryptography, radars and air defense, and nuclear weapons all come to mind. The only exceptions I can think of are jet aircraft and long-range missiles, but even in these areas, they produced mostly flashy toys rather than strategically relevant weapons.

Overall, I think it's clear that the insanity of the regimes running Germany and Japan hampered their technological progress and also led to their suicidal aggressiveness. At the same time, the relative sanity of the regimes running the U.K. and the U.S. did result in significant economic and technological advantages, as well as somewhat saner strategy. Of course, that need not have been decisive -- after all, the biggest winner of the war was Stalin, who was definitely closer to the defeated sides in all the relevant respects, if not altogether in the same league with them.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-12T01:45:47.402Z · LW(p) · GW(p)

Ok. So all my World War 2 examples have now decisively been shown to be wrong. I don't have any other modern examples to give that go in this direction. All other modern examples go pretty strongly in the other direction. I withdraw the claim wholesale and am updating to accept the claim for post-enlightenment human societies.

comment by curi · 2011-04-09T22:50:37.925Z · LW(p) · GW(p)

This sort of claim seems to run into historical problems

Athens lost to Sparta. But it was a close call. Sparta excelled at nothing but war. Athens spread its efforts around and was good at everything. And it was close! That's how much more powerful Athens was: it did tons of other stuff and nearly won the war anyway.

If Athens had had an extra 100 years to improve, it would have gotten a big lead on Sparta. Long term, that kind of society wins.

A lot of major expansionist violent empires have done quite well for themselves.

Not long term.

Or to use a different, but potentially more controversial example, in North America and in Australia, the European colonizers won outright, despite having extremely violent, expansionist policies.

They were up against closed societies that were much worse than they themselves were in pretty much every respect, including morally. The natives were not non-violent philosophers.

comment by curi · 2011-04-09T22:00:04.604Z · LW(p) · GW(p)

Merely pulling the trigger less often doesn't change the inevitability of doom. [...] One of the most important uses of technology is to counteract disasters and to recover from disasters, both from foreseen and unforeseen evil. Therefore, the speed of progress itself is one of the things that is a defense against catastrophe.

The idea is: if you're going to pull the trigger once every 100 years, instead of once every 5, and it's a 2% chance of doom each time, you're still doomed eventually. Any static society is doomed in that way. The delays don't help anything because nothing is changing in the meantime, so eventually doom happens.
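
In symbols -- assuming, as in that example, a fixed and independent 2% chance of doom per pull:

```latex
P(\text{survive } n \text{ pulls}) = (1 - 0.02)^n = 0.98^n \to 0 \quad \text{as } n \to \infty .
```

Pulling the trigger every 100 years instead of every 5 only stretches the timescale; the limit is the same.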

The attitude of not making progress, but just trying to sustain a fixed lifestyle forever, cannot work. Even if the chance of doom per year is made low, there is some chance so it will have to destroy them eventually. There's nothing to stop it from doing so.

It's only in a dynamic society creating new knowledge and progress that lasting longer matters to whether you're doomed eventually, b/c in that extra time more progress is made.

comment by curi · 2011-04-09T20:55:15.546Z · LW(p) · GW(p)

(If it's unknowable, how can we know that a certain prediction strategy is going to be systematically biased in a known direction? Biased with respect to what knowable standard?)

I forget how much detail there is on this later in this talk, but it is in his book. The systematic bias towards pessimism is due to the method of trying to imagine the future using today's knowledge (which is less than the future's knowledge).

Quoting Deutsch from The Beginning of Infinity:

Trying to know the unknowable leads inexorably to error and self-deception. Among other things, it creates a bias towards pessimism. For example, in 1894, the physicist Albert Michelson made the following prophecy about the future of physics:

The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote. … Our future discoveries must be looked for in the sixth place of decimals. (Albert Michelson, address at the opening of the Ryerson Physical Laboratory, University of Chicago, 1894)

What exactly was Michelson doing when he judged that there was only an ‘exceedingly remote’ chance that the foundations of physics as he knew them would ever be superseded? He was prophesying the future. How? On the basis of the best knowledge available at the time. But that consisted of the physics of 1894! Powerful and accurate though it was in countless applications, it was not capable of predicting the content of its successors. It was poorly suited even to imagining the changes that relativity and quantum theory would bring – which is why the physicists who did imagine them won Nobel prizes. Michelson would not have put the expansion of the universe, or the existence of parallel universes, or the non-existence of the force of gravity, on any list of possible discoveries whose probability was ‘exceedingly remote’. He just didn’t conceive of them at all.

Replies from: FAWS
comment by FAWS · 2011-04-10T01:51:08.871Z · LW(p) · GW(p)

It's inconsistent to expect the future to be better than one expects. If you think your probability estimates are too pessimistic adjust them until you don't know whether they are too optimistic or too pessimistic. No one stops you from assigning probability mass to outcomes like "technological solution that does away with problem X" or "scientific insight that makes the question moot". Claimed knowledge that the best possible probability estimate is biased in a particular direction cannot possibly ever be correct.

comment by curi · 2011-04-09T23:16:30.866Z · LW(p) · GW(p)

Basically, you can't predict the moves of a chess AI, otherwise you'd be at least that good chess player yourself, and yet you know it's going to win the game.

As someone who has beaten chess programs, I have noticed that this sentence as written is false. Would you care to refine it so that it's no longer straightforwardly false?

Replies from: Vladimir_Nesov, Larks, PhilGoetz
comment by Vladimir_Nesov · 2011-04-10T00:00:41.220Z · LW(p) · GW(p)

As someone who has beaten chess programs

Corrected to "good chess AI". LCPW applies.

comment by Larks · 2011-04-09T23:24:11.723Z · LW(p) · GW(p)

Trivial refinement: "a chess AI that is much better than you", or "any chess AI that has beaten a grandmaster", as I assume neither you nor Nesov is a grandmaster.

Replies from: curi
comment by curi · 2011-04-09T23:29:02.432Z · LW(p) · GW(p)

If you understand that a program plays chess well, then you have an understanding of the matter. It's not prophecy to apply your understanding.

The chess computer isn't even relevant here. I can understand something about how Kasparov plays, and how I play, and then predict he'll beat me. So what?

Replies from: Larks
comment by Larks · 2011-04-09T23:33:07.324Z · LW(p) · GW(p)

It shows you can know general facts about a system that creates new knowledge, despite not knowing all the specific facts/bits of knowledge that it will create. We can know Kasparov will beat us despite not knowing exactly what move he'll take; we can know that an AGI will destroy/save/whatever us despite not knowing exactly how.

Replies from: curi
comment by curi · 2011-04-09T23:36:53.675Z · LW(p) · GW(p)

Chess playing programs don't create new knowledge.

So, the argument is wrong without me fixing it (human chess players do).

Small amounts of new knowledge in very limited areas are predictable. Like writers can predict they will finish writing a book (even if they haven't worked out 100% of the plot yet) in advance.

This doesn't have much to do with large scale prediction that depends on new types of knowledge, does it?

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-09T23:52:56.902Z · LW(p) · GW(p)

Whether you call it new knowledge or not is not relevant. Nor are new types of knowledge what is generally relevant (aside from the not at all small issue that "type" isn't a well-defined notion in this context).

Like writers can predict they will finish writing a book (even if they haven't worked out 100% of the plot yet) in advance.

Actually, writers sometimes start a book and find part way through that they don't want to finish, or the book might even change genres in the process of writing. If you prefer a different example, I can predict that Brandon Sanderson's next Mistborn book will be awesome. I can predict that it will sell well, and get good reviews. I can even predict a fair number of plot points just based on stuff Sanderson has done before and various comments he has made. But, at the same time, I can't write a novel nearly as well as he does, and if he and I were to have a novel-writing contest, he would beat me. I don't know how he would beat me, but he would.

Similarly, a sufficiently smart AI has the same problem. If it decides that human existence is non-optimal for it to carry out its goals, then it will try to find ways to eliminate us. It doesn't matter if all the ways it comes up with of doing so are in a fairly limited set of domains. If it is really good at chemistry it might make nasty nanotech to reduce organic life into constituent atoms. If it is really good at math it might break all our cryptography, and then hack into our missiles and trigger a nuclear war (this one is obvious enough that there are multiple movies about it). If it is really good at social psychology it might manipulate us over a few years into just handing over control to it.

Just as I don't know how Kasparov will beat me but I know he will, I don't know how a sufficiently intelligent AI will beat me, but I know it will. There may be issues with how intelligent it needs to be, and whether an AGI is likely to undergo fast, substantial, recursive self-improvement to get that intelligent is a matter of much discussion on LW (Eliezer considers it likely; some other people, such as myself, consider it unlikely), but the basic point about sufficient intelligence seems clear.

Replies from: curi
comment by curi · 2011-04-10T00:00:11.317Z · LW(p) · GW(p)

Whether you call it new knowledge or not is not relevant.

Considering that Deutsch was talking about new knowledge, and I use the same terminology as him, it is relevant.

Actually, writers sometimes start a book and find part way through that they don't want to finish,

I know that? And if I played Kasparov I might win. It's not a 100% guaranteed prediction.

@Sanderson: you understand what kind of thing he's doing pretty well. Writers are a well-known phenomenon. The less you know about what processes he uses to write, what tradition he's following -- in general, what's going on -- the less you can make any kind of useful prediction.

If it decides that human existence is non-optimal for it to carry out its goals

Why would it?

Deutsch doesn't think AGIs will do fast recursive self-improvement. They can't because the first ones will already be universal and there's nothing much left to improve, besides their knowledge (not their design, besides making it faster). Improving knowledge with intelligence is the same process for AGI and humans. It won't magically get super fast.

Replies from: Vladimir_Nesov, JoshuaZ
comment by Vladimir_Nesov · 2011-04-10T00:10:07.811Z · LW(p) · GW(p)

And if I played Kasparov I might win. It's not a 100% guaranteed prediction.

The fallacy of gray? Between zero chance of winning a lottery, and epsilon chance, there is an order-of-epsilon difference. If you doubt this, let epsilon equal one over googolplex.

Replies from: curi
comment by curi · 2011-04-10T00:16:17.917Z · LW(p) · GW(p)

No, the fallacy of you not paying attention to the context of statements, and their purpose.

I said authors predict they will finish books.

Someone told me that those predictions are not 100% accurate.

I said, basically: so what? And I pointed out his same "argument" works just as well (that is, not at all) in other cases.

So, the other guy did the "fallacy of gray", not me. And you didn't read carefully.

Replies from: Vladimir_Nesov, JoshuaZ
comment by Vladimir_Nesov · 2011-04-10T00:21:39.275Z · LW(p) · GW(p)

I said, basically: so what? And I pointed out his same "argument" works just as well (that is, not at all) in other cases.

There is a not-order-of-epsilon difference between an order-of-epsilon difference and a plausible difference. You winning against Kasparov vs. a writer finding part way through a book that they don't want to finish.

Replies from: curi
comment by curi · 2011-04-10T00:26:33.813Z · LW(p) · GW(p)

You've assumed I'm a chess beginner. You did the same thing when you assumed I never beat any halfway decent chess program. I'm actually a strong player and don't have a 0.0001% chance against Kasparov. I have friends who are GMs who I can play decent games with.

Also, didn't I specify a writer can make such a prediction before being 100% done? E.g. at 99.9%. Or perhaps 90%. It depends. But I didn't just say when part way done. You don't read carefully enough.

Here it was:

Like writers can predict they will finish writing a book (even if they haven't worked out 100% of the plot yet) in advance.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-04-10T00:31:12.240Z · LW(p) · GW(p)

You've assumed I'm a chess beginner.

That you are not an order-of-Kasparov chess player is the right prior, even if in fact it so turns out that you happen to be Kasparov himself. These people are rare, and you've previously given no indication to me that you're one of them. But again, LCPW.

Replies from: curi
comment by curi · 2011-04-10T00:36:57.566Z · LW(p) · GW(p)

It's not correct to assume a statement I make is wrong, based on your prior about how much I know about chess. I used my own knowledge of how much I know about chess when making statements. You should respect that knowledge instead of ignoring it and assuming I'm making basic LCPW mistakes (btw Popper made that same point too, in a different way; of course I know it). Or at least question my statement instead of assuming I'm wrong about how much I know about chess. You're basically assuming I'm an idiot who makes sloppy statements. If you really think that, you shouldn't even be talking to me.

Btw, I've noticed you didn't acknowledge your other mistakes or apologize. Is that because you refuse to change your mind, or what?

Replies from: JGWeissman, Vladimir_Nesov
comment by JGWeissman · 2011-04-10T00:41:20.890Z · LW(p) · GW(p)

You should respect that knowledge instead of ignoring it and assuming I'm making basic LCPW mistakes

It is easily observable in this thread that you are making LCPW mistakes. You haven't solved the game of chess, therefore the Least Convenient Possible World contains an AI powerful enough to explore the entire game tree of chess, solve the game, and beat you every time.
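
To make "explore the entire game tree" concrete, here's a toy sketch (mine, purely illustrative -- chess's tree is far too large to search exhaustively in practice, but the principle is the same): exhaustive minimax on a tiny take-1-2-or-3-stones game.

```python
# Toy game: players alternately take 1, 2, or 3 stones; whoever takes the
# last stone wins. Exhaustive minimax solves it completely, the way a
# hypothetical "solve chess" AI would solve chess.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones_left):
    """+1 if the player to move can force a win, -1 if they lose with best play."""
    if stones_left == 0:
        return -1  # no move available: the previous player took the last stone and won
    # Try every legal move and take the best resulting outcome for the mover.
    return max(-best_outcome(stones_left - take)
               for take in (1, 2, 3) if take <= stones_left)

def best_move(stones_left):
    """A move that forces a win if one exists."""
    legal = [take for take in (1, 2, 3) if take <= stones_left]
    return max(legal, key=lambda take: -best_outcome(stones_left - take))

if __name__ == "__main__":
    print(best_outcome(20))  # -1: 20 stones is a loss for the player to move
    print(best_move(21))     # 1: take one stone, leaving the losing position 20
```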

Replies from: curi
comment by curi · 2011-04-10T01:13:47.839Z · LW(p) · GW(p)

You could make a program like that. So what? No one gave an argument why the possibility of making such a program actually contradicts Deutsch. Such a program wouldn't be creating knowledge as it played (in Deutsch's terminology), it'd be doing some pretty trivial math (the hard part being the memory and speed for dealing with all the data), so it can't be an example of the unpredictability of knowledge creation in Deutsch's sense.

My initial point was merely that a statement was false. I think that's important. We should try to correct our mistakes, starting with the ones we see first, and then after correcting them we might find more.

Replies from: JGWeissman
comment by JGWeissman · 2011-04-10T01:35:44.678Z · LW(p) · GW(p)

Such a program wouldn't be creating knowledge as it played (in Deutsch's terminology)

If that is true (and you don't just mean that it only generated the knowledge when it solved the game initially, and is merely looking up that knowledge during the game), then I don't care much about whatever it is that Deutsch calls knowledge.

My initial point was merely that a statement was false.

It was not false. You were just confused about the referent of "chess AI".

comment by Vladimir_Nesov · 2011-04-10T00:40:30.069Z · LW(p) · GW(p)

It so happens that I acknowledged this mistake.

Replies from: curi
comment by curi · 2011-04-10T03:48:57.127Z · LW(p) · GW(p)

I saw. That's no reason not to do the same with others. It doesn't change that you imagined a convenient world where I'm bad at chess in order to dispute the specific details of an argument I made which had a substantive point that could still be made using other details. It doesn't change that you misread my position in the stuff about authors. And so on.

Replies from: JGWeissman
comment by JGWeissman · 2011-04-10T04:06:02.712Z · LW(p) · GW(p)

It doesn't change that you imagined a convenient world where I'm bad at chess in order to dispute the specific details of an argument I made which had a substantive point that could still be made using other details.

On an absolute scale, you are bad at chess.

comment by JoshuaZ · 2011-04-10T00:20:33.423Z · LW(p) · GW(p)

I think you missed my point about the books. I may not have made it very clear and so I apologize. The point was that even in an area which you consider to be a small type of knowledge the actual results can be extremely unpredictable.

comment by JoshuaZ · 2011-04-10T00:18:38.380Z · LW(p) · GW(p)

Considering that Deutsch was talking about new knowledge, and I use the same terminology as him, it is relevant.

Then define the term.

I know that? And if I played Kasparov I might win. It's not a 100% guaranteed prediction.

So what? How is that at all relevant? It isn't 100% guaranteed that if I jump off a tall building I will then die. That doesn't mean I'm going to try. You can't use the fact that something isn't definite as an argument to ignore the issue wholesale.

Deutsch doesn't think AGIs will do fast recursive self-improvement. They can't because the first ones will already be universal and there's nothing much left to improve, besides their knowledge (not their design, besides making it faster).

Ok. So I'm someone who finds extreme recursive self-improvement to be unlikely and I find this to be a really unhelpful argument. Improvements in speed matter. A lot. Imagine, for example, that our AI finds a proof that P=NP and that this proof gives an O(n^2) algorithm for solving your favorite NP-complete problem, and that the constant in the O is really small. That means that the AI will do pretty much everything faster, and the more computing power it gets the more disparity there will be between it and the entities that don't have access to this algorithm. It wants to engineer a new virus? Oh what luck, protein folding is under many models NP-complete. The AI decides to improve its memory design? Well, that involves graph coloring and the traveling salesman problem, also NP-complete problems. The AI decides that it really wants access to all the world's servers and to add them to its computational power? Well, most of those have remote access capability that is based on cryptographic problems which are much weaker than NP-complete. So, um, yeah. It got those too.
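
To make concrete just how much that would change, here's a toy sketch (mine, purely illustrative) of the brute-force search that is currently the straightforward way to attack an NP-complete problem like graph 3-coloring -- the candidate space grows as 3^n, which is exactly the exponential blow-up a small-constant O(n^2) algorithm would eliminate:

```python
# Brute-force graph 3-coloring: try every assignment of 3 colors to the
# vertices (3**n candidates in the worst case) and check the edges.
from itertools import product

def three_colorable(n_vertices, edges):
    """Exhaustive search for a proper 3-coloring; exponential in n_vertices."""
    for coloring in product(range(3), repeat=n_vertices):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False

if __name__ == "__main__":
    cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]              # a 4-cycle
    k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # complete graph on 4 vertices
    print(three_colorable(4, cycle4))  # True
    print(three_colorable(4, k4))      # False: K4 needs 4 colors
```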

Now, this scenario seems potentially far-fetched. After all, most experts consider it to be unlikely that P=NP, and consider it to be extremely unlikely that there's any sort of fast algorithm for NP-complete problems. So let's just assume instead that the AI tries to make itself a lot faster. Well, let's see, what can our AI do? It could give itself some nice quantum computing hardware and then use Shor's algorithm to break factoring in polynomial time, and then our AI can just take over lots of computers and have fun that way.

Improving knowledge with intelligence is the same process for AGI and humans. It won't magically get super fast

This is not at all obvious. Humans can't easily self-modify our hardware. We have no conscious access to most of our computational capability, and our computational capability is very weak. We're pathetic sacks of meat that can't even multiply four- or five-digit numbers in our heads. We also can't save states and swap out cognitive modules. An AGI can potentially do all of that.

Don't underestimate the dangers of a recursively self-improving entity or the value of speed.

Replies from: curi
comment by curi · 2011-04-10T01:36:02.040Z · LW(p) · GW(p)

Then define the term.

See the essay on knowledge: http://fallibleideas.com/

Or read Deutsch's books.

It isn't 100% guaranteed that if I jump off a tall building I will then die.

Indeed. You're the one who told me that writers sometimes don't finish books... They aren't 100% guaranteed to. I know that. Why did you say that?

Imagine, for example, that our AI finds a proof that P=NP and that this proof gives an O(n^2) algorithm for solving your favorite NP-complete problem, and that the constant in the O is really small.

Umm. Imagine a human does the same thing. What's your point? My/Deutsch's point is AGIs have no special advantage over non-artificial intelligences at finding a proof like that in the first place.

We're pathetic sacks of meat that can't even multiply four- or five-digit numbers in our heads.

That's not even close to true. First of all, I could do that if I trained a bit. Many people could. Second, many people can memorize long sequences of the digits of pi with some training. And many other things. Ever read about Renshaw and how he trained people to see faster and more accurately?

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-10T01:51:49.475Z · LW(p) · GW(p)

The point about jumping off a building was due to a miscommunication with you. See my remark here; I then misinterpreted your reply. Illusion of transparency is annoying. The section concerning that is now very confused and irrelevant. The relevant point I was trying to make regarding the writer is that even when knowledge areas are highly restricted, making predictions about what will happen is really difficult.

And yes, I've read your essays, and nothing there is at all precise enough to be helpful. Maybe taboo knowledge and make your point without it?

Umm. Imagine a human does the same thing. What's your point? My/Deutsch's point is AGIs have no special advantage over non-artificial intelligences at finding a proof like that in the first place.

There are a lot of differences. Humans won't in general have as easy a time modifying their structure. Moreover, human values fall in a very small cluster in mindspace. Humans aren't, for example, paperclip maximizers or pi digit calculators. There are two twin dangers: an AGI has advantages in improving itself, and an AGI is unlikely to share our values. Those are both bad.

That's not even close to true. First of all, I could do that if I trained a bit. Many people could. Second, many people can memorize long sequences of the digits of pi with some training.

Sure. Humans can do that if they train a lot. A simple computer can do that with much less effort, so an AGI running on a digital substrate, even one otherwise at all similar to a human, won't need to spend days training to be able to multiply 5-digit numbers quickly. And if you prefer a slightly more extreme example, a computer can factor a random 15-digit number in seconds with minimal optimization. A human can't. And no amount of training will allow you to do so. Computers can do a lot of tasks we can't. At present, we can do a lot of tasks that they can't. A computer that can do both sets of tasks better than we can is the basic threat model.
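The 15-digit factoring claim is easy to check; here is a minimal sketch (not from the original comment; Python, deliberately unoptimized) that finishes in seconds on an ordinary machine even in the worst case of a prime input:

```python
# Plain trial division: no optimization beyond skipping even divisors.
# For a 15-digit n the loop runs at most a few million times, which
# ordinary CPython gets through in seconds.

def factor(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2  # after 2, test only odd divisors
    if n > 1:
        factors.append(n)
    return factors

print(factor(123_456_789_012_347))  # an arbitrary 15-digit example
```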

Replies from: curi
comment by curi · 2011-04-10T05:17:42.288Z · LW(p) · GW(p)

There are a lot of differences. Humans won't in general have as easy a time modifying their structure.

But it doesn't matter because it's universal (we are universal thinkers, we can create any ideas that any thinking things can). The implementation details of universal things are not super important because the repertoire remains the same.

And if you prefer a slightly more extreme example, a computer can factor a random 15-digit number in seconds with minimal optimization.

Not by the method of thinking. A human can factor it using a calculator. An AI could also factor it using a calculator program. An AI doing it the way humans do -- by thinking -- won't be as fast!

Maybe taboo knowledge and make your point without it?

But it's the central concept of epistemology. If you want to understand me or Popper you need to learn to understand it. Many points depend on it.

And yes, I've read your essays, and nothing there is at all precise enough to be helpful.

Would you like to know the details of those essays? If you want to discuss them I can elaborate on any issue (or if I can't, I will be surprised and learn something, at least about my ignorance). If you want to discuss them, can we go somewhere else (that has other people who will know the answers to your questions too)? Tell me and I'll PM you the place if you're interested (I don't want too many random people to come, currently).

BTW no matter what you write people always complain. They always have questions or misconceptions that aren't the ones you addressed. No writing is perfect. Even if you were to write all of Popper's books, you'd still get complaints...

Replies from: shokwave
comment by shokwave · 2011-04-10T05:55:34.797Z · LW(p) · GW(p)

(we are universal thinkers, we can create any ideas that any thinking things can)

This seems about as likely as saying "We are universal runners, we can run on any surface that any running thing can". If you've been keeping up, you'd have heard that the brain is a lump of biological tissue, and as such is subject to limitations imposed by its substrate.

Replies from: curi
comment by curi · 2011-04-10T05:59:12.506Z · LW(p) · GW(p)

I don't think you should say

This seems about as likely as

in place of

This contradicts what I think I already know

And btw we can run on any surface that any running thing can, with the aid of technology. What's the problem?

And instead of

If you've been keeping up

You should say what it really means:

I'm better than you, so I don't need to argue, condescension suffices

If you actually want an explanation of the ideas, apologize and ask nicely. If you just want to flame me for contradicting your worldview, then go away.

Replies from: shokwave
comment by shokwave · 2011-04-10T07:05:28.664Z · LW(p) · GW(p)

we can run on any surface, that any running thing can, with the aid of technology.

Mecho-Gecko disagrees.

You should say what it really means:

If you had been updating your worldview in light of the evidence streaming in from neuroscience and biology, you'd have heard ...

Replies from: curi
comment by curi · 2011-04-10T07:07:51.912Z · LW(p) · GW(p)

You realize we can build new bodies with technology? Or maybe you don't...

Replies from: shokwave
comment by shokwave · 2011-04-10T07:10:31.499Z · LW(p) · GW(p)

And in the analogy to thinking machines, is that more like our current brains, or more like the kind of brains we will be building and calling artificial intelligence?

Remind me again; these new bodies are going to run better on some surfaces? In the analogy, these artificial brains are going to think differently?

Replies from: curi
comment by curi · 2011-04-10T07:43:37.295Z · LW(p) · GW(p)

You're funny. First you make up an analogy you think is false to say I'm wrong. Then you say geckos are fundamentally superior to technology, while linking to a technology. Now you're saying I'm wrong because the analogy is true. Do you think at any point in this you were wrong?

Replies from: shokwave
comment by shokwave · 2011-04-10T07:53:04.824Z · LW(p) · GW(p)

(Do note that I linked to mecho-gecko as an example of a technology that can run on a surface that we, even using that technology, would not be able to run on. The actual gecko is irrelevant, I just couldn't find a clip that didn't include the comparison.)

No, I don't. I am aware that you also think you have not been wrong at any point during this either, which has caused me to re-evaluate my own estimation of my correctness.

Having re-evaluated, I still believe I have been right all along.

To expand further on the analogy: the human brain is not a universal thinker, any more than the human leg is a universal runner. The brain thinks, and the leg runs, but they both do so in ways that are limited in some aspects, underperform in some domains, and suffer from quirks and idiosyncrasies. To say that the kind of thinking a human brain does is the only kind of thinking and AIs won't do any differently is isomorphic to saying that the kind of running a human leg does is the same kind of running that a gecko's leg does.

Replies from: curi
comment by curi · 2011-04-10T08:02:19.397Z · LW(p) · GW(p)

Do you have an argument that our brains do not have universality WRT intelligence?

Do you understand what the theory I'm advocating is and says? Do you know why it says it?

Replies from: shokwave
comment by shokwave · 2011-04-10T08:14:09.220Z · LW(p) · GW(p)

This constitutes a pretty good argument against our brains having universal intelligence.

I thought I understood what you meant by "universal intelligence" - that any idea that is conceivable, could be conceived by a human mind - but I am open to the possibility you are referring to a technical term of some sort. If you'd care to enlighten me?

Replies from: curi
comment by curi · 2011-04-10T08:15:29.197Z · LW(p) · GW(p)

Previously you refused to ask. Why did you change your mind?

Do you know what the arguments that human minds are universal are? I asked this in my previous comment. You didn't engage with it. Do you not consider it important to know that?

I was unable to find any relevant argument at the link. It did beg the question several times (which is OK if it was written for a different purpose). Quote the passage you thought was an argument.

Replies from: shokwave
comment by shokwave · 2011-04-10T08:30:22.501Z · LW(p) · GW(p)

Previously you refused to ask. Why did you change your mind?

I re-read our conversation looking for possible hidden disputes of definition. It's one of the argument resolution tools LessWrong has taught me.

Do you know what the arguments that human minds are universal are?

I don't claim familiarity with all of them. If you'd care to enlighten me?

I was unable to find any relevant argument at the link.

The strongest part would be this:

If we focus on the bounded subspace of mind design space which contains all those minds whose makeup can be specified in a trillion bits or less, then every universal generalization that you make has two to the trillionth power chances to be falsified.

Conversely, every existential generalization - "there exists at least one mind such that X" - has two to the trillionth power chances to be true.

Replies from: curi
comment by curi · 2011-04-10T08:37:11.178Z · LW(p) · GW(p)

Why did you think you could tell me what link would refute my position, if you didn't know what arguments my position consisted of?

BTW you have the concept of universal correct.

Well, I think you do. Except that the part from the link you quoted is talking about a different kind of universality (of generalizations, not of minds). How is that supposed to be relevant?

edit: Thinking about it more, I think he's got a background assumption where he assumes that most minds in the abstract mind design space are not universal and that they come on a continuum of functionality. Or possibly not that but something else? I do not accept this unargued assumption and I note that's not what the computer design space looks like.

Replies from: shokwave
comment by shokwave · 2011-04-10T08:47:44.648Z · LW(p) · GW(p)

Because my link wasn't a refutation. It was a statement of a correct position, with which any kind of universality of minds position is incompatible.

It is easily relevant. Anything we wish to say about universal ideas is a universal generalisation about every mind in mindspace. If you wish to say that all ideas are concepts, for example, that is equivalent to saying that all minds in mindspace are capable of containing concepts.

Replies from: curi
comment by curi · 2011-04-10T08:54:05.777Z · LW(p) · GW(p)

Why did you say

This constitutes a pretty good argument against our brains having universal intelligence.

If you meant

This constitutes a pretty good statement of the correct position, with no argument against your position.

Do you understand the difference between an argument which engages with someone's position, and simply a statement which ignores them?

I've run into this kind of issue with several people here. In my view, the way of thinking where you build up your position without worrying about criticisms of it, and without worrying about criticizing other positions, is anti-critical. Do you think it's good? Why is it good? Doesn't it go wrong whenever there is a criticism you don't know about, or another way of thinking which is better that you don't know about? Doesn't it tend to not seek those things out since you think your position is correct and that's that?

Replies from: shokwave
comment by shokwave · 2011-04-10T15:15:27.813Z · LW(p) · GW(p)

Do you understand the difference between an argument which engages with someone's position, and simply a statement which ignores them?

Yes. In practical terms of coming to the most correct worldview, there isn't much difference. I suspect your Popper fetish has misled you into thinking that arguments and refutations of positions are what matters - what matters is truth, maps-to-reality-ness, correctness. That is, if I have a correct thing, and your thing is incompatible with my thing, due to the nature of reality, your thing is wrong. I don't need to show you that it's wrong, or how it's wrong - the mere existence of my correct thing does more than enough.

I've run into this kind of issue with several people here.

I noticed; hence this particular exchange.

In my view, the way of thinking where you build up your position without worrying about criticisms of it, and without worrying about criticizing other positions, is anti-critical.

We need to insert a few very important things into this description: the way of thinking where you build up your position to match reality as closely as possible without worrying about criticisms of it, and especially without worrying about criticizing other positions, is anti-critical, and pro-truth.

Do you think it's good? Why is it good? Doesn't it go wrong whenever there is a criticism you don't know about, or another way of thinking which is better that you don't know about? Doesn't it tend to not seek those things out since you think your position is correct and that's that?

I do think this new, updated description is good. It's good because reversed stupidity isn't intelligence. It's good because it's a much better search pattern in the space of all possible ideas than rejecting all falsified ideas. If you have a formal scheme built of I's and U's, then building strings from the rules is a better way to get correct strings than generating random strings, or strings that seem like they should be right, and sticking with them until someone proves they're not.

That is, Popperian philosophy is all about the social rules of belief: you are allowed to believe whatever you like, until it's falsified, criticized, or refuted. It's rude, impolite, gauche to continue believing something that's falsified. As long as you don't believe anything that's wrong. And so on.

Here at LessWrong, we have a better truth-seeking method. The Bayesian perspective is a better paradigm. You can't just beg the question and say it's not a better paradigm because it lacks criticism or refutation; these are elements of your paradigm that are unnecessary to the Bayesian view.

And if you doubt this: I can show you that the Bayesian perspective is better than the Popperian perspective at coming to the truth. Say there were two scientific theories, both attempting to explain some aspect of the world. Both of these theories are well-developed; both make predictions that, while couched in very different terminology, make us expect mostly the same events to happen. They differ radically in their description of the underlying structure of the phenomenon, but these cash out to more or less the same events. Now, one of these theories is a little older, a little more supported by scientists, a little clunkier, a little less parsimonious. The other is newer, less supported, but simpler. Neither of these theories has had criticisms beyond simple appeals to incredulity directed at them. Neither of these theories has had any real refutations put forward. An event is observed, which provides strong evidence for the newer theory, but doesn't contradict anything in the older theory.

I put it to you that Popperians would be almost unanimously supporting the first theory - they would have learned of it first, and seen no reason to change - no refutation, etc. Bayesians would be almost unanimously supporting the second theory, because it more strongly predicted this event.

And the Bayesians would be right.

Replies from: TheOtherDave, curi
comment by TheOtherDave · 2011-04-10T15:47:36.807Z · LW(p) · GW(p)

Upvoted for being merit-worthily well-expressed, despite my desire to see less of this discussion thread in general.

Replies from: shokwave
comment by shokwave · 2011-04-10T16:05:32.352Z · LW(p) · GW(p)

I can't take too much credit. The entire second half is mostly just what Eliezer was saying in the sequences around Quantum Physics. Well, sure, I can take credit for expressing it well, I guess.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-04-10T16:12:28.671Z · LW(p) · GW(p)

(nods) Yes, the latter is what I was considering meritorious.

I mention it not because it's a huge deal -- it isn't, and ordinarily I would have just quietly upvoted it -- but given that I really don't want more of the thread that comment is in, I felt obligated to clarify what my upvote meant.

comment by curi · 2011-04-10T18:44:58.336Z · LW(p) · GW(p)

I put it to you that Popperians would be almost unanimously supporting the first theory

As someone who actually knows many Popperians, and as one myself, I can tell you they would not be. The second theory sounds way better, as you describe it.

the way of thinking where you build up your position to match reality as closely as possible without worrying about criticisms of it, and especially without worrying about criticizing other positions, is anti-critical, and pro-truth.

But what if you're making a mistake? Don't we need criticism just in case your way of building up the truth has a mistake?

Popper fetish

I see that you do like one kind of criticism: ad-hominems.

if I have a correct thing, and your thing is incompatible with my thing, due to the nature of reality, your thing is wrong.

Logically, yes. But do you have a correct thing? What if you don't. That's why you need criticism. Because you're fallible, and your methods are fallible too, and your choice of methods fallible yet again.

That is, Popperian philosophy is all about the social rules of belief: you are allowed to believe whatever you like, until it's falsified, criticized, or refuted. It's rude, impolite, gauche to continue believing something that's falsified.

As a Popperian far more familiar with the Popperian community than you, let me tell you:

this is wrong. This is not what Popperians think, it's not what Popper wrote, it's not what Popperians do.

Where are you getting this nonsense? Now that I've told you it's not what we're about, will you reconsider and try to learn our actual views before you reject Popper?

Replies from: shokwave
comment by shokwave · 2011-04-11T02:28:16.578Z · LW(p) · GW(p)

As someone who actually knows many Popperians, and as one myself, I can tell you they would not be. The second theory sounds way better, as you describe it.

Can you tell me what process they would use to move over to the new theory? Do keep in mind that everyone started on the first theory - the second theory didn't even exist around the time the first theory picked up momentum.

Replies from: curi
comment by curi · 2011-04-11T02:47:11.389Z · LW(p) · GW(p)

You come up with a criticism of the old theory, and an explanation of what the new theory is and how it does better (e.g. by solving the problem that was wrong with the old theory). And people are free the whole time to criticize either theory, and suggest new ones, as they see fit. If they see something wrong with the old one, but not the new one, they will change their minds.

Replies from: shokwave
comment by shokwave · 2011-04-11T03:19:34.017Z · LW(p) · GW(p)

But there is no criticism of the old theory! At least, no criticism that isn't easily dismantled by proponents of the old theory. There is no problem that is wrong with the old theory!

This is not some thought experiment, either. This situation is actually happening, right now, with the Copenhagen and Many Worlds interpretations of quantum physics. Copenhagen has the clumsy 'decoherence', Many Worlds has the elegant, well, many worlds. The event that supports Many Worlds strongly but also supports Copenhagen weakly is the double-slit experiment.

Replies from: wnoise, curi
comment by wnoise · 2011-04-11T03:53:42.908Z · LW(p) · GW(p)

Bad example. Decoherence is a phenomenon that exists in any interpretation of quantum mechanics, and is heavily used in MWI as a tool to explain when branches effectively no longer interact.

Replies from: DanielLC
comment by DanielLC · 2011-04-11T03:56:42.888Z · LW(p) · GW(p)

I think he meant wave-function collapse.

comment by curi · 2011-04-11T03:24:21.376Z · LW(p) · GW(p)

But the Copenhagen interpretation has no defense. It doesn't even make sense.

Decoherence is a major concept in MWI. Maybe if you learned the arguments on both sides the situation would be clearer to you.

I think you've basically given up on the possibility of argument reaching a conclusion, without even learning the views of both sides first. There are conclusive arguments to be found -- on this topic and many others -- and plenty of unanswered and unanswerable criticisms of Copenhagen.

Conclusive doesn't mean infallible, but it does mean that it actually resolves the issue and doesn't allow for:

easily dismantled by proponents of the old theory

The original statement was:

Now, one of these theories is a little older, a little more supported by scientists, a little clunkier, a little less parsimonious.

Clunkier is a criticism.

comment by PhilGoetz · 2011-04-10T00:39:48.399Z · LW(p) · GW(p)

-5 points seems harsh for a statement that is technically correct.

Replies from: wedrifid, JoshuaZ, Desrtopa, curi
comment by wedrifid · 2011-04-10T01:34:29.962Z · LW(p) · GW(p)

I would prefer not to see any more comments by curi in conversations about Popper. The quality of discussion makes continued exposure unpleasant. This makes a decision to downvote all such comments appropriate.

comment by JoshuaZ · 2011-04-10T00:54:27.467Z · LW(p) · GW(p)

I disagree. It is a good example where it is obvious or close to obvious what was intended. The remark simply damaged the signal to noise ratio while avoiding grappling with the point.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-04-10T02:59:26.394Z · LW(p) · GW(p)

True - but I don't think it would ordinarily have been down-voted that hard, for that sin.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-10T03:01:33.064Z · LW(p) · GW(p)

True - but I don't think it would ordinarily have been down-voted that hard, for that sin.

It is possible that some general annoyance with the user also resulted in the total.

comment by Desrtopa · 2011-04-10T14:49:34.647Z · LW(p) · GW(p)

Yes, but it's the worst sort of manifestation of this sort of behavior; if someone will attempt to generate conflict by nitpicking when they could so easily have interpreted the argument themselves in such a way as to render it unnecessary, can they be trusted to take arguments as seriously as they deserve to be?

comment by curi · 2011-04-10T01:03:42.138Z · LW(p) · GW(p)

Yes indeed. But also -- and maybe this is only a Popperian thing you guys think is wrong? -- I find that correcting statements, instead of just saying they're wrong and leaving it at that, often leads to better understanding. Sometimes you find it's not as easy to correct as you assumed, and maybe change your conclusion a bit.

I think it's easy to make mistakes without realizing it -- happens all the time -- and that not making blatant mistakes -- or at least caring about them and correcting them when you do, rather than deeming it unimportant -- is a good start for dealing with the harder ones.

Replies from: JoshuaZ, Emile
comment by JoshuaZ · 2011-04-10T01:24:56.560Z · LW(p) · GW(p)

Yes indeed. But also -- and maybe this is only a Popperian thing you guys think is wrong? -- I find that correcting statements, instead of just saying they're wrong and leaving it at that, often leads to better understanding. Sometimes you find it's not as easy to correct as you assumed, and maybe change your conclusion a bit.

No. You are missing the point. The easy correction would be for you to say "Well, the chess claim might not be true. But your point still goes through if I used Go and one of the world's best Go players or some chess variant like Andernach chess or cylindrical chess or Capablanca chess." And then respond to the argument in that form.

It isn't helpful to pick out a small problem with an argument someone makes and then ignore the rest of the argument until they've responded to it. It might feel fun, and it might be rhetorically impressive in some circumstances, but it doesn't really help resolve disagreement or improve understanding of what people are trying to communicate.

Replies from: curi
comment by curi · 2011-04-10T05:07:40.838Z · LW(p) · GW(p)

It isn't helpful to pick out a small problem with an argument

But that's just my point: it is.

We need to learn not to make small mistakes.

That's why the big problems are so hard: too many small mistakes everywhere.

Fix the little stuff. Then fix a little more. You make progress.

People should appreciate every single little mistake they make being pointed out, and should strive to stop making them. If not making little mistakes is too hard for someone, not making big ones is out of reach.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-10T05:18:15.887Z · LW(p) · GW(p)

curi, please reread what I wrote. Please note that the whole sentence was not, as you quoted, "It isn't helpful to pick out a small problem with an argument" but "It isn't helpful to pick out a small problem with an argument someone makes and then ignore the rest of the argument until they've responded to it." Please also reread the first paragraph where I outlined the ideal approach: noting the issue, supplying a correction yourself, and then replying to the corrected form.

Replies from: curi
comment by curi · 2011-04-10T05:31:18.744Z · LW(p) · GW(p)

I sometimes try to fix people's arguments for them.

What sometimes happens is they don't like or want the fixed form.

That especially happens a lot when it's a person with different methods of judging what is a good argument, and a rather different worldview than my own.

That issue applies here. Having people fix their own stuff is the less ambitious and less error prone approach. It's the one that is more resilient to frequent miscommunication.

I am especially not inclined to fix people's arguments when they are wrong either way. Fixing from wrong to still wrong is weird. In that case, I think it would be wise for people to fix mistakes in their argument, one by one. And starting with an easy one is good, not bad. If they won't even fix that, what is going to happen with a more subtle issue?

If I jump ahead to the fully clarified version of their argument -- written perhaps in a way that also helps make it easier to see why it's wrong -- people complain. If I write it in a way that makes it hard to see why it's wrong (as they've been doing by unconscious bias), then they might be a bit happier but we'll still have the problem of having to walk them through improvements of it until they get it to a better state.

Learning isn't super easy. You need patience and persistence. You need to be happy fixing one mistake at a time. You shouldn't complain that people aren't helping you skip steps. That it's too slow. That if only I would reply to what I know they meant, instead of what they actually said, we'd make progress faster. Attempts at mind reading increase miscommunication difficulties and misunderstanding. They usually seem to work well because the two people doing it share a ton of background knowledge, cultural assumptions, biases, and so on.

If I was replying to what people really meant it'd just create a mess. I would reply to a lot of their unconscious biases they weren't aware they had and they'd just get confused. And yet many of their statements express those unconscious ideas. To learn, they need to engage with the process of improving their ideas, not just insist I should be able to improve their stuff to the point it's true -- without changing the conclusion -- and then concede.

As to skipping ahead while smaller issues are still pending, I wonder why you think building on rotten foundations is wise. I think it can work sometimes, but it's a bit ambitious.

Replies from: Sniffnoy
comment by Sniffnoy · 2011-04-10T14:24:55.786Z · LW(p) · GW(p)

I am especially not inclined to fix people's arguments when they are wrong either way. Fixing from wrong to still wrong is weird.

Not really, it's a common method for showing that someone is very wrong. It's just the common "But let's suppose we fix that [alternatively, that I spot you that for now] - even then there's still a problem, as..."

Regarding much of the rest of the post: The idea is not to silently reply to a corrected version, but to explicitly note the correction and reply to that! Then people can, rather than just being confused about your correction, actually evaluate your corrected version and verify whether or not it still conforms to their intentions.

As to skipping ahead while smaller issues are still pending, I wonder why you think building on rotten foundations is wise. I think it can work sometimes, but it's a bit ambitious.

Hence you fix those foundations, rather than silently building on top of them.

Replies from: curi
comment by curi · 2011-04-10T19:31:09.185Z · LW(p) · GW(p)

I am especially not inclined to fix people's arguments when they are wrong either way

Not really, it's a common method

I don't like attributing to people false ideas they didn't actually write. I think that's a recipe for disaster. You disagree?

I wasn't talking about silent corrections either.

comment by Emile · 2011-04-10T14:36:08.104Z · LW(p) · GW(p)

Yes indeed. But also -- and maybe this is only a Popperian thing you guys think is wrong?

This has nothing to do with Popper (I hope, not having read much Popper myself), and everything to do with obnoxious nitpicking in bad faith.

comment by Manfred · 2011-04-10T02:00:55.596Z · LW(p) · GW(p)

I stopped listening fairly quickly, after determining that it was rubbish from a Bayesian perspective. Specifically, I stopped listening when he says that the future of humanity is different from Russian roulette because the future can't be modeled by probability. This is the belief that there is a basic "probability-ness" that dice have and gun chambers have but people don't, and that things with "probability-ness" can be described by probability, but things without "probability-ness" can't be. But of course, we're all fermions and bosons in the end - there is no such thing as "probability-ness"; probability is simply what happens when you reason from incomplete information.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-04-10T09:39:45.099Z · LW(p) · GW(p)

Deutsch is arguing (and I think correctly) that there's a difference between knowing the full range of possibilities in a system and not knowing it.

Replies from: Manfred
comment by Manfred · 2011-04-10T16:05:23.111Z · LW(p) · GW(p)

That seems pretty reasonable. "What will the future be like" is a pretty undetermined question.

However, he was applying this same logic to "will civilization be destroyed," where "destroyed" and "not destroyed" are a pretty complete range of possibilities.

Unless maybe he meant that you have to know every possible way civilization could be destroyed in order to estimate a probability, which seems like searching for a reason that civilization doesn't have probability-ness.

comment by Vladimir_Nesov · 2011-04-09T23:45:22.198Z · LW(p) · GW(p)

I think this talk motivates a Yudkowsky-Deutsch debate on bloggingheads.

Replies from: alexflint
comment by Alex Flint (alexflint) · 2011-04-10T17:17:11.873Z · LW(p) · GW(p)

Oh boy oh boy oh boy that would rock my socks

comment by CarlShulman · 2011-04-09T19:10:08.861Z · LW(p) · GW(p)

This should not have been made as a top-level post without some more explanation to let people evaluate whether to watch the video.

Replies from: curi
comment by curi · 2011-04-09T19:13:46.166Z · LW(p) · GW(p)

I don't want to bias the reactions.

Replies from: pjeby, JoshuaZ
comment by pjeby · 2011-04-09T19:30:11.706Z · LW(p) · GW(p)

You might want to move this to the discussion section, then; unadorned links like this are generally not considered appropriate to the main LW section.

(You can move it by editing the article, then changing where it's being published to.)

Replies from: Dorikka
comment by Dorikka · 2011-04-09T20:33:41.662Z · LW(p) · GW(p)

Yep. I would downvote this, but it's already invisible on the top-level page.

Replies from: Larks
comment by Larks · 2011-04-09T23:21:36.803Z · LW(p) · GW(p)

The main page is for things we think of sufficient quality that they're worth the time and cognitive effort of reading. Is this worth an hour of your time to read? If not, it should be downvoted to invisibility.

Replies from: Dorikka
comment by Dorikka · 2011-04-09T23:32:38.098Z · LW(p) · GW(p)

As of now and when I first saw the post appear on the sidebar, it is/was invisible on the main page and visible only through the sidebar.

Replies from: Larks
comment by Larks · 2011-04-09T23:34:46.842Z · LW(p) · GW(p)

Yup.

It's worth noting in general that the 'main page' is actually the 'promoted' page, which requires an admin to move you there. But you're right, the article is also not visible on the 'new' page either.

comment by JoshuaZ · 2011-04-09T21:17:14.407Z · LW(p) · GW(p)

Unfortunately, this attitude and your decision to put this in main rather than the discussion section is getting it downvoted. That will likely continue. Moreover, downvotes for main section articles hurt a lot more than downvotes in the discussion section. I strongly urge you to move this into the discussion section where it will be considered a much more reasonable post.

Replies from: PhilGoetz, curi
comment by PhilGoetz · 2011-04-10T00:44:35.993Z · LW(p) · GW(p)

Everyone who downvotes links posted in the main section because they think it's a cheap way to get karma - you can just choose not to vote for them. Thus, trying to discourage people from posting to the main page for karma reasons is trying to make karma voting decisions for other LWers.

Karma is supposed to indicate which articles and comments are worth reading. Karma doesn't function to tell people whose opinions to respect, so people should stop worrying that other people are getting easy karma. Trust me - I have 15,000 karma, and people don't cut me any more slack than when I had none.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-10T00:51:26.738Z · LW(p) · GW(p)

Curi's karma has repeatedly dropped low enough that his posting rate is moderated. If that's going to happen then it should occur based on the quality of his posts, not on him being socially tone-deaf about community norms of where to post things.

(Incidentally, there's another reason to downvote short link posts and the like in the main section- some people just have the RSS feed for the main posts and don't want every little link to show up).

Replies from: PhilGoetz
comment by PhilGoetz · 2011-04-10T00:56:32.582Z · LW(p) · GW(p)

Curi's karma has repeatedly dropped low enough that his posting rate is moderated. If that's going to happen then it should occur based on the quality of his posts, not on him being socially tone-deaf about community norms of where to post things.

That's Curi's decision.

(Incidentally, there's another reason to downvote short link posts and the like in the main section- some people just have the RSS feed for the main posts and don't want every little link to show up).

Okay - a valid reason.

I would still like to say that, when considering whether to impose a social norm against posting certain things on the main page, saying that you think they're unworthy of karma is not a good reason, because (a) karma point accumulation to users beyond getting enough to post does not give them any advantage, and (b) you can choose not to vote, and therefore you can object only because you don't trust the judgement of other users on LW and so would like to deprive them of the freedom to vote for such articles.

This may not have been your reason, but this seemed like a good place to make my point.

comment by curi · 2011-04-10T20:35:08.819Z · LW(p) · GW(p)

Unfortunately, this attitude and your decision to put this in main rather than the discussion section is getting it downvoted. That will likely continue.

It didn't. It made it back up to a score of 0.

Learn anything? Or since you only said "likely", will you say that your prediction isn't contradicted by the result? Is never actually being contradicted by evidence one of the main appeals of only saying stuff is likely, instead of conjecturing that it's true?

Replies from: Randaly, None
comment by Randaly · 2011-04-11T16:19:03.574Z · LW(p) · GW(p)

As far as I know, the minimum possible karma is zero; scores below that are, IIRC, displayed as zero.

Replies from: Sniffnoy, JoshuaZ
comment by Sniffnoy · 2011-04-12T02:14:13.823Z · LW(p) · GW(p)

Scores in the negative are kept track of despite not being displayed, however. In particular, people with negative karma have a commenting frequency limit, whereas people with zero karma do not.

comment by JoshuaZ · 2011-04-11T16:22:58.297Z · LW(p) · GW(p)

Scores for total karma are displayed as zero if they are negative. Scores for individual articles can be negative (and in fact it is back to -1). I have various hypotheses about why the score has moved up but I'm waiting to gather more evidence before I state them.

comment by [deleted] · 2011-04-10T20:43:25.965Z · LW(p) · GW(p)

It made it back up to a score of 0.

The circle displaying your karma can't display negative scores.

comment by Vladimir_Nesov · 2011-04-10T01:51:32.211Z · LW(p) · GW(p)

It is a talk given to the Oxford Transhumanists. Their previous speaker was Eliezer Yudkowsky.

To clarify what I originally misinterpreted on reading this description: according to this page, Yudkowsky was giving a talk on 25 Jan 2011, while Deutsch on 10 Mar 2011, so "previous speaker" doesn't refer to giving talks in succession.

comment by Perplexed · 2011-04-10T19:51:56.627Z · LW(p) · GW(p)

Thanks for posting this. I would definitely enjoy seeing a debate between Deutsch and Yudkowsky.

The part that dealt with ethics was incredibly naive. About 47 minutes in, for example, he is counseling us not to fear ET, because ET's morality will inevitably be superior to our own. And the slogan: "All evils are due to lack of knowledge". Why does this kind of thing remind me of George W. Bush?

But I agreed with some parts of his argument for the superiority of a Popperian approach over a Bayesian one when 'unknown unknowns' regarding the growth of knowledge are involved. For example, 42:30 in, when he quotes Popper advising us to drop the hopeless search for an inerrant source of knowledge, and to instead search for a fairly reliable method of eliminating error once it has become established. Maybe a good idea.

I have mixed feelings, though, about his advocacy of optimism. He argues that Malthus's pessimistic predictions failed simply because Malthus had no way of foreseeing the positive effects of the growth of knowledge. But by the same token, optimistic predictions of a positive future for mankind are also liable to fail because they attempt to predict that the growth of knowledge will include specific breakthroughs.

Replies from: Eugine_Nier, timtyler, curi
comment by Eugine_Nier · 2011-04-10T22:54:24.123Z · LW(p) · GW(p)

And the slogan: "All evils are due to lack of knowledge". Why does this kind of thing remind me of George W. Bush?

Well, it reminds me of Plato, which is much more damning.

comment by timtyler · 2011-04-15T15:41:59.874Z · LW(p) · GW(p)

The part that dealt with ethics was incredibly naive. About 47 minutes in, for example, he is counseling us not to fear ET, because ET's morality will inevitably be superior to our own.

This seems pretty daft to me too. It looks like a kind of moral realism - according to which being eaten by aliens might well be "good" - since it leads to more "goodness".

Replies from: Perplexed
comment by Perplexed · 2011-04-15T15:54:43.473Z · LW(p) · GW(p)

Right. But moral realism is not necessarily daft. It only becomes so when you add in universalism and a stricture against self-indexicality.

Replies from: timtyler
comment by timtyler · 2011-04-15T18:21:58.267Z · LW(p) · GW(p)

I have some sympathies for the idea that convergent evolution is likely to eventually result in a universal morality - rather than, say, pebble sorters and baby eaters. If true, that might be considered to be a kind of moral realism.

Replies from: Perplexed
comment by Perplexed · 2011-04-15T18:49:59.704Z · LW(p) · GW(p)

It is a kind of moral realism if you add in the proclamation that one ought to do now that which we all converge toward doing later. Plus you probably need some kind of argument that the limit of the convergence is pretty much independent of the starting point.

My own viewpoint on morality is closely related to this. I think that what one morally ought to do now is the same as what one prudentially and pragmatically ought to do in an ideal world in which all agents are rational, communication between agents is cheap, there are few, if any, secrets, and lifetimes are long. In such a society, a strongly enforced "social contract" will come into existence, which will have many of the characteristics of a universal morality. At least within a species. And to some degree, between species.

Replies from: timtyler
comment by timtyler · 2011-04-16T13:14:41.092Z · LW(p) · GW(p)

It is a kind of moral realism if you add in the proclamation that one ought to do now that which we all converge toward doing later.

...or if you think what we ought to be doing is helping to create the thing with the universal moral values.

I'm not really convinced that the convergence will be complete, though. If two advanced alien races meet, they probably won't agree on all their values - perhaps due to moral spontaneous symmetry breaking - and small differences can become important.

comment by curi · 2011-04-10T20:09:20.446Z · LW(p) · GW(p)

And the slogan: "All evils are due to lack of knowledge".

You should read his book, The Beginning of Infinity. It's not a slogan but a philosophical position which he explains at length. Learn why he thinks it. He's not an idiot.

Since you partly agree with him, and have mixed feelings, I think it'd be worth looking into for you, so I wanted to let you know it's much more than a slogan! And "optimism" to DD does not mean "predicting a positive future", it's not about wearing rose colored glasses.

comment by Vladimir_Nesov · 2011-04-09T19:29:57.633Z · LW(p) · GW(p)

From the very beginning of the talk:

I don't have to persuade you that, for instance, life is better than death; and I don't have to explain exactly why knowledge is a good thing, and that the alleviation of suffering is good, and communication, and travel, and space exploration, and ever-faster computers, and excellence in art and design, all good.

One of these things is not like the others.

Replies from: lukeprog, curi
comment by lukeprog · 2011-04-09T19:50:33.238Z · LW(p) · GW(p)

Ever-faster computers jumped out at me when I first heard that sentence.

Replies from: Matt_Simpson, JoshuaZ
comment by Matt_Simpson · 2011-04-11T00:50:28.172Z · LW(p) · GW(p)

Me too. Instrumental vs. terminal values.

comment by JoshuaZ · 2011-04-09T21:14:35.566Z · LW(p) · GW(p)

Really? The comment about art and design jumped out at me.

Replies from: curi
comment by curi · 2011-04-09T21:17:00.756Z · LW(p) · GW(p)

FYI DD's talk on why flowers are beautiful:

http://193.189.74.53/~qubitor/people/david/index.php?path=Video/Why%20Are%20Flowers%20Beautiful

That URL is weird. In case it breaks, it's on youtube in parts:

http://www.youtube.com/watch?v=56o2n8sVvM8

comment by curi · 2011-04-09T19:31:25.793Z · LW(p) · GW(p)

Which?

comment by Larks · 2011-04-09T23:18:52.937Z · LW(p) · GW(p)

It's slow loading for me due to a slow internet connection, but if the questions at the end are included, I was the one who asked about insurance companies.

I don't think his response was very satisfactory, though I have a better version of my question.

Suppose I give you some odds p:q and force you to bet on some proposition X (say, Democrats win in 2012) being true, but I let you pick which side of the bet you take; a payoff of p if X is true, or a payoff of q if X is false. For some (unique) value of p/q, you'll switch which side you want to take.

It seems this can force you to assign probabilities to arbitrary hypotheses.
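A minimal sketch of the elicitation (not from the original comment; Python, with made-up payoffs and odds): an expected-value maximizer switches sides exactly where P(X)*p = (1-P(X))*q, so the switch point reveals an implied P(X) = q/(p+q).

```python
# The agent must take one side of the bet; the p:q ratio at which it
# switches sides pins down an implied probability for X.

def preferred_side(p, q, prob_x):
    """Which side an expected-value maximizer takes, given its P(X)."""
    ev_bet_on_x = prob_x * p             # wins p if X turns out true
    ev_bet_against_x = (1 - prob_x) * q  # wins q if X turns out false
    return "X" if ev_bet_on_x >= ev_bet_against_x else "not-X"

def implied_probability(p, q):
    """The P(X) at which the agent is exactly indifferent."""
    return q / (p + q)

# Someone who switches sides around odds of 3:1 is acting as if
# P(X) = 1/(3+1) = 0.25.
print(implied_probability(3, 1))   # 0.25
print(preferred_side(3, 1, 0.3))   # X
print(preferred_side(3, 1, 0.2))   # not-X
```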

Replies from: Eugine_Nier, curi
comment by Eugine_Nier · 2011-04-10T16:31:30.779Z · LW(p) · GW(p)

Suppose I give you some odds p:q and force you to bet on some proposition X (say, Democrats win in 2012) being true, but I let you pick which side of the bet you take; a payoff of p if X is true, or a payoff of q if X is false. For some (unique) value of p/q, you'll switch which side you want to take.

It seems this can force you to assign probabilities to arbitrary hypotheses.

So, how precise should these probabilities be? And why can't I apply this argument to force the probabilities to have arbitrarily high precision?

Replies from: Larks
comment by Larks · 2011-04-10T18:57:27.506Z · LW(p) · GW(p)

Not that I can think of, besides memory/speed constraints, and how much updating you can have done with the evidence you've received.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-04-10T19:31:53.457Z · LW(p) · GW(p)

and how much updating you can have done with the evidence you've received.

Why can't it happen that you have so little and/or such weak evidence, that the amount of precision you should have is none at all?

Replies from: Manfred, Larks
comment by Manfred · 2011-04-10T20:01:44.550Z · LW(p) · GW(p)

Imagine that you had to give a probability density to each probability estimate you could make of Obama winning in 2012 being the correct one. You'd end up with something looking like a bell curve over probabilities, centered somewhere around "Obama has a 70% (or something) chance of winning." Then to make a decision based on that distribution using normal decision theory, you would average over the possible results of an action, weighted by the probability. But this is equivalent to taking the mean of your bell curve - no matter how wide or narrow the bell curve, all that matters to your (standard decision theory) decision is the location of the mean.

Less evidence is like a wider bell curve, more evidence like a sharper one. But as long as the mean stays the same, the average result of each decision stays the same, so your decision will also be the same.
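A quick numeric sketch of this point (not from the original comment; Python, with made-up numbers): two belief distributions over "the probability that X happens", one narrow and one wide but sharing the same mean, give identical expected utilities for any simple bet, so they recommend the same decision.

```python
# Two distributions over "the probability that X happens" with the same
# mean (0.70): one sharp, one wide. For a bet that wins u_win if X and
# loses u_lose otherwise, expected utility depends only on the mean.

narrow = {0.65: 0.25, 0.70: 0.50, 0.75: 0.25}
wide   = {0.40: 0.25, 0.70: 0.50, 1.00: 0.25}

def mean(dist):
    return sum(p * weight for p, weight in dist.items())

def expected_utility(dist, u_win=10.0, u_lose=-5.0):
    # average over possible "true" probabilities, weighted by belief
    return sum(weight * (p * u_win + (1 - p) * u_lose)
               for p, weight in dist.items())

print(mean(narrow), mean(wide))                          # both 0.70 (up to rounding)
print(expected_utility(narrow), expected_utility(wide))  # both 5.5 (up to rounding)
```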

So there are two kinds of precision here: the precision of the mean probability given your current (incomplete) information, which can be arbitrarily high, and the precision with which you estimate the true answer, which is the width of the bell curve. So when you say "precision," there is a possible confusion. Your first post was about the "how precise can these probabilities be," which was the first (and boring, since it's so high) kind of precision, while this post seems to be talking about the second kind, the kind that is more useful because it reflects how much evidence you have.

Replies from: Eugine_Nier, None
comment by Eugine_Nier · 2011-04-10T20:48:20.344Z · LW(p) · GW(p)

So there are two kinds of precision here: the precision of the mean probability given your current (incomplete) information, which can be arbitrarily high, and the precision with which you estimate the true answer, which is the width of the bell curve.

I'm not sure what you mean by the "true answer". After all, in some sense the true probability is either 0 or 1; it's just that we don't know which.

Replies from: Manfred
comment by Manfred · 2011-04-10T21:09:11.826Z · LW(p) · GW(p)

That's a good point. So I guess the second kind of precision doesn't make sense in this case (like it would if the bell curve were over, say, the number of beans in a jar), and "precision" should only refer to "precision with which we can extract an average probability from our information," which is very high.

comment by [deleted] · 2011-04-11T14:46:14.452Z · LW(p) · GW(p)

Imagine that you had to give a probability density to each probability estimate you could make of Obama winning in 2012 being the correct one. You'd end up with something looking like a bell curve over probabilities

Bell curves prefer to live on unbounded intervals! It would be less jarring (and less convenient for you?) if he ended up with something looking like a uniform distribution over probabilities.

Replies from: Manfred
comment by Manfred · 2011-04-11T18:07:57.282Z · LW(p) · GW(p)

It's equally convenient, since the mean doesn't care about the shape. I don't think it's particularly jarring - just imagine it going to 0 at the edges.

The reason you'll probably end up with something like a bell curve is a practical one - the central limit theorem. For complicated problems, you very often get what looks something like a bell curve. Hardly watertight, but I'd bet decent amounts of money that it is true in this case, so why not use it to add a little color to the description?

comment by Larks · 2011-04-10T20:03:45.014Z · LW(p) · GW(p)

Well, your prior gives you a unique value, and Bayes' theorem is a function, so it gives you a unique value for every input.
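For concreteness, a minimal sketch (not from the original comment; Python) of Bayes' theorem as a function for a binary hypothesis: the same prior and likelihoods always map to exactly one posterior.

```python
# Bayes' theorem as a plain function: one posterior for each
# (prior, likelihood) input.

def posterior(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    numerator = likelihood_e_given_h * prior_h
    evidence = numerator + likelihood_e_given_not_h * (1 - prior_h)
    return numerator / evidence

# Example: prior 0.5, evidence twice as likely under H as under not-H.
print(posterior(0.5, 0.8, 0.4))  # 0.666...
```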

Replies from: Eugine_Nier, Eugine_Nier
comment by Eugine_Nier · 2011-04-10T20:50:33.575Z · LW(p) · GW(p)

Well, your prior gives you a unique value,

So the claim is that you have arbitrary precision priors. What are they, and where are they stored?

Replies from: Larks
comment by Larks · 2011-04-10T21:38:21.466Z · LW(p) · GW(p)

Sorry, I haven't been very clear. A perfect Bayesian agent would have a unique real number to represent its level of belief in every hypothesis.

The betting-offer system I described above can force people (and force any hypothetical agent) to assign unique values.

Of course, an actual person won't be capable of this level of precision or coherence.

comment by Eugine_Nier · 2011-04-10T20:17:05.096Z · LW(p) · GW(p)

Yes, but actually computing that function is computationally intractable in all but the simplest examples.

comment by curi · 2011-04-09T23:23:36.078Z · LW(p) · GW(p)

That does not require probabilities. You could also come up with an explanation of what value to switch at.

Replies from: Larks
comment by Larks · 2011-04-09T23:29:55.443Z · LW(p) · GW(p)

In that case, we're done. Standard probability theory/Cox's theorem/de Finetti would give us a ready-made criticism of any conjecture that wasn't isomorphic to probability theory, so we'd have isomorphism, which is all we need. Once we have functional equivalence, we can prove results in probability theory, apply Bayes' theorem, etc., and then at the end translate back into Popperesque.

(Also, IIRC, Jaynes only claimed to have proven that rational reasoning must be isomorphic to probability theory)

Replies from: curi
comment by curi · 2011-04-09T23:46:54.787Z · LW(p) · GW(p)

I don't quite get your point. You are saying that if you bring up betting (a real life scenario where probability is highly relevant), then given your explanations that help you come up with priors (background knowledge needed to be able to do any math about it), you shouldn't act on those explanations in ways that violate math. OK, so what? Probability math is useful in some limited cases, given some explanatory knowledge to get set up. No one said otherwise.

Replies from: ata, Sniffnoy
comment by ata · 2011-04-10T18:40:55.096Z · LW(p) · GW(p)

You are saying that if you bring up betting (a real life scenario where probability is highly relevant)

Every decision is a bet.

comment by Sniffnoy · 2011-04-10T14:47:19.336Z · LW(p) · GW(p)

I think you are beginning to get the point. :) The key missing fact here is that in fact the resulting math is highly constraining, to the point that if you actually follow it all the way you will be acting in a manner isomorphic to a Bayesian utility-maximizer.

Replies from: curi
comment by curi · 2011-04-10T19:04:21.042Z · LW(p) · GW(p)

But the background knowledge part is highly unconstrained (given just your math). When a math algorithm gives constrained output, but you have wide scope for choice of input, it's not so good. You need to do stuff to constrain the inputs.

It seems to me you just dump all the hard parts of thinking into the priors and then say the rest follows. But the hard parts are still there. We still need to work out good explanations to use as input for the last step of not doing stuff that violates math/logic.

comment by NancyLebovitz · 2011-04-10T09:37:36.853Z · LW(p) · GW(p)

My first reaction to his unlimited progress riff was "every specific thing I care about will be gone". The answer is presumably that there will be more things to care about. However, that initial reaction is probably common enough that it might be worth working on replies to people who are less inclined to abstraction than I am.

I'll take the edge off his optimism somewhat by pointing out that individuals and cultures can be rolled over by change, even if the whole system is becoming more capable, and we care about individuals and cultures (especially if they're us or ours) as well as the whole system. Taking European diseases to the New World happened by accident.

Still, the pursuit of knowledge and competence may well be the least bad strategy the vast majority of the time (rather than a guarantee of things becoming more wonderful for what we personally care about), and I'm intrigued by the idea of explicitly intending to increase the returns for cooperation.

comment by JGWeissman · 2011-04-09T19:17:00.212Z · LW(p) · GW(p)

How was Curi able to post this without having 20 karma?

Replies from: curi
comment by curi · 2011-04-09T19:18:34.226Z · LW(p) · GW(p)

I had 20 karma. I don't anymore. My karma has had a lot of fluctuations.

edit: see. back to 21 now.

comment by timtyler · 2011-04-15T14:50:51.007Z · LW(p) · GW(p)

Deutsch gives Malthus as an example of a failed pessimistic prediction - at 23:00. However, it still looks as though Malthus is likely to have been correct. Populations increase exponentially, while resources expand at most in a polynomial fashion - due to the light cone. Deutsch discusses this point 38 minutes in, claiming relativistic time dilation changes this conclusion, which I don't think it really does: you still wind up with most organisms being resource-limited, just as Malthus described.

comment by timtyler · 2011-04-15T14:21:50.331Z · LW(p) · GW(p)

Martin Rees is misrepresented 4:04 in. What Rees actually said was:

the odds are no better than 50-50 that our present civilisation on Earth will survive to the end of the present century without a serious setback

...whatever a "serious setback" is supposed to mean.

Replies from: vallinder
comment by vallinder · 2011-07-14T17:56:07.963Z · LW(p) · GW(p)

Do you have a reference for that? My copy of Our Final Hour contains the same sentence minus "without a serious setback".

Replies from: timtyler
comment by timtyler · 2011-07-14T22:35:07.715Z · LW(p) · GW(p)

Our Final Century, page 8 line 4.

It seems as though Rees - rather confusingly - said different things on the topic in Our Final Century and Our Final Hour.

Replies from: vallinder
comment by vallinder · 2011-07-18T10:52:06.326Z · LW(p) · GW(p)

Ah, that's interesting. Thanks for clarifying.