Posts

Comments

Comment by clockbackward on Argument Screens Off Authority · 2010-10-11T14:03:26.181Z · LW · GW

Unfortunately, in practice, being as knowledgeable about the details of a particular scenario as an expert does not imply that you will process the facts as correctly as the expert. For instance, an expert and I may both know all of the facts of a murder case, but (if expertise means anything) they are still more likely to make correct judgements about what actually happened, due to their prior experience. If I actually had their prior experience, it's true that their authority would mean a lot less, but in that case I would be closer to being an expert myself.

To give another example, a mathematically inclined high school student may see a mathematical proof, with each step laid out before them in detail. The high school student may have the opportunity to analyze every step to look for potential problems in the proof and see none. Then, a mathematician may come along, glance over the proof, and say that it is invalid. Who are you going to believe?

In some cases, we are the high school student. We can stare at all the raw facts (the details of the proof), they all make sense to us, and we feel very strongly that we can draw a certain inference from them. And yet, we are unaware of what we don't know that the expert does know. Or the expert is simply better at reasoning about these kinds of problems, or at avoiding logical traps that sound valid but are not.

Of course, the more you know about the expert's arguments, the less their authority counts. But sometimes the expertise lies in the ability to correctly process the type of facts at hand. If a mathematician's argument about the invalidity of step 3 does not seem convincing to you, and your argument about why step 3 is valid seems totally convincing, you should still at least hesitate before concluding that you are correct.

Comment by clockbackward on Debate tools: an experience report · 2010-02-07T16:36:16.065Z · LW · GW

A point about using diagrams to make arguments: if you are attempting to convince a person that something is true, rather than just launching into your evidence and favorite arguments, it is often most efficient to begin by asking a series of questions to determine precisely how the person disagrees with you. The questioning allows you to home in on the most important sticking points that prevent the other party from coming to your conclusion. These points can then be attacked individually, preventing you from wasting time making arguments that the other party already agrees with, or refuting positions that he or she has never even considered. The reason this relates to diagrams is that this method of argumentation can be viewed as a tree, with questions at the higher-level branches and arguments at the leaf nodes.
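To make the tree framing concrete, here is a minimal sketch in Python of such a question/argument tree. The class names, example questions, and the `agrees` check are all illustrative assumptions of mine, not taken from any particular debate tool:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    text: str                           # a question (internal node) or an argument (leaf)
    children: List["Node"] = field(default_factory=list)

    def is_leaf(self) -> bool:
        return not self.children


def find_sticking_point(node: Node, agrees) -> Optional[Node]:
    """Walk the tree, descending only into questions the other party disputes,
    and return the first leaf argument worth making."""
    if node.is_leaf():
        return node
    for child in node.children:
        if not agrees(child.text):      # skip branches the other party already accepts
            result = find_sticking_point(child, agrees)
            if result is not None:
                return result
    return None


# Example tree: questions at the branches, arguments at the leaves.
tree = Node("Should our city add protected bike lanes?", [
    Node("Do bike lanes reduce traffic injuries?", [
        Node("Cite studies showing injury rates drop after lanes are installed."),
    ]),
    Node("Is the cost justified?", [
        Node("Compare construction costs with projected healthcare savings."),
    ]),
])

# Suppose the other party already agrees that injuries drop but disputes the cost.
disputed = find_sticking_point(tree, agrees=lambda claim: "injuries" in claim)
print(disputed.text if disputed else "No disagreement found.")
```

The design choice is the one described above: by pruning branches the other party already accepts, you spend your argument only on the genuine sticking points.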

Comment by clockbackward on Rationality Quotes: February 2010 · 2010-02-01T15:06:30.289Z · LW · GW

"In my experience, the most staunchly held views are based on ignorance or accepted dogma, not carefully considered accumulations of facts. The more you expose the intricacies and realities of the situation, the less clear-cut things become."

Mary Roach - from her book Spook

Comment by clockbackward on Bizarre Illusions · 2010-01-28T00:29:44.562Z · LW · GW

A side note: the only reason that prime numbers are defined in such a way as to exclude 1 and negative numbers is that mathematicians found this way of defining them a bit more useful than the alternative possibilities. Mathematicians generally want important theorems to be stated as simply as possible, and the theorems about primes are generally simpler if we exclude 1. There is a more detailed analysis of this question here:

http://www.askamathematician.com/?p=1269

Comment by clockbackward on The Wannabe Rational · 2010-01-27T02:52:09.580Z · LW · GW

Anyone who claims to be rational in all areas of their lives is speaking with irrational self-confidence. The human brain was not designed to make optimal predictions from data, or to carry out flawless deductions, or to properly update priors when new information becomes available. The human brain evolved because it helped our ancestors spread their genes in the world that existed millions of years ago, and when we encounter situations that are too different from those that we were built to survive in, our brains sometimes fail us. There are simple optical illusions, simple problems in probability, and simple logic puzzles that cause brain failures in nearly everyone.

Matters are even worse than this, though, because the logical systems in our brain and the emotional ones can (and often do) come to differing conclusions. For example, people suffering from a phobia of spiders know perfectly well that a realistic plastic spider cannot hurt them, and yet a plastic spider will likely terrify them, and may even send them running for their lives. Similarly, some theists have come to the conclusion that they logically have no reason to believe in a god, and yet the emotional part of their brain still fills them with the feeling of belief. I personally know one unusually rational person who admits to being just like this. I have even discussed with her ways in which she might try to bring her emotions in line with her reasoning.

So does one irrational belief disqualify someone from being a rationalist? Not at all. We all have irrational beliefs. Perhaps a more reasonable definition of a rationalist would be someone who actively seeks out their irrationalities and attempts to eradicate them. But identifying our own irrationalities is difficult, admitting to ourselves that we have them is difficult (for rationalists, anyway), removing them is difficult, and overcoming the emotional attachment we have to them is sometimes the most difficult part of all.

Comment by clockbackward on Tips and Tricks for Answering Hard Questions · 2010-01-25T16:57:20.607Z · LW · GW

Some further suggestions for handling hard questions, gleaned from work done in mathematics:

  1. Hard questions can often be decomposed into a number of smaller, not-quite-as-hard (or perhaps even easy) questions whose answers can be strung together to answer the original question. So a good first step is often to try decomposing the original question in various ways.

  2. Try to find a connection between the hard question and ones that people already know how to answer. Then, see if you can figure out what it would take to bridge the gap between the hard question and what has been answered. For example, if the hard question you are trying to answer relates to human consciousness, perhaps a (not entirely ridiculous) approach would be to first examine questions that researchers have already made headway with, like the neural correlates of consciousness, and then focus on solving the problem by thinking about how one could go from a theory of correlates to a theory of consciousness (maybe this is impossible, but then again maybe it is not). This sort of approach can be a lot faster than solving a problem from scratch, both because it can save you from reinventing the wheel, and because sometimes linking a problem to ones that are already solved is a lot easier than it was to solve those problems in the first place.

  3. Don't become attached to your first ideas. If you've had some great ideas that have gotten you close to solving a hard problem, but after a lot of work you still aren't where you want to be, don't get stuck forever in what could be a dead end. From time to time, try to refresh your perspective by starting over from scratch. People often find it painful to start over, or are so excited by their first promising ideas that they don't want to let them go, but when a problem is truly hard you may well need to restart it again and again before hitting on an approach that really will work. This is a bit like reseeding a random number generator.

  4. Discuss the problem with other very smart people (even if they are not experts in precisely what you are doing) and listen closely to what they have to say. You never know when someone will say something that will trigger a great idea, and the process of explaining what you are working on can cause you to gain a new understanding of the subject or, at least, force you to clarify your thinking.

Comment by clockbackward on The Prediction Hierarchy · 2010-01-24T20:56:37.454Z · LW · GW

I believe that the analysis of this problem can be made more mathematically rigorous than is done in this post. Not only will a formal analysis help us avoid problems in our reasoning, but it will clearly illustrate what assumptions have been made (so we can question their legitimacy).

Let's assume (as is done implicitly in the post) that you know with 100% certainty that the only two possible payouts are $1 million and $0. Then:

expected earnings = p($1 million payout) * $1 million + p($0 payout) * $0 - (ticket price)

= p($1 million payout) * $1 million - (ticket price)

= p($1 million payout|correctly computed odds) * p(correctly computed odds) * $1 million
  + p($1 million payout|incorrectly computed odds) * p(incorrectly computed odds) * $1 million
  - (ticket price)

= (1/40,000,000) * p(correctly computed odds) * $1 million
  + p($1 million payout|incorrectly computed odds) * (1 - p(correctly computed odds)) * $1 million
  - (ticket price)

We note now that we can write:

p($1 million payout|incorrectly computed odds) * (1 - p(correctly computed odds)) * $1 million

= p($1 million payout|incorrectly computed odds) * $1 million * (1 - p(correctly computed odds))

= (p($1 million payout|incorrectly computed odds) * $1 million + p($0 payout|incorrectly computed odds) * $0) * (1 - p(correctly computed odds))

= (expected payout given incorrectly computed odds) * (1 - p(correctly computed odds))

Hence, our resulting equation is:

expected earnings = (1/40,000,000) * p(correctly computed odds) * $1 million
  + (expected payout given incorrectly computed odds) * (1 - p(correctly computed odds))
  - (ticket price)

Now, under the fairly reasonable (but not quite true) assumption (which seems to be implicitly made by the author) that

(expected payout given incorrectly computed odds) = (expected payout given that we know nothing except that we are dealing with a lotto that costs (ticket price) to play)

we can convert to the notation of the article, which gives us:

E(L) = p(C) * p(L) * j + (1 - p(C)) * (e + t) - t

Here I have interpreted e as the expected value given that we are dealing with a lotto that we know nothing else about (rather than expected earnings under those circumstances). The author describes e as an "expected payoff" but I don't think that is really quite what was meant (unless "payoff" refers to the total net payoff, including the ticket price).

We can now rearrange this formula:

E(L) = p(C) * p(L) * j + (1 - p(C)) * (e + t) - t

= p(C) * p(L) * j + (1 - p(C)) * e + (1 - p(C)) * t - t

= p(C) * p(L) * j + (1 - p(C)) * e - p(C) * t

= p(C) * (p(L) * j - t) + (1 - p(C)) * e

which finally gets us to the author's terminal formula.
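To make the formula concrete, here is a small numerical sketch in Python. Only the 1-in-40,000,000 odds and the $1 million jackpot come from the example above; the values chosen for p(C), e, and t are purely illustrative guesses of mine, not numbers from the post:

```python
# Numerical sketch of the final formula E(L) = p(C) * (p(L) * j - t) + (1 - p(C)) * e.
# p_C, e, and t below are illustrative assumptions, not values given in the post.

p_L = 1 / 40_000_000   # probability of winning, given correctly computed odds
j = 1_000_000          # jackpot in dollars
t = 1                  # ticket price in dollars (assumed)
p_C = 0.999            # probability that we computed the odds correctly (assumed)
e = -0.50              # expected earnings of a lotto we know nothing else about (assumed)

E_L = p_C * (p_L * j - t) + (1 - p_C) * e
print(f"Expected earnings per ticket: ${E_L:.4f}")
# With these numbers p(L) * j is only $0.025, so the ticket price (and the small
# chance that our odds are simply wrong) dominates, and E(L) comes out negative.
```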

What is the point of doing this careful, formal analysis? Well, we now see explicitly where the author's formula comes from, it is derived rigorously, and we are fully aware of what assumptions were made. The assumptions are:

  1. You know with 100% certainty that the only two possible payouts are $1 million and $0.

  2. The expected payout given incorrectly computed odds equals the expected payout given that we know nothing except that we are dealing with a lotto that costs the given ticket price to play.

The first assumption is reasonable assuming that the lotto is not fraudulent, you don't have problems reading the rules, it is not possible for multiple people to claim the payout, etc.

The second assumption, however, is harder to justify. There are many ways that a calculation of odds could go wrong (putting a decimal point in the wrong place, making a multiplication error, unknowingly misunderstanding the laws of probability, actually being insane, etc.). If we could really enumerate all of them, understand how they affect our computed payout probability, and estimate the probability of each occurring, then we could compute this missing factor exactly. As things stand, though, that is probably untenable. We should not expect, however, that errors which make the computed payout probability artificially larger will balance those that make it artificially smaller. Misplacing a decimal point, for example, will almost certainly be noticed if it leads to a probability greater than 100%, but not if it leads to one that is less than that (creating an asymmetry).

Comment by clockbackward on High Status and Stupidity: Why? · 2010-01-14T20:57:57.441Z · LW · GW

I would like to add another reason why we might perceive high-status individuals as being less intelligent (or talented) than they originally seemed: reversion to the mean. Often, a person gains high status (or at least meaningfully begins the climb to high status) as a result of one exceptional act, creation, or work. If our average skill level is X, we may often produce works that require skill close to X, but occasionally produce works that require much greater or much less skill than X (due to natural variability in our performance). We are much more likely to be recognized (gain high status) when we happen to produce something far above our own skill level than when we create something right near it, simply because works above our skill level are of higher quality than most of what we create, and quality productions are more likely to be recognized. Hence, you might expect that many famous people's works that got them noticed (whether a novel, movie, essay, business deal, or what have you) are actually better than would be expected from their average skill level, and their future work will seem less good by comparison.

This same effect might partly explain why highly anticipated movie sequels are in many cases not as good as the originals. The creators of the original may well have produced a work significantly above their average skill level (which made the movie more likely to become famous in the first place because so much skill was required), whereas the sequel will likely be closer to their true level!

For more about the statistical effect, google "Reversion to the Mean".
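For anyone who prefers to see the effect directly, here is a small Monte Carlo sketch in Python. The skill and luck distributions and the fame threshold are arbitrary choices of mine, made only to illustrate the selection effect:

```python
# Monte Carlo sketch of reversion (regression) to the mean for creative works.
# The distributions and the fame threshold are arbitrary assumptions; the point is
# only that works selected for being exceptional tend to be followed by more
# ordinary ones.

import random

random.seed(0)

N_CREATORS = 100_000
FAME_THRESHOLD = 2.0   # how far above average a debut must land to get noticed

debuts, follow_ups = [], []
for _ in range(N_CREATORS):
    skill = random.gauss(0, 1)                  # the creator's true average skill level
    debut = skill + random.gauss(0, 1)          # quality of the work that might make them famous
    if debut > FAME_THRESHOLD:                  # only exceptional debuts get noticed
        follow_up = skill + random.gauss(0, 1)  # the sequel: same skill, fresh luck
        debuts.append(debut)
        follow_ups.append(follow_up)

avg = lambda xs: sum(xs) / len(xs)
print(f"Creators who became famous: {len(debuts)}")
print(f"Average quality of the famous debut: {avg(debuts):.2f}")
print(f"Average quality of the follow-up:    {avg(follow_ups):.2f}")
# The follow-up average sits well below the debut average, even though nobody's
# underlying skill changed: the debut was selected partly for good luck.
```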

Comment by clockbackward on Are wireheads happy? · 2010-01-10T17:45:29.500Z · LW · GW

Perhaps it is true that our modest technology for altering brain states (simple wireheading, recreational drugs, magnetic stimulation, etc.) leads only to stimulation of the "wanting" centers of the brain and to simple (though at times intense) pleasurable sensations. On the other hand though, it seems almost inevitable that as the secrets of the brain are progressively unlocked, and as our ability to manipulate the brain grows, it will be possible to generate all sorts of brain states, including those "higher" ones associated with love, accomplishment, fulfillment, joy, religious experiences, insight, bliss, tranquility and so on. Hence, while your analysis appears to be quite relevant with regard to wireheading today, I am skeptical that it is likely to apply much to the brain technology that could exist 50 years from now.

Comment by clockbackward on Will reason ever outrun faith? · 2010-01-10T16:23:24.534Z · LW · GW

Sure, many people treat technology like magic, but as it becomes an ever-increasing part of our lives, it is hard to deny that the supply of jobs in science and engineering will increase, and subsequently that the number of scientists and engineers will grow to meet this demand. What is more, even if most people are not curious about the technology they grow up with, that does not preclude the possibility that increased technology correlates with increased interest in science. All it would take is 1 in 10, or even 1 in 20, people being influenced by the technology they use.

Comment by clockbackward on A Suite of Pragmatic Considerations in Favor of Niceness · 2010-01-09T21:52:49.303Z · LW · GW

Being mean to someone who is not themselves being mean or manipulative is often not just counterproductive and self-destructive (for the reasons you mentioned), but also the result of personal weakness and lack of control. Meanness usually results from one of the following situations:

  1. We feel angry and speak impulsively, entranced by our emotion.
  2. We speak carelessly and don't realize the potential emotional consequences of what we are saying.
  3. We are consciously and knowingly trying to hurt another person.

In case 1, our anger reflects a personal weakness, in that our emotion prevents us from behaving level-headedly and making sure that what we say really promotes our interests. In case 2, we are speaking without awareness of the consequences of our actions, and hence again put our interests at risk. Only case 3 has a shot at being (at certain times) a good strategy, but in that case we must ask why we are consciously and knowingly choosing to hurt another human being, and whether such an action is ethical and justified.

Comment by clockbackward on Communicating effectively: form and content · 2010-01-09T19:23:04.376Z · LW · GW

Unfortunately, the arguments that are most convincing to human minds are often not the most logical or the best supported by evidence. To be as convincing as possible, one must appeal to the emotional as well as the rational aspects of the brain. Arguments are unlikely to succeed when your audience is put on the defensive or made to feel as though their worldview is under attack, since this triggers emotional states. Anger, annoyance, and resentment impair the proper functioning of our logical abilities (think of what happens when you try reasoning with a person who is upset, or of the stupid decisions made in "crimes of passion"), and hence will damage the effectiveness of your argument. When discussing controversial topics, it is important (though quite difficult) to make your points without emotionally arousing the reader. Hence, one reason it is important to display niceness in your writing is that it makes it less likely that you will annoy your reader.

Comment by clockbackward on Will reason ever outrun faith? · 2010-01-09T17:22:56.170Z · LW · GW

Atheism has some properties that religion does not have, which may allow it to spread rapidly under certain cultural conditions that will likely exist in the future. For example, as technology continues to play a larger and larger role in our lives (and continues to spread to the poorest countries), that may well correlate with an increase in people's respect for and interest in science, as well as in the number of people trained in scientific fields. As science tends to directly contradict many religious stories (such as the creation of the world in Genesis), and since levels of religious belief among scientists are generally much lower than among the population at large, that may increase the rate at which atheism spreads.

Comment by clockbackward on Case study: Melatonin · 2010-01-09T17:04:34.935Z · LW · GW

Your argument essentially amounts to the following:

  1. Melatonin significantly improves sleep quality.
  2. It has no side effects.
  3. It has low cost.

If all of these are true, then who wouldn't want to take it? However, you spend a lot of time discussing point 3, but little on points 1 and 2, which are arguably the most important. How do you know that melatonin really improves sleep quality so much? Is it just based on your personal experience (and perhaps that of other people you know)? If so, that is not convincing, as large-scale randomized controlled studies are generally the only way to reliably tell whether a medicine works. There are too many complicating factors, like individual differences between people, the placebo effect, random fluctuation, reversion to the mean, and difficulty in remembering how we felt in the past, to rely on anecdotes.

Another point that your article does not address is that there is a difference between a medicine having no known side effects and a medicine ACTUALLY having no side effects. Any time you take a medicine, you run the risk of a reaction that is unknown, or that failed to be uncovered in whatever studies were done on it. For example, it is probably unknown whether a decade of melatonin use (rather than just one or two years) causes problems of any kind. This sort of danger is unfortunately difficult to quantify, but I believe it deserves at least some mention.

Comment by clockbackward on Reference class of the unclassreferenceable · 2010-01-09T00:03:32.362Z · LW · GW

It is not a good idea to try to predict the likelihood of the emergence of future technologies by noting how those technologies failed to emerge in the past. The reason is that cryonics, singularities, and the like are very obviously more likely to exist in the future than they were in the past (due to the invention of other new technologies), and hence past failures cease to be relevant as the years pass. Just prior to the successful invention of most new technologies, there were many failed attempts, and hence it would seem (looking backward and applying the same reasoning) that the technology is unlikely ever to be possible.