Posts

Take heed, for it is a trap 2011-08-14T10:23:45.930Z
Those who can't admit they're wrong 2011-07-01T05:09:50.738Z

Comments

Comment by Zed on [META] 'Rational' vs 'Optimized' · 2012-01-07T09:19:04.651Z · LW · GW

I think "strategy" is better than "wisdom". I think "wisdom" is associated with cached Truths and signals superiority. This is bad because this will make our audience too hostile. Strategy, on the other hand, is about process, about working towards a goal, and it's already used in literature in the context of improving one's decision making process.

You can get away with saying things like "I want to be strategic about life", meaning that I want to make choices in such a way that I'm unlikely to regret them at a later stage. Or I can say "I want to become a more strategic thinker" and it's immediately obvious that I care about reaching goals and that I'm not talking about strategy for the sake of strategy (I happen to care about strategy because of the virtue of curiosity, but this too is fine). The list goes on: "we need to reconsider our strategy for education", "we're not being strategic enough about health care -- too many people die unnecessarily". None of these statements put our audience on guard or make us look like unnatural weirdos. [1]

The most important thing is that "irrational" is perceived as an insult and is way too close to the sexist "emotional/hormonal" label used to dismiss women. Aside from the sexism, saying "whatever, you're just being irrational" is just as bad as saying "whatever, you're just being hormonal". It's the worst possible thing to say, and when you have a habit of using the word "rational" a lot it's way too easy to slip up.

[1] fun exercise - substitute "rationality" for "strategy" and see how much more Spock-like it all sounds.

Comment by Zed on Describe your personal Mount Stupid · 2012-01-03T22:52:43.920Z · LW · GW

That comic is my source too. I just never considered taking it at face value (too many apparent contradictions). My bad for mind projection.

Comment by Zed on Describe your personal Mount Stupid · 2012-01-03T22:29:31.419Z · LW · GW

Does Mount Stupid refer to the observation that people tend to talk loudly and confidently about subjects they barely understand (but not about subjects they understand so poorly that they know they must understand them poorly)? In that case, yes, once you stop opining the phenomenon (Mount Stupid) goes away.

Mount Stupid has a very different meaning to me. To me it refers to the idea that "feeling of competence" and "actual competence" are not linearly correlated. You can gain a little in actual competence and gain a LOT in terms of "feeling of competence". This is when you're on Mount Stupid. Then, as you learn more your feeling of competence and actual competence sort of converge.

The picture that puts "Willingness to opine" on the Y-axis is, in my opinion, a funny observation of the phenomenon that people who learn a little bit about a subject become really vocal about it. It's a funny way to visualize the real insight (Δ feeling of competence != Δ competence) that connects with people, because we can probably all remember making that specific mistake (talking confidently about a subject we knew little about).

Comment by Zed on Describe your personal Mount Stupid · 2012-01-03T22:04:14.535Z · LW · GW

I don't think so, because my understanding of the topic didn't improve -- I just don't want to make a fool out of myself.

I've moved beyond Mount Stupid on the meta level, the level where I can now tell more accurately whether my understanding of a subject is lousy or OK. On the subject level I'm still stupid, and my reasoning, if I had to write it down, would still make my future self cringe.

The temptation to opine is still there and there is still a mountain of stupid to overcome, and being aware of this is in fact part of the solution. So for me Mount Stupid is still a useful memetic trick.

Comment by Zed on Describe your personal Mount Stupid · 2012-01-03T21:29:11.628Z · LW · GW
  1. Macroeconomics. My opinion and understanding used to be based on undergrad courses and a few popular blogs. I understood much more than the "average person" about the economy (so say we all) and therefore believed that my opinion was worth listening to. My understanding is much better now but I still lack a good understanding of the fundamentals (because textbooks disagree so violently on even the most basic things). If I talk about the economy I phrase almost everything in terms of "Economist Y thinks X leads to Z because of A, B, C." This keeps the different schools of economics from blending together into some incomprehensible mess.

  2. QM. Still on mount stupid, and I know it. I have to bite my tongue not to debate Many Worlds with physics PhDs.

  3. Evolution. Definitely on Mount Stupid. I know this because I used to think "group pressure" was a good argument until EY persuaded me otherwise. I haven't studied evolution since so I must be on Mount Stupid still.

Aside from being aware of the concept of Mount Stupid I have not changed my behavior all that much. If I keep studying I know I'm going to get beyond Mount Stupid eventually. The faster I study the less time I spend on top of Mount Stupid and the less likely I am to make a fool out of myself. So that's my strategy.

I have become much more careful about monitoring my own cognitive processes: am I saying this just to win the argument? Am I looking specifically for arguments that support my position, and if so, am I sure I'm not rationalizing? So in that respect I've improved a little. It's probably the most valuable sort of introspection that typical well educated and intelligent people lack.

One crucial point about Mount Stupid that hasn't been mentioned here yet is that it applies every time you "level up" on a subject. Every time you level up on a subject you're at a new valley with a Mount Stupid you have to cross. You can be an expert frequentist rationalist but a lousy Bayesian rationalist, and by learning a little about Bayesianism you can become stupider (because you're good at distinguishing good vs bad frequentist reasoning but you can't tell the difference for Bayes (and if you don't know you can't tell the difference you're also on Meta Mount Stupid)).

Comment by Zed on Stupid Questions Open Thread · 2011-12-30T02:22:36.907Z · LW · GW

[a "friendly" AI] is actually unFriendly, as Eliezer uses the term

Absolutely. I used "friendly" AI (with scare quotes) to denote it's not really FAI, but I don't know if there's a better term for it. It's not the same as uFAI because Eliezer's personal utopia is not likely to be valueless by my standards, whereas a generic uFAI is terrible from any human point of view (paperclip universe, etc).

Comment by Zed on Stupid Questions Open Thread · 2011-12-30T01:58:45.765Z · LW · GW

Game theory. If different groups compete in building a "friendly" AI that respects only their personal coherent extrapolated volition (extrapolated sensible desires) then cooperation is no longer an option because the other teams have become "the enemy". I have a value system that is substantially different from Eliezer's. I don't want a friendly AI that is created in some researcher's personal image (except, of course, if it's created based on my ideals). This means that we have to sabotage each other's work to prevent the other researchers from getting to friendly AI first. This is because the moment somebody reaches "friendly" AI the game is over and all parties except for one lose. And if we get uFAI everybody loses.

That's a real problem though. If different factions in friendly AI research have to destructively compete with each other, then the probability of unfriendly AI will increase. That's real bad. From a game theory perspective all FAI researchers agree that any version of FAI is preferable to uFAI, and yet they're working towards a future where uFAI is becoming more and more likely! Luckily, if the FAI researchers take the coherent extrapolated volition of all of humanity the problem disappears. All FAI researchers can work towards a common goal that will fairly represent all of humanity, not some specific researcher's version of "FAI". It also removes the problem of different morals/values. Some people believe that we should look at total utility, other people believe we should consider only average utility. Some people believe abstract values matter, some people believe consequences of actions matter most. Here too the solution of an AI that looks at a representative set of all human values is the one that all people can agree on as most "fair". Cooperation beats defection.

If Luke were to attempt to create a LukeFriendlyAI he knows he's defecting from the game theoretical optimal strategy and thereby increasing the probability of a world with uFAI. If Luke is aware of this and chooses to continue on that course anyway then he's just become another uFAI researcher who actively participates in the destruction of the human species (to put it dramatically).

We can't force all AI programmers to focus on the FAI route. We can try to raise the sanity waterline and try to explain to AI researchers that the optimal (game theoretically speaking) strategy is the one we ought to pursue because it's most likely to lead to a fair FAI based on all of our human values. We just have to cooperate, despite differences in beliefs and moral values. CEV is the way to accomplish that because it doesn't privilege the AI researchers who write the code.
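
For illustration, here is a minimal sketch of the coordination argument above. Every number (uFAI probabilities, payoffs, the four-team split) is invented purely to show the shape of the trade-off, not an estimate of anything real.

```python
# Minimal sketch of the coordination argument above. Every number here is
# invented purely for illustration, not an estimate of anything real.

def expected_value(p_ufai, p_win, value_if_win, value_if_lose):
    """Expected value for one team, given the chance that the race ends in uFAI."""
    p_good_outcome = 1 - p_ufai
    return p_good_outcome * (p_win * value_if_win + (1 - p_win) * value_if_lose)

# Everyone cooperates on a shared CEV target: lower uFAI risk, and the outcome
# is acceptable to every team no matter who finishes the work.
cooperate = expected_value(p_ufai=0.2, p_win=1.0, value_if_win=0.8, value_if_lose=0.8)

# Everyone defects and races for their own values: higher uFAI risk (sabotage,
# haste), and each of four teams only "wins" a quarter of the time.
defect = expected_value(p_ufai=0.6, p_win=0.25, value_if_win=1.0, value_if_lose=0.3)

print(round(cooperate, 2), round(defect, 2))   # 0.64 0.19 -- cooperation beats defection
```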

Comment by Zed on two puzzles on rationality of defeat · 2011-12-12T18:35:33.046Z · LW · GW

If you're certain that belief A holds you cannot change your mind about that in the future. The belief cannot be "defeated", in your parlance. So given that you can be exposed to information that will lead you to change your mind we conclude that you weren't absolutely certain about belief A in the first place. So how certain were you? Well, this is something we can express as a probability. You're not 100% certain a tree in front of you is, in fact, really there exactly because you realize there is a small chance you're drugged or otherwise cognitively incapacitated.

So as you come into contact with evidence that contradicts what you believe you become less certain your belief is correct, and as you come into contact with evidence that confirms what you believe you become more confident your belief is correct. Apply Bayes' rule for this (for links to Bayes and Bayesian reasoning see other comments in this thread).
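
For concreteness, here is a minimal sketch of a single Bayesian update. The prior and likelihoods are made-up numbers for illustration only.

```python
# Minimal sketch of a single Bayesian update; the numbers are invented.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# "There is a tree in front of me": start confident but not certain (you might
# be drugged), then update on a piece of confirming evidence.
prior = 0.95
posterior = bayes_update(prior, p_evidence_given_h=0.9, p_evidence_given_not_h=0.1)
print(round(posterior, 3))   # 0.994 -- more confident, but still not 1.0
```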

I've just read a couple of pages of Defeasible Reasoning by Pollock and it's a pretty interesting formal model of reasoning. Pollock argues, essentially, that Bayesian epistemology is incompatible with deductive reasoning (pg 15). I semi-quote: "[...] if Bayesian epistemology were correct, we could not acquire new justified beliefs by reasoning from previously justified beliefs" (pg 17). I'll read the paper, but this all sounds pretty ludicrous to me.

Comment by Zed on New 'landing page' website: Friendly-AI.com · 2011-12-12T15:43:08.260Z · LW · GW

Looks great!

I may be alone in this, and I haven't mentioned this before because it's a bit of a delicate subject. I assume we all agree that first impressions matter a great deal, and that appearances play a large role in that. I think that, how to say this, ehm, it would, perhaps, be in the best interest of all of us, if you could use photos that don't make the AI thinkers give off this serial killer vibe.

Comment by Zed on two puzzles on rationality of defeat · 2011-12-12T15:14:13.961Z · LW · GW

I second Manfred's suggestion about the use of beliefs expressed as probabilities.

In puzzle (1) you essentially have a proof for T and a proof for ~T. We don't wish the order in which we're exposed to the evidence to influence us, so the correct conclusion is that you should simply be confused*. Thinking in terms of "Belief A defeats belief B" is a bit silly, because you then get situations where you're certain T is true, and the next day you're certain ~T is true, and the day after that you're certain again that T is true after all. So should beliefs defeat each other in this manner? No. Is it rational? No. Does the order in which you're exposed to evidence matter? No.

In puzzle (2) the subject is certain a proposition is true (even though he's still free to change his mind!). However, accepting contradicting evidence leads to confusion (as in puzzle 1), and to mitigate this the construct of "Misleading Evidence" is introduced that defines everything that contradicts the currently held belief as Misleading. This obviously leads to Status Quo Bias of the worst form. The "proof" that comes first automatically defeats all evidence from the future, therefore making sure that no confusion can occur. It even serves as a Universal Counterargument ("If that were true I'd believe it and I don't believe it therefore it can't be true"). This is a pure act of rationalization, not of rationality.

*) meaning that you're completely confident of neither T nor ~T.

Comment by Zed on Is an Intelligence Explosion a Disjunctive or Conjunctive Event? · 2011-11-14T14:53:47.869Z · LW · GW

My view about global rationality is similar to John Baez's view about individual risk aversion. An individual should typically be cautious because the maximum downside (destruction of your brain) is huge even for day-to-day actions like crossing the street. In the same way, we have only one habitable planet and one intelligent species. If we (accidentally) destroy either we're boned. Especially when we don't know exactly what we're doing (as is the case with AI), caution should be the default approach, even if we were completely oblivious to the concept of a singularity.

that the most pressing issue is to increase the confidence into making decisions under extreme uncertainty or to reduce the uncertainty itself.

I disagree, it's not the most pressing issue. In a sufficiently complex system there are always going to be vectors we poorly understand. The problem here is that we have a global society where it becomes harder every year for a single part to fail independently of the rest. A disease or pathogen is sure to spread to all parts of the world, thanks to our infrastructure. Failure of the financial markets affects the entire world because the financial markets too are intertwined. Changes in the climate also affect the entire globe, not just the countries who pollute. An unfriendly AI cannot be contained either. Everywhere you look there are now single points of failure. The more connected our world becomes the more vulnerable we become to black swan events that rock the world. Therefore, the more cautious we have to be. The strategy we used in the past 100,000 years (blindly charge forward) got us where we are today but it isn't very good anymore. If we don't know exactly what we're doing we should make absolutely sure that all worst case scenarios affect only a small part of the world. If we can't make such guarantees then we should probably be even more reluctant to act at all. We must learn to walk before we can run.

Under extreme uncertainty we cannot go wrong by erring on the side of caution. We can reduce uncertainty somewhat (by improving our estimates) but there is no reason to assume we will take all significant factors into account. If you start out with a 0.001 probability of killing all of humanity there is no amount of analysis that can rationally lead to the conclusion "eh, whatever, let's just try it and see what happens", because the noise in our confidence will exceed a few parts in a million at the least, which is already an unacceptable level of risk. It took billions of years for evolution to get us to this point. We can now mess it up in the next 1000 years or so because we're in such a damn hurry. That'd be a shame.

Comment by Zed on Selection Effects in estimates of Global Catastrophic Risk · 2011-11-08T15:26:20.823Z · LW · GW

From the topic, in this case "selection effects in estimates of global catastrophic risk". If you casually mention that you don't particularly care about humans, or that personally killing a bunch of them may be an effective strategy, the discussion is effectively hijacked. So it doesn't matter that you don't wish to do anybody harm.

Comment by Zed on Query the LessWrong Hivemind · 2011-11-08T14:20:53.807Z · LW · GW

Let G be a grad student with an IQ of 130 and a background in logic/math/computing.

Probability: The quality of life of G will improve substantially as a consequence of reading the sequences.

Probability: Reading the sequences is a sound investment for G (compared to other activities)

Probability: If every person on the planet were trained in rationality (as far as IQ permits) humanity would allocate resources in a sane manner.

Comment by Zed on Query the LessWrong Hivemind · 2011-11-08T13:47:50.248Z · LW · GW

Ah, you're right. Thanks for the correction.

I edited the post above. I intended P(Solipsism) < 0.001

And now I think a bit more about it I realize the arguments I gave are probably not "my true objections". They are mostly appeals to (my) intuition.

Comment by Zed on Selection Effects in estimates of Global Catastrophic Risk · 2011-11-08T13:17:58.577Z · LW · GW

You shouldn't do it because it's an invitation for people to get sidetracked. We try to avoid politics for the same reason.

Comment by Zed on Query the LessWrong Hivemind · 2011-11-08T12:52:17.098Z · LW · GW

P(Simulation) < 0.01; little evidence in favor of it, and it requires that there is some other intelligence doing the simulation and that there can be the kind of fault-tolerant hardware that can (flawlessly) compute the universe. I don't think posthuman ancestors are capable of running a universe as a simulation. I think Bostrom's simulation argument is sound.

1 - P(Solipsism) > 0.999; My mind doesn't contain minds that are consistently smarter than I am and can out-think me on every level.

P(Dreaming) < 0.001; We don't dream of meticulously filling out tax forms and doing the dishes.

[ Probabilities are not discounted for expecting to come into contact with additional evidence or arguments ]

Comment by Zed on Rhetoric for the Good · 2011-10-27T01:50:55.514Z · LW · GW

Anything by Knuth.

E.g. http://cs.utsa.edu/~wagner/knuth/

Comment by Zed on More shameless ploys for job advice · 2011-10-07T19:15:32.895Z · LW · GW

I know several people who moved to Asia to work on their internet startup. I know somebody who went to Asia for a few months to rewrite the manuscript of a book. In both cases the change of scenery (for inspiration) and low cost of living made it very compelling. Not quite the same as Big Thinking, but it's close.

Comment by Zed on More shameless ploys for job advice · 2011-10-06T06:24:15.107Z · LW · GW

I'm flattered, but I'm only occasionally coherent.

Comment by Zed on More shameless ploys for job advice · 2011-10-06T06:08:01.973Z · LW · GW

When you say "I have really thought about this a considerable amount", I hear "I have diagnosed the problem quite a while ago and it's creating a pit in my stomach but I haven't taken any action yet". I can't give you any points for that.

When you're dealing with a difficult problem and if you're an introspective person it's easy to get stuck in a loop where you keep going through the same sorts of thoughts. You realize you're not making much progress but the problem remains so you feel obligated to think about it some more. You should think more, right? It's an important decision after all?

Nope. Thinking about the problem is not a terminal goal. Thinking is only useful insofar as it leads to action. And if your thinking-to-action ratio is bad, you'll get mentally exhausted and you'll have nothing to show for it. It leads to paralysis where all you do is think and think and think.

If you want to make progress you have to find a way to decompose your problem into actionable parts. Not only will action make you feel better, it's also going to lead to unexplored territory.

So what kind of actions can you take?

Well, your claim is that major conferences require short-term, commercially oriented papers. So if you go systematically through the papers published in the last year or so you'll find either (a) all the papers are boring, stupid, silly or wrong, or (b) there are a bunch of really cool papers in there. In case of (a) maybe you're in the wrong field of research. Maybe you should go into algorithms or formal semantics. In this case look at other computer science papers until you find papers that do excite you. In case of (b) contact the authors of the papers; check out their departments; etc, etc.

To recap: Find interesting papers. Find departments where those interesting papers were written. Contact those departments.

Another strategy. Go to the department library and browse through random books that catch your eye. This is guaranteed to give you inspiration.

This is just off the top of my head. But whatever you do, make sure that you don't just get stuck in a circle of self-destructive thought. Action is key.

If you're certain you want to eventually get a faculty job, do a combination of teaching and research, own a house and regularly go on holiday, then I can't think of any alternatives to the conventional PhD -> faculty route. What's the best way to achieve a faculty job? I don't know. Probably a combination of networking, people skills and doing great research. If you want a faculty job badly enough you can get one. But once you get it there's no guarantee you're going to be happy if what you really want is complete autonomy.

I'm sorry I can't give any targeted advice.

(PS: some people like the idea of travel more than they like travel and some people like the idea of home-ownership more than they like home-ownership. For instance, if you haven't traveled a lot in the past 5 years you probably don't find travel all that important (otherwise you would've found a way to travel).)

Comment by Zed on More shameless ploys for job advice · 2011-10-06T04:44:50.020Z · LW · GW

As far as I can tell you identify two options: 1) continue doing the PhD you don't really enjoy 2) get a job you won't really enjoy.

Surely you have more options!

3) You can just do a PhD in theoretical computer vision at a different university.

4) You can work 2 days a week at a company and do your research at home for the remaining 4 days

5) Become unemployed and focus on your research full time

6) Save some money and then move to Asia, South America or any other place with very low cost of living so you can do a few years of research full time.

7) Join a startup company that is doing groundbreaking computer vision work

8) See if there is something else that you can be passionate about and do that.

Life's too short to do something you don't enjoy and you're now at a point in your life where the decisions you make are going to have real consequences. So do some soul searching and figure out what you really want and then figure out what you have to do to make it happen. That's life 101.

When you're spending the majority of your time doing something you don't really enjoy you have a big problem. This is the only life you have and it's easy to waste it 5 years at a time! Maybe your true dream is to work on the next Pixar movie, or to design special effects for the next CGI blockbuster! But if you aren't going to explore your options seriously you're not going to find out what you really want to do in life. If, on the other hand, you're absolutely sure you want to do theoretical computer vision research, then JUST DO THAT. There are thousands of universities with good computer vision departments. So unless you've got a thousand rejection letters on your desk you haven't even seriously explored your options yet.

(PS: Forget about doing research in the evenings after you get home from a day job. It doesn't work. Many people do this and then they figure out that after a full day's work you don't have the energy anymore to do really difficult stuff. Your lifestyle will change and you'll grow dependent on your job. Then as you get older you'll look back and call it a "silly dream" and wisely observe that you have to make compromises in life and that your ability to compromise on what you want makes you a responsible adult.)

(PPS: I'm trying to convey that being unhappy with your job should trigger "hair on fire" like panic.)

Comment by Zed on Repairing Yudkowsky's anti-zombie argument · 2011-10-05T15:28:09.463Z · LW · GW

Thanks for the clarifications.

Honestly, I don't have a clear picture of what exactly you're saying ("qualia supervene upon physical brain states"?) and we would probably have to taboo half the dictionary to make any progress. I get the sense you're on some level confused or uncomfortable with the idea of pure reductionism. The only thing I can say is that what you write about this topic has a lot of surface level similarities with the things people write when they're confused.

Comment by Zed on Repairing Yudkowsky's anti-zombie argument · 2011-10-05T14:35:26.226Z · LW · GW

Just to clarify, does "irreducible" in (3) also mean that qualia are therefore extra-physical?

I assume that we are all in agreement that rocks do not have qualia and that dead things do not have qualia and that living things may or may not have qualia? Humans: yes. Single cell prokaryotes: nope.

So doesn't that leave us with two options:

1) Evolution went from single cell prokaryotes to Homo Sapiens and somewhere during this period the universe went "plop" and irreducible qualia started appearing in some moderately advanced species.

2) Qualia are real and reducible in terms of quarks like everything else in the brain. As evolution produced better brains at some point it created a brain with a minor sense of qualia. Time passed. Brains got better and more introspective. In other words: qualia evolved (or "emerged") like our sense of smell, our eyesight and so forth.

Comment by Zed on [Funny] Even Clippy can be blamed on the use of non-Bayesian methods · 2011-10-03T01:36:58.347Z · LW · GW

My first assumption is that almost everything you post is seen as (at least somewhat) valuable (for almost every post #upvotes > #downvotes), so the net karma you get is mostly based on throughput. More readers, more votes. More votes, more karma.

Second, useful posts do not only take time to write, they take time to read as well. And my guess is that most of us don't like to vote on thoughtful articles before we have read them. So for funny posts we can quickly make the judgement on how to vote, but for longer posts it takes time.

Decision fatigue may also play a role (after studying something complex the extra decision of whether to vote on it feels like work so we skip it). People may also print more valuable texts, or save them for later, making it easy to forget to vote.

The effect is much more evident on other karma based sites. Snarky one-liners and obvious puns are karma magnets. LessWrong uses the same system and is visited by the same species and therefore suffers from the same problems, just to a lesser extent.

Comment by Zed on [gooey request-for-help] I don't know what to do. · 2011-09-26T14:38:16.466Z · LW · GW

All the information you need is already out there, and I have this suspicion you have probably read a good deal of it. You probably know more about being happy than everybody else you know and yet you're not happy. You realize that if you're a smart rational agent you should just be able to figure out what you want to do and then just do it, right?

  1. figure out what makes you happy
  2. do more of those things
  3. ???
  4. happiness manifests itself

There is no step (3). So why does it feel more complex than it really is?

What is the kind of response you're really looking for when you start this topic? Do you (subconsciously) want people to just tell you to buck up and deal with it? Do you (subconsciously) want people to tell you not to worry and that it's all going to be alright? Or are you just in some kind of quarter-life crisis because you don't really see clearly where you're going with your life and the problems you have are just side-effects of that?

  • Maybe you need to be held accountable for your actions?

  • Maybe you need additional responsibility?

  • Maybe you need a vacation?

  • Maybe you need to grow as a person in another manner?

We can't answer these questions for you and you know we can't answer these questions for you. Yet you ask us anyway. It doesn't make sense.

Now, I can make a complete shot-in-the-dark guess about your situation and make the following assumptions:

  1. you're in social isolation
  2. you spend much time on intellectual issues
  3. you're not producing, you're almost exclusively consuming intellectual stuff
  4. you're not eating as well as you should
  5. you're letting lazy habits chip away at your life on the edges
  6. you tell yourself that there's nothing wrong with you and that you should just man up
  7. you hate the fact that you procrastinate and yet you keep procrastinating
  8. every time you feel you're making progress it doesn't last and you regress to square one.
  9. you have trouble making lasting changes in every single aspect of your life

Psychological help doesn't work because you don't need people to explain this stuff to you, you've done your homework already and you know all this.

I'd be happy to talk to you over skype if you want, we can talk about whatever you want to talk about. For some people talking about their problems really helps, especially if they otherwise bottle it all up.

What is the opposite of happiness? Sadness? No. Just as love and hate are two sides of the same coin, so are happiness and sadness. Crying out of happiness is a perfect illustration of this. The opposite of love is indifference, and the opposite of happiness is - here's the clincher - boredom...

The question you should be asking isn't 'What do I want?' or 'What are my goals?' but 'What would excite me?'

Remember - boredom is the enemy, not some abstract 'failure.' (T. Ferris)

Comment by Zed on Book trades with open-minded theists - recommendations? · 2011-08-31T21:57:22.134Z · LW · GW

Questions about deities must fade away just like any other issue fades away after it's been dissolved.

Compartmentalization is the last refuge for religious beliefs for an educated person. Once compartmentalization is outlawed there is no defense left. The religious beliefs just have to face a confrontation of the rational part of the brain and then the religious beliefs will evaporate.

If somebody has internalized the sequences they must (at least):

  1. be adept at reductionism,
  2. be comfortable with Bayes and MML, Kolmogorov complexity,
  3. be acutely aware of which cognitive processes are running.

If you habitually ask yourself "Why am I feeling this way?", "Am I rationalizing right now?", "Am I letting my ego get in the way of admitting I'm wrong?", "Did I just shift the goal post?", "Did I make the fundamental attribution error?", "Is this a cached thought?" and all those other questions you become very good at telling which feeling corresponds to which of those cognitive mistakes.

So, let's assume that for these reasons the theist at least comes to this point where he realizes his earlier reasoning was unsound and decides to honestly re-evaluate his position.

A typical educated person who likes a belief will continue to believe it until he's proven wrong (he's a reasonable person after all). If he doesn't like a belief he will reject it until the evidence is so overwhelming he has no choice but to accept it (he's open minded after all!). This is a double standard where the things you believe are dominated by whichever information enters your brain first.

The next step is to realize that religious beliefs are essentially just an exercise in privileging the hypothesis. If you take a step back and look at the data and try to go from there to the best hypothesis that conforms to the data there's just no way you're going to arrive at Hinduism, Taoism, Christianity or any other form of spirituality. All those holy books contain thousands of claims each of significant Kolmogorov complexity. We're dealing with a prior of 2 ^ -100000 at the least. Only if you start out looking for evidence for a specific holy book you can end up with a "lot" of evidence and believe that it can't be coincidence and that therefore the claims in the holy book have merit. Start out with a thousand holy books and a thousand scientific theories on equal footing (zero evidence for each, prior based on Kolmogorov complexity) and then look at all relevant things we know about the world. There's no way a holy book is going to end up as the best explanation because holy books are just really bad at making explanations that lead to predictions that can be tested. A holy book has to make more and better accurate predictions than the equivalent scientific theories (to compensate for the unlikely prior) to come out on top after all evidence has been examined.
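
As a rough illustration of the arithmetic (my own numbers, chosen only to show the scale of the problem), the complexity penalty can be expressed in bits of log-odds:

```python
# Rough illustration of the scale involved; the bit counts are invented.

claim_complexity_bits = 100_000   # complexity of the conjunction of claims in a holy book
evidence_bits = 300               # suppose it somehow earns 300 bits of genuine evidence

prior_log_odds = -claim_complexity_bits      # log2 odds of a 2^-100000 prior (since 1-P is ~1)
posterior_log_odds = prior_log_odds + evidence_bits

print(posterior_log_odds)                    # -99700: still overwhelmingly improbable
```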

I don't think it's possible at all to internalize the sequences and still believe in a deity. I consider this almost a tautology because the sequences are basically about good thinking and about applying that good thinking in all domains of life. Religious thinking directly rejects the concept of rationality about religious topics. So if you decide to be rational in all things (not cold and unemotional; just rational) then religion just has to go.

Comment by Zed on Schroedinger's cat is always dead · 2011-08-28T02:15:56.125Z · LW · GW

I think that what you're saying is technically correct. However, simplifying the thought experiment by stating that the inside of the box can't interact with the outside world just makes the thought experiment easier to reason about and it has no bearing on the conclusions we can draw either way.

Comment by Zed on Schroedinger's cat is always dead · 2011-08-27T15:34:32.458Z · LW · GW

Yikes! Thanks for the warning.

Comment by Zed on Schroedinger's cat is always dead · 2011-08-27T11:29:21.592Z · LW · GW

Thanks for the additional info and explanation. I have some books about QM on my desk that I really ought to study in depth...

I should mention though that what you state about needing only a single-world is in direct contradiction to what EY asserts: "Whatever the correct theory is, it has to be a many-worlds theory as opposed to a single-world theory or else it has a special relativity violating, non-local, time-asymmetric, non-linear and non-measurepreserving collapse process which magically causes blobs of configuration space to instantly vanish [...] I don't see how one is permitted to hold out any hope whatsoever of getting the naive single world back."

My level of understanding is insufficient to debate QM on a serious level, but I'd be very interested in a high level exchange about QM here on LW. If you disagree with Eliezer's views on QM I think it is a good thing to say that explicitly, because when you study the different interpretations it's important to keep them apart (the subject is confusing[1] enough as is).

[1] a property of yours truly

Comment by Zed on Schroedinger's cat is always dead · 2011-08-27T10:04:49.476Z · LW · GW

The collapse of the wave function is, as far as I understand it, conjured up because the idea of a single world appeals to human intuition (even though there is no reason to believe the universe is supposed to make intuitive sense). My understanding is that regardless of the interpretation you put behind the quantum measurements you have to calculate as if there are multiple worlds (i.e. a subatomic particle can interfere with itself) and the collapse of the wave function is something you have to assume on top of that.

8 minute clip of EY talking with Scott Aaronson about Schrödinger's Cat

Comment by Zed on Schroedinger's cat is always dead · 2011-08-26T21:32:37.189Z · LW · GW

Yep, the box is supposed to be a completely sealed off environment so that the contents of the box (cat, cyanide, Geiger counter, vial, hammer, radioactive atoms, air for the cat to breathe) cannot be affected by the outside world in any way. The box isn't a magical box, simply one that seals really well.

The stuff inside the box isn't special. So the particles can react with each other. The cat can breathe. The cat will die when exposed to the cyanide. The radioactive material can trigger the Geiger counter which triggers the hammer, which breaks the vial which releases the cyanide which causes the cat to die. Normal physics, but in a box.

Comment by Zed on Schroedinger's cat is always dead · 2011-08-26T20:48:43.227Z · LW · GW

Schrödinger's cat is a thought experiment. The cat is supposed to be real in the experiment. The experiment is supposed to be seen as silly.

People can reason through the math at the level of particles and logically there should be no reason why the same quantum logic wouldn't apply to larger systems. So if a bunch of particles can be entangled and if on observation (unrelated to consciousness) the wavefunction collapses (and thereby fully determines reality) then the same should be able to happen with a particle and a more complex system, such as a real live cat. After all, what is a cat except for a bunch of particles? This means the cat is literally both alive and dead until the superposition resolves.

The problem is that philosophers have sometimes abused this apparent paradox (both alive and dead!?) as some sort of Deep Mystery of quantum physics. It's not a deep mystery at all. It's just something that illustrates that if you take the Copenhagen interpretation literally then you have to bite the bullet and admit that a cat (or a human, etc) can be both alive and dead at the same time. Not just seemingly so, but actually so in reality. As that's the only thing that's consistent with the small scale quantum experiments. Schrödinger came up with this thought experiment because he realized the implications of the Copenhagen interpretation and concluded the implications were absurd.

If you're not willing to bite that bullet (and most quantum physicists nowadays aren't) then you have to look at other possibilities. For instance that the world splits and that in one world the cat is alive and in the other the cat is dead. In one world you'll observe the cat being alive and in the other world you observe the cat as dead. Both worlds are equally real and in both worlds you have the sensation of being in the only real world.

(I only have an elementary understanding of QM)

Comment by Zed on Weight training · 2011-08-26T15:43:48.147Z · LW · GW
  1. If you're starting out (read: don't yet know what you're doing) then optimize for not getting injured. If you haven't done any weight lifting then you'll get results even if you start out slowly.

  2. Optimize for likelihood of you not quitting. If you manage to stick to whatever plan you make you can always make adjustments where necessary. Risk of quitting is the #1 bottleneck.

  3. Personally, I think you shouldn't look for supplements until you feel you've reached a ceiling with regular workouts. Starting with a strict diet (measure everything) is a good idea if you're serious about this.

Comment by Zed on IntelligenceExplosion.com graphic redesign · 2011-08-26T15:23:41.192Z · LW · GW

Site looks great!

The first sentence is "Here you'll find scholarly material and popular overviews of intelligence explosion and its consequences." which parses badly for me and it isn't clear whether it's supposed to be a title (what this site is about) or just a single-sentence paragraph. I think leaving it out altogether is best.

I agree with the others that the mouse-chimp-Einstein illustration is unsuitable because it's unlikely to communicate clearly to the target audience. I went through the slides of the "The Challenge of Friendly AI" talk but I couldn't find a more suitable illustration.

Maybe the illustration can be "fixed" by simply replacing the Village Idiot and Einstein arrows by a single arrow labeled "human". Then the picture becomes a trivial depiction that humans are closer to chimps in intelligence than to post-intelligence explosion AIs. The more interesting insight (that Einstein and the village idiot have the same log-intelligence) will be gone but you can't really convey that in 4 seconds anyway.

Comment by Zed on Take heed, for it is a trap · 2011-08-26T15:03:00.991Z · LW · GW

Welcome to Less Wrong!

This may be stating the obvious, but isn't this exactly the reason why there shouldn't be a subroutine that detects "The AI wants to cheat its masters" (or any similar security subroutines)?

The AI has to look out for humanity's interests (CEV) but the manner in which it does so we can safely leave up to the AI. Take for analogy Eliezer's chess computer example. We can't play chess as well as the chess computer (or we could beat Grand Masters of chess ourselves) but we can predict the outcome of the chess game when we play against the computer: the chess computer finds a winning position against us.

With a friendly AI you can't predict what it will do, or even why it will do it, but if we get FAI right then we can predict that the actions will steer humanity in the right direction.

(Also building an AI by giving it explicit axioms or values we desire is a really bad idea. Much like the genie in the lamp it is bound to turn out that we don't get what we think we asked for. See http://singinst.org/upload/CEV.html if you haven't read it already)

Comment by Zed on Take heed, for it is a trap · 2011-08-26T14:48:57.267Z · LW · GW

Sure, unanimous acceptance of the ideas would be a worrying sign. Would it be a bad sign if we were 98% in agreement about everything discussed in the sequences? I think that depends on whether you believe that intelligent people when exposed to the same arguments and the same evidence should reach the same conclusion (Aumann's agreement theorem). I think that disagreement is in practice a combination of (a) bad communication, (b) misunderstanding of the subject material by one of the parties, (c) poor understanding of the philosophy of science, and (d) emotions/signaling/dissonance/etc.

I think it's just really difficult to have a fundamental disagreement that isn't founded on some sort of personal value. Most disagreements can be rephrased in terms of an experiment where both parties will confidently claim the experiment will have different outcomes. By the time such an experiment has been identified the disagreement has dissolved.

Discussion is to be expected because discussions are beneficial for organizing one's thoughts and because most of us like to discuss the subject material on LW. Persistent disagreement I see mainly as a result of insufficient scholarship.

Comment by Zed on Take heed, for it is a trap · 2011-08-17T22:59:33.758Z · LW · GW

Thanks for the explanation, that helped a lot. I expected you to answer 0.5 in the second scenario, and I thought your model was that total ignorance "contaminated" the model such that something + ignorance = ignorance. Now I see this is not what you meant. Instead it's that something + ignorance = something. And then likewise something + ignorance + ignorance = something according to your model.

The problem with your model is that it clashes with my intuition (I can't find fault with your arguments). I describe one such scenario here.

My intuition is that the probability of these two statements should not be the same:

A. "In order for us to succeed one of 12 things need to happen"

B. "In order for us to succeed all of these 12 things need to happen"

In one case we're talking about a disjunction of 12 unknowns and in the second scenario we're talking about a conjunction. Even if some of the "things" are not completely independent, that shouldn't affect the total estimate that much. My intuition says P(A) = 1 - 0.5^12 and P(B) = 0.5^12. Worlds apart! As far as I can tell you would say that in both cases the best estimate we can make is 0.5. I introduce the assumption of independence (I don't stipulate it) to fix this problem. Otherwise the math would lead me down a path that contradicts common sense.
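
A quick sanity check of those two numbers (my own sketch, assuming full independence and a 0.5 ignorance prior per unknown):

```python
# Contrast a 12-way disjunction with a 12-way conjunction of unknowns, each
# assigned the 0.5 ignorance prior and assumed independent.

p_each = 0.5
n = 12

p_disjunction = 1 - (1 - p_each) ** n   # at least one of the 12 things happens
p_conjunction = p_each ** n             # all 12 things happen

print(round(p_disjunction, 6))          # 0.999756
print(round(p_conjunction, 6))          # 0.000244
```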

Comment by Zed on Take heed, for it is a trap · 2011-08-17T21:29:52.602Z · LW · GW

I think I agree completely with all of that. My earlier post was meant as an illustration that once you say C = A & B that you're no longer dealing with a state of complete ignorance. You're in complete ignorance of A and B, but not of C. In fact, C is completely defined as being the conjunction of A and B. I used the illustration of an envelope because as long as the envelope is closed you're completely ignorant about its contents (by stipulation) but once you open it that's no longer the case.

The answer for all three envelopes is, in the case of complete ignorance, 0.5.

So the probability that all three envelopes happen to contain a true hypothesis/proposition is 0.125 based on the assumption of independence. Since you said "mostly independent" does that mean you think we're not allowed to assume complete independence? If the answer isn't 0.125, what is it?

edit:

If your answer to the above is "still 0.5" then I have another scenario. You're in total ignorance of A. B is the proposition that a roll of a regular die comes up 6. What's the probability that A & B are both true? I'd say it has to be 1/12, even though it's possible that A and B are not independent.

Comment by Zed on Take heed, for it is a trap · 2011-08-17T13:35:44.291Z · LW · GW

It's purely a formality

I disagree with this bit. It's only purely a formality when you consider a single hypothesis, but when you consider a hypothesis that is comprised of several parts, each of which uses the prior of total ignorance, then the 0.5 prior probability shows up in the real math (that in turn affects the decisions you make).

I describe an example of this here: http://lesswrong.com/r/discussion/lw/73g/take_heed_for_it_is_a_trap/4nl8?context=1#4nl8

If you think that the concept of the universal prior of total ignorance is purely a formality, i.e. something that can never affect the decisions you make, then I'd be very interested in your thoughts behind that.

Comment by Zed on Take heed, for it is a trap · 2011-08-17T10:46:07.020Z · LW · GW

In your example before we have any information we'd assume P(A) = 0.5 and after we have information about the alphabet and how X is constructed from the alphabet we can just calculate the exact value for P(A|B). So the "update" here just consists of replacing the initial estimate with the correct answer. I think this is also what you're saying, so I agree that in situations like these using P(A) = 0.5 as a starting point does not affect the final answer (but I'd still start out with a prior of 0.5).

I'll propose a different example. It's a bit contrived (well, really contrived, but OK).

Frank and his buddies (of which you are one) decide to rob a bank.

Frank goes: "Alright men, in order for us to pull this off 4 things have to go perfectly according to plan."

(you think: conjunction of 4 things: 0.0625 prior probability of success)

Frank continues: the first thing we need to do is beat the security system (... long explanation follows).

(you think: that plan is genius and almost certain to work (0.9 probability of success follows from Bayesian estimate). I'm updating my confidence to 0.1125)

Frank continues: the second thing we need to do is break into the safe (... again a long explanation follows).

(you think: wow, that's a clever solution - 0.7 probability of success. Total probability of success 0.1575)

Frank continues: So! Are you in or are you out?

At this point you have to decide immediately. You don't have the time to work out the plausibility of the remaining two factors, you just have to make a decision. But just by knowing that there are two more things that have to go right you can confidently say "Sorry Frank, but I'm out."

If you had more time to think you could come up with a better estimate of success. But you don't have time. You have to go with your prior of total ignorance for the last two factors of your estimate.

If we were to plot the confidence over time I think it should start at 0.5, then go to 0.0625 when we understand that an estimate of a conjunction of 4 parts is to be calculated, and after that more nuanced Bayesian reasoning follows. So if I were to build an AI then I would make it start out with the universal prior of total ignorance and go from there. So I don't think the prior is a purely mathematical trick that has no bearing on the way we reason.

(At the risk of stating the obvious: you're strictly speaking never adjusting based on the prior of 0.5. The moment you have evidence you replace the prior with the estimate based on evidence. When you get more evidence you can update based on that. The prior of 0.5 completely vaporizes the moment evidence enters the picture. Otherwise you would be doing an update on non-evidence.)
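
If it helps, here is a minimal sketch of the running estimate in the Frank example; the 0.9 and 0.7 are the same illustrative numbers as above.

```python
# Running estimate for the Frank example: four steps, each starting at the
# 0.5 ignorance prior, replaced one by one as the plan is explained.

step_estimates = [0.5, 0.5, 0.5, 0.5]

def total(estimates):
    p = 1.0
    for e in estimates:
        p *= e
    return p

print(round(total(step_estimates), 4))   # 0.0625 -- before hearing any details

step_estimates[0] = 0.9                  # security-system plan sounds excellent
print(round(total(step_estimates), 4))   # 0.1125

step_estimates[1] = 0.7                  # safe-cracking plan sounds clever
print(round(total(step_estimates), 4))   # 0.1575 -- still "Sorry Frank, I'm out"
```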

Comment by Zed on Take heed, for it is a trap · 2011-08-17T09:55:29.635Z · LW · GW

I agree with everything you said (including the grandparent). Some of the examples you named are primarily difficult because of the ugh-field and not because of inferential distance, though.

One of the problems is that it's strictly more difficult to explain something than to understand it. To understand something you can just go through the literature at your own pace, look up everything you're not certain about, and so continue studying until all your questions are answered. When you want to explain something you have to understand it but you also have to be able to figure out the right words to bridge the inferential gap, you have to figure out where the other person's model differs from yours and so on.

So there will always be a set of problems you understand well enough to be confident they're true but not well enough to explain them to others.

Anthropogenic global warming is a belief that falls into this category for most of us. It's easy to follow the arguments and to look at the data and conclude that yes, it's humans that are the cause of global warming. But to argue for it successfully? Nearly impossible (unless you have studied the subject for years).

Cryonics is also a topic that's notoriously difficult to discuss. If you can argue for that effectively my hat's off to you. (Argue for it effectively => they sign up)

Comment by Zed on Take heed, for it is a trap · 2011-08-17T09:13:54.063Z · LW · GW

Finally, on an empirical level, it seems like there are more false n-bit statements than true n-bit statements.

I'm pretty certain this intuition is false. It feels true because it's much harder to come up with a true statement from N bits if you restrict yourself to positive claims about reality. If you get random statements like "the frooble fuzzes violently" they're bound to be false, right? But for every nonsensical or false statement you also get the negation of a nonsensical or false statement: "not(the frooble fuzzes violently)". It's hard to arrive at a statement like "Obama is the 44th president" and be correct, but it's very easy to enumerate a million things that do not orbit Pluto (and be correct).

(FYI: somewhere below there is a different discussion about whether there are more n-bit statements about reality that are false than true)

Comment by Zed on Take heed, for it is a trap · 2011-08-17T09:10:31.133Z · LW · GW

[ replied to the wrong person ]

Comment by Zed on Take heed, for it is a trap · 2011-08-16T10:09:41.477Z · LW · GW

Legend:

S -> statements
P -> propositions
N -> non-propositional statements
T -> true propositions
F -> false propositions

I don't agree with condition S = ~T + T.

Because ~T + T is what you would call the set of (true and false) propositions, and I have readily accepted the existence of statements which are neither true nor false. That's N. So you get S = ~T + T + N = T + F + N = P + N

We can just taboo proposition and statement as proposed by komponisto. If you agree with the way he phrased it in terms of hypothesis then we're also in agreement (by transitivity of agreement :)

(This may be redundant, but if your point is that the set of non-true statements is larger than the set of false propositions, then yes, of course, I agree with that. I still don't think the distinction between statement and proposition is that relevant to the underlying point because the odds ratio is not affected by the inclusion or exclusion of non-propositional statements)

Comment by Zed on Take heed, for it is a trap · 2011-08-16T09:17:45.792Z · LW · GW

As I see it, statements start with some probability of being true propositions, some probability of being false propositions, and some probability of being neither.

Okay. So "a statement, any statement, is as likely to be true as false (under total ignorance)" would be more accurate. The odds ratio remains the same.

The intuition that statements fail to be true most of the time is wrong, however. Because, trivially, for every statement that is true its negation is false and for every statement that is false its negation is true. (Statements that have no negation are neither true nor false)

It's just that (interesting) statements in practice tend to be positive claims (about the world), and it's much harder to make a true positive claim about the world than a true negative one. This is why a long (measured in Kolmogorov complexity) positive claim is very unlikely to be true and a long negative claim (Kolmogorov complexity) is very likely to be true. Also, it's why a long conjunction of terms is unlikely to be true and a long disjunction of terms is likely to be true. Again, symmetry.

Comment by Zed on Take heed, for it is a trap · 2011-08-16T07:59:13.608Z · LW · GW

I assume that people in their pre-Bayesian days aren't even aware of the existence of the sequences so I don't think they can use that to calculate their estimate. What I meant to get at is that it's easy to be really certain a belief is false if it's intuitively wrong (but not wrong in reality) and the inferential distance is large. I think it's a general bias that people are disproportionately certain about beliefs at large inferential distances, but I don't think that bias has a name.

(Not to mention that people are really bad at estimating inferential distance in the first place!)

Comment by Zed on Take heed, for it is a trap · 2011-08-15T23:14:13.900Z · LW · GW

Asperger's and anti-social tendencies are, as far as I can tell, highly correlated with low social status. I agree with you that the test also selects for people who are good at the sciences and engineering. Unfortunately scientists and engineers also have low social status in western society.

First Xachariah suggested I may have misunderstood signaling theory. Then Incorrect said that what I said would be correct assuming LessWrong readers have low status. Then I replied with evidence that I think supports that position. You probably interpreted what I said in a different context.

Comment by Zed on Take heed, for it is a trap · 2011-08-15T22:24:08.624Z · LW · GW

I think you were too convinced I was wrong in your previous message for this to be true. I think you didn't even consider the possibility that complexity of a statement constitutes evidence and that you had never heard the phrasing before. (Admittedly, I should have used the words "total ignorance", but still)

Your previous post strikes me as a knee-jerk reaction. "Well, that's obviously wrong". Not as an attempt to seriously consider under which circumstances the statement could be true. You also incorrectly claimed I was an ignoramus rationalist (for which you didn't apologize) which only provides further evidence you didn't really think before you started writing your critique (because who seriously considers the opinions of an ignoramus?).

And now, instead of just saying "Oops" you shift the goalpost from "false no matter what way you look at it" to something fuzzy where we're simply talking about different things.

This is blatant intellectual dishonesty.

Comment by Zed on Take heed, for it is a trap · 2011-08-15T22:13:00.128Z · LW · GW

I chose the wording carefully, because "I want people to cut off my head" is funny, and the more general or more correct phrasing is not. But now that it has been thoroughly dissected...

Anyway, since you asked twice I'm going to change way the first statement is phrased. I don't feel that strongly about it and if you find it grating I'm also happy to change it to any other phrasing of your choosing.

I'm sorry if I contributed to an environment in which ideas are too criticized

I interpret your first post as motivated by a need to voice your disagreement, not by the expected utility of the post for the community. I'm sometimes guilty of this because sometimes it seems almost criminal not to point out that something is wrong when it is in fact wrong.

As a general rule, disagreements voiced in a single sentence "This is false because of X" or "No, this contradicts your second paragraph" come across pretty aggressively. In my experience only very few people respond well to disagreements voiced in that manner. You've also accused me of fallacious reasoning twice even though there was no good reason to do so (because more charitable interpretations of what I said are not fallacious).

Comment by Zed on Take heed, for it is a trap · 2011-08-14T19:48:50.908Z · LW · GW

In that case it's clear where we disagree because I think we are completely justified in assuming independence of any two unknown propositions. Intuitively speaking, dependence is hard. In the space of all propositions the number of dependent pairs of propositions is insignificant compared to the number of independent pairs. But if it so happens that the two propositions are not independent then I think we're saved by symmetry.

There are a number of different combinations of A and ~A and B and ~B but I think that their conditional "biases" all cancel each other out. We just don't know if we're dealing with A or with ~A, with B or with ~B. If for every bias there is an equal and opposite bias, to paraphrase Newton, then I think the independence assumption must hold.

Suppose you are handed three closed envelopes each containing a concealed proposition. Without any additional information I think we have no choice but to assign each unknown proposition probability 0.5. If you then open the third envelope and if it reads "envelope-A & envelope-B" then the probability of that proposition changes to 0.25 and the other two stay at 0.5.

If not 0.25, then which number do you think is correct?
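
For what it's worth, here is a minimal Monte Carlo sketch of the envelope scenario under exactly those assumptions (independent propositions, each true with probability 0.5):

```python
import random

# Monte Carlo sketch of the three-envelope example: A and B are independent
# unknown propositions at probability 0.5; the third envelope is their conjunction.

trials = 100_000
conjunction_true = 0

for _ in range(trials):
    a = random.random() < 0.5          # envelope-A proposition
    b = random.random() < 0.5          # envelope-B proposition
    if a and b:                        # third envelope: "envelope-A & envelope-B"
        conjunction_true += 1

print(conjunction_true / trials)       # ~0.25
```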