Posts

Self-skepticism: the first principle of rationality 2012-08-06T00:51:32.688Z
What are the optimal biases to overcome? 2012-08-04T15:04:14.699Z
A cynical explanation for why rationalists worry about FAI 2012-08-04T12:27:54.454Z

Comments

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2013-01-06T19:34:20.311Z · LW · GW

I guess you need to do some more thinking to straighten out your views on qualia.

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2013-01-05T23:28:17.531Z · LW · GW

Imagine a flashlight with a red piece of cellophane over it pointed at a wall. Scientists some day discover that the red dot on the wall is caused by the flashlight -- it appears each and every time the flashlight fires and only when the flashlight is firing. However, the red dot on the wall is certainly not the same as the flashlight: one is a flashlight and one is a red dot.

The red dot, on the other hand, could be reduced to some sort of interaction between certain frequencies of light-waves and wall-atoms and so on. But it will certainly not get reduced to flashlights.

By the same token, you are not going to reduce the-subjective-experience-of-seeing-red to neurons; subjective experiences aren't made out of neurons any more than red dots are made of flashlights.

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2013-01-05T22:23:39.994Z · LW · GW

Because the neuron firing pattern is presumably the cause of the quale, it's certainly not the quale itself.

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2013-01-05T21:46:46.330Z · LW · GW

They're not assumptions, they're the answers to questions that have the highest probability going for them given the evidence.

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2013-01-05T21:45:19.795Z · LW · GW

Of course not!

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2013-01-05T21:45:02.382Z · LW · GW

Who said anything about our intuitions (except you, of course)?

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2013-01-05T21:36:42.391Z · LW · GW

I guess it really depends on what you mean by free will. If by free will, pjeby meant some kind of qualitative experience, then it strikes me that what he means by it is just a form of qualia and so of course the qualia argument goes through. If he means by it something more complicated, then I don't see how point one holds (we experience it), and the argument obviously doesn't go through.

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2013-01-04T22:10:16.942Z · LW · GW

Because it's the only thing in the universe we've found with a first-person ontology. How else do you explain it?

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2013-01-04T22:08:32.250Z · LW · GW

Well, let's be clear: the argument I laid out is trying to refute the claim that "I can create a human-level consciousness with a Turing machine". It doesn't mean you couldn't create an AI using something other than a pure Turing machine and it doesn't mean Turing machines can't do other smart computations. But it does mean that uploading a brain into a Von Neumann machine isn't going to keep you alive.

So if you disagree that qualia is a basic fact of physics, what do you think it reduces to? Is there anything else that has a first-person ontology the way qualia does?

And if you think physics can tell whether something is a Turing-machine-simulating-a-brain, what's the physical algorithm for looking at a series of physical particles and deciding whether it's executing a particular computation or not?

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2013-01-04T22:01:47.741Z · LW · GW

I guess my phrasing was unclear. What Searle is trying to do is generate reductions for things like "money" and "human rights"; I think EY is trying to do something similar and it takes him more than just one article on the Mind Projection Fallacy. (Even once you establish that it's properties of minds, not particles, there's still a lot of work left to do.)

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2013-01-04T21:51:39.840Z · LW · GW

> Beginning an argument for the existence of qualia with a bare assertion that they exist

Huh? This isn't an argument for the existence of qualia -- it's an attempt to figure out whether you believe in qualia or not. So I take it you disagree with step one, that qualia exists? Do you think you are a philosophical zombie?

I do think essentially the same argument goes through for free will, so I don't find your reductio at all convincing. There's no reason, however, to believe that "love" or "charity" is a basic fact of physics, since it's fairly obvious how to reduce these. Do you think you can reduce qualia?

I don't understand why you think this is a claim about my feelings.

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2012-12-25T21:46:41.861Z · LW · GW

I was talking about Searle's non-AI work, but since you brought it up, Searle's view is:

  1. qualia exists (because: we experience it)
  2. the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
  3. if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

Which part does LW disagree with and why?

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2012-12-25T21:27:36.483Z · LW · GW

I guess I must have misunderstood something somewhere along the way, since I don't see where in this sequence you provide "constructive accounts of how to build meaningful thoughts out of 'merely' effective constituents" . Indeed, you explicitly say "For a statement to be ... true or alternatively false, it must talk about stuff you can find in relation to yourself by tracing out causal links." This strikes me as parallel to Searle's view that consciousness imposes meaning.

But, more generally, Searle says his life's work is to explain how things like "money" and "human rights" can exist in "a world consisting entirely of physical particles in fields of force"; this strikes me as akin to your Great Reductionist Project.

Comment by aaronsw on Mixed Reference: The Great Reductionist Project · 2012-12-25T20:57:42.907Z · LW · GW

It's too bad EY is deeply ideologically committed to a different position on AI, because otherwise his philosophy seems to very closely parallel John Searle's. Searle is clearer on some points and EY is clearer on others, but other than the AI stuff they take a very similar approach.

EDIT: To be clear, John Searle has written a lot, lot more than the one paper on the Chinese Room, most of it having nothing to do with AI.

Comment by aaronsw on Open Thread, December 1-15, 2012 · 2012-12-01T14:40:54.457Z · LW · GW

That's a good explanation of how to do Solomonoff Induction, but it doesn't really explain why. Why is a Kolmogorov complexity prior better than any other prior?
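
For concreteness, the prior in question (in its standard formulation; this isn't specific to the linked explanation) weights each hypothesis, represented as a program p for a prefix-free universal Turing machine U, by 2^-|p|, summing over programs whose output begins with the observed string x:

    \[ M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|} \]

Shorter programs, i.e. hypotheses of lower Kolmogorov complexity, dominate the sum. The question stands: why should that particular weighting be preferred over any other computable prior?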

Comment by aaronsw on Open Thread, December 1-15, 2012 · 2012-12-01T14:36:57.598Z · LW · GW

I agree with EY that collapse interpretations of QM are ridiculous but are there any arguments against the Bohm interpretation better than the ones canvassed in the SEP article?

http://plato.stanford.edu/entries/qm-bohm/#o

Comment by aaronsw on Open Thread, December 1-15, 2012 · 2012-12-01T14:18:40.922Z · LW · GW

Someone smart recently argued that there's no empirical evidence that young earth creationists are wrong, because all the evidence we have of the Earth's age is consistent with the hypothesis that God created the earth 4000 years ago but designed it to look like it was much older. Is there a good one-page explanation of the core LessWrong idea that your beliefs need to be shifted by evidence even when the evidence isn't dispositive, as opposed to the standard scientific notion of devastating proof? Right now the idea seems smeared across the Sequences.
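
As I understand it, the one-line version is just Bayes' theorem in odds form (the equation is standard; applying it to this example is my own gloss):

    \[ \frac{P(\text{old earth} \mid E)}{P(\text{young earth} \mid E)} \;=\; \frac{P(E \mid \text{old earth})}{P(E \mid \text{young earth})} \cdot \frac{P(\text{old earth})}{P(\text{young earth})} \]

Evidence doesn't have to rule a hypothesis out to count against it: any observation the two hypotheses assign different probabilities to shifts the odds by that likelihood ratio, and a hypothesis that matches the data only by being fine-tuned to match it pays for that flexibility in its prior.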

Comment by aaronsw on Philosophy Needs to Trust Your Rationality Even Though It Shouldn't · 2012-12-01T14:12:32.345Z · LW · GW

Typo: But or many philosophical problems

Comment by aaronsw on Intuitions Aren't Shared That Way · 2012-12-01T14:02:56.234Z · LW · GW

You might enjoy http://www.aaronsw.com/weblog/moralbiases

Comment by aaronsw on Causal Universes · 2012-11-29T23:40:30.316Z · LW · GW

I don't totally understand it, but Zuse 1969 seems to talk about spacetime as a sort of discrete causal graph with c as the generalization of locality ("In any case, a relation between the speed of light and the speed of transmission between the individual cells of the cellular automaton must result from such a model."). Fredkin and Wolfram probably also have similar discussions.

Comment by aaronsw on [LINK] "Prediction Audits" for Nate Silver, Dave Weigel · 2012-11-09T01:45:41.868Z · LW · GW

Why doesn't Jackman get a Brier score? He claims it's .00991: http://jackman.stanford.edu/blog/?p=2602
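
For anyone unfamiliar with it, the Brier score is just the mean squared error of probabilistic forecasts against 0/1 outcomes (lower is better). A minimal sketch in Python, with made-up numbers rather than Jackman's:

    def brier_score(forecasts, outcomes):
        # Mean squared difference between predicted probabilities and binary outcomes.
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # e.g. three win probabilities vs. what actually happened
    print(brier_score([0.9, 0.8, 0.2], [1, 1, 0]))  # ~0.03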

Comment by aaronsw on [LINK] "Prediction Audits" for Nate Silver, Dave Weigel · 2012-11-09T01:44:25.915Z · LW · GW

Apparently a team at Penn is doing this as well:

http://jackman.stanford.edu/blog/?p=2602

Comment by aaronsw on The Fabric of Real Things · 2012-10-12T12:37:39.237Z · LW · GW

Wolfram 2002 argues that spacetime may actually be a discrete causal network and writes:

The idea that space might be defined by some sort of causal network of discrete elementary quantum events arose in various forms in work by Carl von Weizsäcker (ur-theory), John Wheeler (pregeometry), David Finkelstein (spacetime code), David Bohm (topochronology) and Roger Penrose (spin networks; see page 1055).

Later, in 10.9, he discusses using graphical causal models to fit observed data using Bayes' rule. I don't know if he ever connects the two points, though.

Comment by aaronsw on Skill: The Map is Not the Territory · 2012-10-05T20:27:40.762Z · LW · GW

Philosophy posts are useful if they're interesting whereas how-to's are only useful if they work. While I greatly enjoy these posts, their effectiveness is admittedly speculative.

Comment by aaronsw on Where to Intervene in a Human? · 2012-10-02T13:53:22.691Z · LW · GW

  • Doing hacker exercises every morning
  • Taking a cold shower every morning
  • Putting on pants
  • Lying flat on my back and closing my eyes until I consciously process all the things that are nagging at me and begin to feel more focused
  • Asking someone to coach me through getting started on something
  • Telling myself that doing something I don't want to do will make me stronger
  • Squeezing a hand grip exerciser for as long as I can (inspired by Muraven 2010; mixed results with this one)

You?

Comment by aaronsw on What are the optimal biases to overcome? · 2012-08-19T13:14:56.453Z · LW · GW

It's been two weeks. Can you post it now?

Comment by aaronsw on Cult impressions of Less Wrong/Singularity Institute · 2012-08-18T19:14:06.087Z · LW · GW

Has anyone seriously suggested you invented MWI? That possibility never even occurred to me.

Comment by aaronsw on What are the optimal biases to overcome? · 2012-08-08T16:31:16.932Z · LW · GW

The main insight of the book is very simple to state. However, the insight was so fundamental that it required me to update a great number of other beliefs I had, so I found being able to read a book's worth of examples of it being applied over and over again was helpful and enjoyable. YMMV.

Comment by aaronsw on Self-skepticism: the first principle of rationality · 2012-08-06T22:57:42.892Z · LW · GW

Unlike, say, wedrifid, whose highly-rated comment was just full of facts!

Comment by aaronsw on Cult impressions of Less Wrong/Singularity Institute · 2012-08-06T22:43:02.895Z · LW · GW

It seems a bit bizarre to say I've dismissed LessWrong given how much time I've spent here lately.

Comment by aaronsw on Self-skepticism: the first principle of rationality · 2012-08-06T11:57:34.489Z · LW · GW

FWIW, I don't think the Singularity Institute is woo and my current view is that giving money to lukeprog is probably a better idea than the vast majority of charitable contributions.

Comment by aaronsw on Self-skepticism: the first principle of rationality · 2012-08-06T11:54:42.783Z · LW · GW

> You started with an intent to associate SIAI with self delusion and then tried to find a way to package it as some kind of rationality related general point.

No, I'd love another example to use so that people don't have this kind of emotional reaction. Please suggest one if you have one.

UPDATE: I thought of a better example on the train today and changed it.

Comment by aaronsw on Cult impressions of Less Wrong/Singularity Institute · 2012-08-06T11:50:08.189Z · LW · GW

> Offhand, can you think of a specific test that you think ought to be applied to a specific idiosyncratic view?

Well, for example, if EY is so confident that he's proven "MWI is obviously true - a proposition far simpler than the argument for supporting SIAI", he should try presenting his argument to some skeptical physicists. Instead, it appears the physicists who happened to run across his argument found it severely flawed.

How rational is it to think that you've found a proof that most physicists are wrong and then never run it by any physicists to see if you're right?

> My read on your comment is: LWers don't act humble, therefore they are crackpots.

I do not believe that.

As for why SI's approach is dangerous, I think Holden put it well in the most upvoted post on the site.

I'm not trying to be inflammatory, I just find it striking.

Comment by aaronsw on Cult impressions of Less Wrong/Singularity Institute · 2012-08-05T23:22:53.355Z · LW · GW

I think the biggest reason Less Wrong seems like a cult is because there's very little self-skepticism; people seem remarkably confident that their idiosyncratic views must be correct (if the rest of the world disagrees, that's just because they're all dumb). There's very little attempt to provide any "outside" evidence that this confidence is correctly-placed (e.g. by subjecting these idiosyncratic views to serious falsification tests).

Instead, when someone points this out, Eliezer fumes "do you know what pluralistic ignorance is, and Asch's conformity experiment? ... your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong".

What's especially amusing is that EY is able to keep this stuff up by systematically ignoring every bit of his own advice: telling people to take the outside view and then taking the inside one, telling people to look into the dark while he studiously avoids it, emphasizing the importance of AI safety while he embarks on an extremely dangerous way of building AI -- you can do this with pretty much every entry in the sequences.

These are the sorts of things that make me think LessWrong is most interesting as a study in psychoceramics.

Comment by aaronsw on "Epiphany addiction" · 2012-08-05T12:42:03.355Z · LW · GW

Yvain's argument was that "x-rationality" (roughly the sort of thing that's taught in the Sequences) isn't practically helpful, not that nothing is. I certainly have read lots of things that have significantly helped me make better decisions and have a better map of the territory. None of them were x-rational. Claiming that x-rationality can't have big effects because the world is too noisy just seems like another excuse for avoiding reality.

Comment by aaronsw on What are the optimal biases to overcome? · 2012-08-05T11:43:04.409Z · LW · GW

I really enjoyed The Seven Habits of Highly Effective People. (By contrast, I tried reading some @pjeby stuff yesterday and it had all the problems you describe cranked up to 11 and I found it incredibly difficult to keep reading.)

I don't think the selection bias thing would be a problem if the community was focused on high-priority instrumental rationality techniques, since at any level of effectiveness becoming more effective should be a reasonably high priority. (By contrast, if the community is focused on low-priority techniques it's not that big a deal (that was my attitude toward OvercomingBias at the beginning) and when it gets focused on stuff like cryo/MWI/FAI I find that an active turnoff.)

I think there's a decent chance epistemic rationality, ceteris paribus, makes you less likely to be traditionally successful. My general impression from talking to very successful people is that very few of them are any good at figuring out what's true; indeed, they often seem to have set up elaborate defense mechanisms to make sure no one accidentally tells them the truth.

Comment by aaronsw on What are the optimal biases to overcome? · 2012-08-04T22:17:57.254Z · LW · GW

Carol Dweck's Mindset. While unfortunately it has the cover of a self-help book, it's actually a summary of some fascinating psychology research which shows that a certain way of conceptualizing self-improvement tends to be unusually effective at it.

Comment by aaronsw on A cynical explanation for why rationalists worry about FAI · 2012-08-04T22:02:11.870Z · LW · GW

My suspicion isn't because the recommended strategy has some benefits, it's because it has no costs. It would not be surprising if an asteroid-prevention plan used NASA and nukes. It would be surprising if it didn't require us to do anything particularly hard. What's suspicious about SIAI is how often their strategic goals happen to be exactly the things you might suspect the people involved would enjoy doing anyway (e.g. writing blog posts promoting their ideas) instead of difficult things at which they might conspicuously fail.

Comment by aaronsw on A cynical explanation for why rationalists worry about FAI · 2012-08-04T15:07:14.317Z · LW · GW

Two people have been confused by the "arguing about ideas" phrase, so I changed it to "thinking about ideas".

Comment by aaronsw on What are the optimal biases to overcome? · 2012-08-04T14:38:23.151Z · LW · GW

lukeprog's writings, especially Build Small Skills in the Right Order.

Comment by aaronsw on What are the optimal biases to overcome? · 2012-08-04T14:34:19.177Z · LW · GW

Ray Dalio's "Principles". There's a bunch of stuff in there that I disagree with, but overall he seems pretty serious about tackling these issues -- and apparently has been very successful.

Comment by aaronsw on What are the optimal biases to overcome? · 2012-08-04T14:33:58.125Z · LW · GW

Use direct replies to this comment for suggesting things about tackling practical biases.

Comment by aaronsw on A cynical explanation for why rationalists worry about FAI · 2012-08-04T14:13:51.742Z · LW · GW

Yes, I agree that if a politician or government official tells you the most effective thing you can do to prevent asteroids from destroying the planet is "keep NASA at current funding levels and increase funding for nuclear weapons research" then you should be very suspicious.

Comment by aaronsw on A cynical explanation for why rationalists worry about FAI · 2012-08-04T14:11:01.256Z · LW · GW

Can you point to something I said that you think is wrong?

My understanding of the history (from reading an interview with Eliezer) is that Eliezer concluded the singularity was the most important thing to work on and then decided the best way to get other people to work on it was to improve their general rationality. But whether that's true or not, I don't see how that's inconsistent with the notion that Eliezer and a bunch of people similar to him are suffering from motivated reasoning.

I also don't see how I conflated LW and SI. I said many LW readers worry about UFAI and that SI has taken the position that the best way to address this worry is to do philosophy.

Comment by aaronsw on A cynical explanation for why rationalists worry about FAI · 2012-08-04T14:08:21.570Z · LW · GW

Right. I tweaked the sentence to make this more clear.

Comment by aaronsw on A cynical explanation for why rationalists worry about FAI · 2012-08-04T14:05:43.732Z · LW · GW

Yes, "arguing about ideas on the Internet" is a shorthand for avoiding confrontations with reality (including avoiding difficult engineering problems, avoiding experimental tests of your ideas, etc.).

Comment by aaronsw on A cynical explanation for why rationalists worry about FAI · 2012-08-04T14:04:34.091Z · LW · GW

There's nothing wrong with arguing on the Internet. I'm merely asking whether the belief that "arguing on the Internet is the most important thing anyone can do to help people" is the result of motivated reasoning.

Comment by aaronsw on Reply to Holden on The Singularity Institute · 2012-08-04T11:18:27.475Z · LW · GW

On the question of the impact of rationality, my guess is that:

  1. Luke, Holden, and most psychologists agree that rationality means something roughly like the ability to make optimal decisions given evidence and goals.

  2. The main strand of rationality research followed by both psychologists and LWers has been focused on fairly obvious cognitive biases. (For short, let's call these "cognitive biases".)

  3. Cognitive biases cause people to make choices that are most obviously irrational, but not most importantly irrational. For example, it's very clear that spinning a wheel should not affect people's estimates of how many African countries are in the UN. But do you know anyone for whom this sort of thing is really their biggest problem?

  4. Since cognitive biases are the primary focus of research into rationality, rationality tests mostly measure how good you are at avoiding them. These are the tests used in the studies psychologists have done on whether rationality predicts success.

  5. LW readers tend to be fairly good at avoiding cognitive biases (and will be even better if CFAR takes off).

  6. But there are a whole series of much more important irrationalities that LWers suffer from. (Let's call them "practical biases" as opposed to "cognitive biases", even though both are ultimately practical and cognitive.)

  7. Holden is unusually good at avoiding these sorts of practical biases. (I've found Ray Dalio's "Principles", written by Holden's former employer, an interesting document on practical biases, although it also has a lot of stuff I disagree with or find silly.)

  8. Holden's superiority at avoiding practical biases is a big part of why GiveWell has tended to be more successful than SIAI. (Givewell.org has around 30x the traffic of Singularity.org according to Compete.com, and my impression is that it moves several times as much money, although I can't find a 2011 fundraising total for SIAI.)

  9. lukeprog has been better at avoiding practical biases than previous SIAI leadership and this is a big part of why SIAI is improving. (See, e.g., lukeprog's debate with EY about simply reading Nonprofit Kit for Dummies.)

  10. Rationality, properly understood, is in fact a predictor of success. Perhaps if LWers used success as their metric (as opposed to getting better at avoiding obvious mistakes), they might focus on their most important irrationalities (instead of their most obvious ones), which would lead them to be more rational and more successful.

Comment by aaronsw on Reply to Holden on 'Tool AI' · 2012-08-04T10:37:44.127Z · LW · GW

Then it does seem like your AI arguments are playing reference class tennis with a reference class of "conscious beings". For me, the force of the Tool AI argument is that there's no reason to assume that AGI is going to behave like a sci-fi character. For example, if something like On Intelligence turns out to be true, I think the algorithms it describes will be quite generally intelligent but hardly capable of rampaging through the countryside. It would be much more like Holden's Tool AI: you'd feed it data, it'd make predictions, you could choose to use the predictions.

(This is, naturally, the view of that school of AI implementers. Scott Brown: "People often seem to conflate having intelligence with having volition. Intelligence without volition is just information.")

Comment by aaronsw on Many Worlds, One Best Guess · 2012-08-04T10:23:34.653Z · LW · GW

"it will have conscious observers in it if it performs computations"

So your argument against Bohm depends on information functionalism?