[LINK] Scott Aaronson on Integrated Information Theory

post by DanielFilan · 2014-05-22T08:40:40.065Z · LW · GW · Legacy · 11 comments


Scott Aaronson, a complexity theory researcher, disputes Giulio Tononi's Integrated Information Theory (IIT) of consciousness, which holds that a physical system is conscious if and only if it has a high value of "integrated information". Quote:

So, this is the post that I promised to Max [Tegmark] and all the others, about why I don’t believe IIT.  And yes, it will contain that quantitative calculation [of the integrated information of a system that he claims is not conscious].

...

But let me end on a positive note.  In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed.  Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.

http://www.scottaaronson.com/blog/?p=1799

11 comments

Comments sorted by top scores.

comment by Shmi (shminux) · 2014-05-22T16:55:55.190Z · LW(p) · GW(p)

Here is my summary of his post and some related thoughts.

Scott instrumentalizes Chalmers' vague Hard Problem of consciousness:

the problem of explaining how and why we have qualia or phenomenal experiences — how sensations acquire characteristics, such as colours and tastes

into something concrete and measurable, which he dubs the Pretty-Hard Problem of Consciousness:

a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict

and shows that Tononi's IIT fails to solve the latter. He does this by constructing a counterexample: a simple system whose integrated information can be made arbitrarily high (higher than a human brain's) even though it does nothing anyone would call conscious. He also notes that building a theory of consciousness around information integration is not a promising approach in general:

As humans, we seem to have the intuition that global integration of information is such a powerful property that no “simple” or “mundane” computational process could possibly achieve it. But our intuition is wrong. If it were right, then we wouldn’t have linear-size superconcentrators or LDPC codes.
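To give a rough feel for that claim, here is a toy sketch (not Scott's actual construction; the sizes, fan-in, and wiring below are invented purely for illustration) of a parity-check-style network in which flipping a single bit spreads across the whole system within a few update steps, so information gets "globally integrated" by an utterly mundane computation:

```python
import random

random.seed(0)
N = 64        # number of bits in the system
FAN_IN = 8    # each output bit is the parity of this many scattered input bits

# Random sparse "parity-check" wiring: output i reads FAN_IN randomly chosen inputs.
wiring = [random.sample(range(N), FAN_IN) for _ in range(N)]

def step(bits):
    """One update: each output bit is the XOR (parity) of inputs drawn from
    all over the system, so local changes spread globally within a few steps."""
    return [sum(bits[j] for j in wiring[i]) % 2 for i in range(N)]

state = [random.randint(0, 1) for _ in range(N)]
perturbed = state[:]
perturbed[0] ^= 1  # flip a single input bit

# Track how widely the single-bit difference spreads under this mundane computation.
for t in range(1, 4):
    state, perturbed = step(state), step(perturbed)
    diff = sum(a != b for a, b in zip(state, perturbed))
    print(f"after {t} step(s), {diff} of {N} bits differ")
```

Nothing here looks remotely like a mind, yet everything quickly comes to depend on everything else; that is the flavor of the objection, since a measure that rewards this kind of global mixing will assign high scores to systems nobody considers conscious.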

Scott is very good at instrumentalizing vague ideas (what lukeprog calls hacking away at the edges). He did the same for the notion of "free will" in his paper The Ghost in the Quantum Turing Machine. His previous blog entry was "The NEW Ten Most Annoying Questions in Quantum Computing", which lists some of the "edges" to hack at when thinking about the "deep" and "hard" problems of quantum computing. This approach has been very successful in the past:

of the nine questions, six have by now been completely settled

after 8 years of work.

I hope that there are people at MIRI who are similarly good at instrumentalizing big ideas into interesting yet solvable questions.

comment by DanielVarga · 2014-05-31T10:15:36.590Z · LW(p) · GW(p)

Tononi gives a very interesting (weird?) reply: Why Scott should stare at a blank wall and reconsider (or, the conscious grid), where he accepts the very unintuitive conclusion that an empty square grid is conscious according to his theory. (Scott's phrasing: "[Tononi] doesn’t “bite the bullet” so much as devour a bullet hoagie with mustard.") Here is Scott's reply to the reply:

Giulio Tononi and Me: A Phi-nal Exchange

comment by Toggle · 2014-07-11T15:19:02.185Z · LW(p) · GW(p)

Here's one particularly weird consequence of IIT: a zeroed-out system has the same degree of consciousness as a dynamic one, because the measure is structural. For example, a physical, memristor-based neural net has the same degree of integrated information when it's unplugged. Or, to chase a more absurd-seeming conclusion, human consciousness is not reduced immediately upon death (assuming no brain damage), but instead decreases slowly as the cellular arrangement begins to decay.

Given that, I agree with Scott: while interesting, IIT doesn't track particularly well with 'consciousness' in the conceptual sense.

comment by Transfuturist · 2014-05-22T19:35:20.576Z · LW(p) · GW(p)

His use of philosophical zombies does not dissuade me.

The idea of devices that transform input data with low-density parity-check codes having more phi than humans concerns me slightly more. If it is a valid complaint, then I believe it's probably an issue with the formalism, not with the concept.

I need to read further.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-23T08:06:57.316Z · LW(p) · GW(p)

I didn't downvote, but I'm guessing it's because you stated your opinions about the post without giving reasons for believing in those opinions.

Replies from: Transfuturist
comment by Transfuturist · 2014-05-23T20:21:55.704Z · LW(p) · GW(p)

I didn't think I had to cite my sources on philosophical zombies; we're on LessWrong.

And the downvoting continues. Would the individual in question say something?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-24T13:43:44.354Z · LW(p) · GW(p)

There was also this:

If it is a valid complaint, then I believe it's probably an issue with the formalism, not with the concept.

Replies from: Transfuturist
comment by Transfuturist · 2014-05-24T18:21:48.018Z · LW(p) · GW(p)

What's wrong with that? I'd say it's a prevalent problem when trying to formalize complicated concepts.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-25T05:34:18.581Z · LW(p) · GW(p)

Like I said in my original comment, it's stating your opinion without giving any reason to believe in that opinion. If you don't say why you believe that it's an issue with the formalism rather than the concept, you're adding more noise than information. Facts are better than opinions.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-05-25T20:38:41.404Z · LW(p) · GW(p)

Take that, Aumann!

Replies from: Kawoomba
comment by Kawoomba · 2014-05-25T21:07:48.607Z · LW(p) · GW(p)

Take that, Aumann!

His answer: "Au, Mann!" ("au" means "ouch" in German, his original mother tongue). Aw man, bad puns are my personal demon (works phonetically). Amen to that being a bad case of nomen est omen.

Aumann must be rolling in his grave from disagreeing with all the misuses of his agreement theorem as though it applied in casual social contexts. (Figure of speech, since he's still alive.)

ETA: Au-mann puns, the poor man's gold!