AI prediction case study 3: Searle's Chinese room

post by Stuart_Armstrong · 2013-03-13T12:44:38.095Z · score: 7 (10 votes) · LW · GW · Legacy · 36 comments


Kaj Sotala, Seán ÓhÉigeartaigh and I recently submitted a paper entitled "The errors, insights and lessons of famous AI predictions and what they mean for the future" to the conference proceedings of the AGI12/AGI Impacts Winter Intelligence conference. Sharp deadlines prevented us from following the ideal procedure of first presenting it here and getting feedback; instead, we'll present it here after the fact.

The prediction classification schemas can be found in the first case study.

Locked up in Searle's Chinese room

Searle's Chinese room thought experiment is a famous critique of some of the assumptions of 'strong AI' (which Searle defines as the belief that 'the appropriately programmed computer literally has cognitive states'). There has been a lot of further discussion on the subject (see for instance (Sea90,Har01)), but, as in previous case studies, this section will focus exclusively on his original 1980 publication (Sea80).

In the key thought experiment, Searle imagined that AI research had progressed to the point where a computer program had been created that could demonstrate the same input-output performance as a human - for instance, it could pass the Turing test. Nevertheless, Searle argued, this program would not demonstrate true understanding. He supposed that the program's inputs and outputs were in Chinese, a language Searle couldn't understand. Instead of a standard computer program, the required instructions were given on paper, and Searle himself was locked in a room somewhere, slavishly following the instructions and therefore causing the same input-output behaviour as the AI. Since it was functionally equivalent to the AI, the setup should, from the 'strong AI' perspective, demonstrate understanding if and only if the AI did. Searle then argued that there would be no understanding at all: he himself couldn't understand Chinese, and there was no-one else in the room to understand it either.

The whole argument depends on strong appeals to intuition (indeed D. Dennett went as far as accusing it of being an 'intuition pump' (Den91)). The required assumptions are:

Thus the Chinese room argument is unconvincing to those who don't share Searle's intuitions. It cannot be accepted solely on Searle's philosophical expertise, as other philosophers disagree (Den91,Rey86). On top of this, Searle is very clear that his thought experiment doesn't put any limits on the performance of AIs (he argues that even a computer with all the behaviours of a human being would not demonstrate true understanding). Hence the Chinese room seems to be useless for AI predictions. Can useful predictions nevertheless be extracted from it?

These need not come directly from the main thought experiment, but from some of the intuitions and arguments surrounding it. Searle's paper presents several interesting arguments, and it is notable that many of them are disconnected from his main point. For instance, errors made in 1980 AI research should be irrelevant to the Chinese room - a pure thought experiment. Yet Searle argues about these errors, and there is at least an intuitive if not a logical connection to his main point. There are actually several different arguments in Searle's paper, not clearly divided from each other, and likely to be rejected or embraced depending on the degree of overlap with Searle's intuitions. This may explain why many philosophers have found Searle's paper so complex to grapple with.

One feature Searle highlights is the syntactic-semantic gap. If he is correct, and such a gap exists, this demonstrates the possibility of further philosophical progress in the area (in the opinion of one of the authors, the gap can be explained by positing that humans are purely syntactic beings, but that have been selected by evolution such that human mental symbols correspond with real world objects and concepts - one possible explanation among very many). For instance, Searle directly criticises McCarthy's contention that "Machines as simple as thermostats can have beliefs" (McC79). If one accepted Searle's intuition there, one could then ask whether more complicated machines could have beliefs, and what attributes they would need. These would be attributes that it would be useful to have in an AI. Thus progress in 'understanding understanding' would likely make it easier to go about designing AI - but only if Searle's intuition is correct that AI designers do not currently grasp these concepts.

That can be expanded into a more general point. In Searle's time, the dominant AI paradigm was GOFAI (Good Old-Fashioned Artificial Intelligence (Hau85)), which focused on logic and symbolic manipulation. Many of these symbols had suggestive labels: SHRDLU, for instance, had a vocabulary that included 'red', 'block', 'big' and 'pick up' (Win71). Searle's argument can be read, in part, as a claim that these suggestive labels did not in themselves impart true understanding of the concepts involved - SHRDLU could parse "pick up a big red block" and respond with an action that seems appropriate, but could not understand those concepts in a more general environment. The decline of GOFAI since the 1980s cannot be claimed as vindication of Searle's approach, but it at least backs up his intuition that these early AI designers were missing something.

Another falsifiable prediction can be extracted, not from the article but from the intuitions supporting it. If formal machines do not demonstrate understanding, but brains (or brain-like structures) do, this would lead to certain scenario predictions. Suppose two teams were competing to complete an AI that would pass the Turing test. One team was using standard programming techniques on a computer; the other was building it out of brain (or brain-like) components. Apart from this, there is no reason to prefer one team over the other.

According to Searle's intuition, any AI made by the first team will not demonstrate true understanding, while those of the second team may. Adding the reasonable assumption that it is harder to simulate understanding if one doesn't actually possess it, one is led to the prediction that the second team is more likely to succeed.

Thus there are three predictions that can be extracted from the Chinese room paper:

  1. Philosophical progress in understanding the syntactic-semantic gap may help towards designing better AIs.
  2. GOFAI's proponents incorrectly attribute understanding and other high-level concepts to simple symbolic manipulation machines, and will not succeed with their approach.
  3. An AI project that uses brain-like components is more likely to succeed (everything else being equal) than one based on copying the functional properties of the mind.

Therefore one can often extract predictions from even the most explicitly anti-predictive philosophy of AI paper.

 

References:

 

36 comments

Comments sorted by top scores.

comment by Viliam_Bur · 2013-03-13T16:00:39.650Z · score: 10 (10 votes) · LW · GW

SHRDLU could parse ''pick up a big red block'' and respond with an action that seems appropriate, but could not understand those concepts in a more general environment.

Could the same thing also be true for humans, only that we have a much larger environment where we can function properly?

When we discuss many-worlds interpretation, the meaning of probability, whether atoms have identity, et cetera... a superintelligence could observe our attempts and say: "Humans have some symbols hardcoded by evolution, they can also do some simple reasoning about them, but when they get too far away from their natural environment, they are completely lost."

comment by Izeinwinter · 2013-03-14T17:02:06.544Z · score: 5 (7 votes) · LW · GW

The Chinese room is philosophical malpractice.

Short demonstration. Searle runs on a brain. A maximally minimized alien exploratory probe - smaller than a brain cell - decides that his head would be an excellent location to hang out and study human culture, so it drills a hole in his blood-brain barrier, and hollows out a brain cell. In order not to interfere with his thinking, the probe reads incoming signals to this - now entirely hollow and non-functional - cell and sends the correct response. Of course, even with femto-tech, you cannot do much social studies based out of a probe smaller than a brain cell. So the probe spreads out and hollows out the cells in most of Searle's cerebrum, so that it can decompress machinery and expert processes from its archives. Each and every cell in Searle's brain thus ends up containing alien social studies, xeno-ology, etc. majors. All of whom - as an utterly minute part of their background processes - shunt along the neural signals that make up Searle's cognition. And none of whom pay the slightest bit of attention to doing so.

At what point during this process did Searle - who did not notice any of this - stop being a conscious entity?

Yes. The above is very silly. So is the original argument. He inserts an obviously conscious actor performing a mechanical sub-task in a cognitive system, and then argues that because the task this actor performs is obviously not thinking, then nothing the entire system does can be. Uhm. No.

comment by OrphanWilde · 2013-03-14T20:22:22.227Z · score: 4 (4 votes) · LW · GW

Dissolving the Chinese Room Experiment teaches you a heck of a lot about what you're intending to do.

You've just demonstrated that the experiment is flawed - but you haven't actually demonstrated -why- it is flawed. Don't just prove the idea wrong, dissolve it, figure out exactly where the mistake is made.

You'll see that it, in fact, does have considerable value to those studying AI.

Schroedinger's Cat has a lot of parallels to the Chinese Room Experiment; they both represent major hurdles to understanding, to truly dissolving the problem you intend to understand. Unfortunately a lot of people stop there, and think that the problem, as posed, represents some kind of understanding in itself.

comment by AspiringRationalist · 2013-03-15T04:02:57.525Z · score: -1 (3 votes) · LW · GW

The Chinese room argument is wrong because it fails to account for emergence. A system can possess properties that its components don't; for example, my brain is made of neurons that don't understand English, but that doesn't mean my brain as a whole doesn't. The same argument could be applied to the Chinese room.

The broader failure is assuming that things that apply to one level of abstraction apply to another.

comment by TheAncientGeek · 2015-04-12T20:12:25.227Z · score: 0 (0 votes) · LW · GW

A system can possess properties that its components don't;

But a computational system can't be mysteriously emergent. Your response is equivalent to saying that semantics is constructed, reductionistically, out of syntax. How?

comment by shminux · 2013-03-14T20:48:34.715Z · score: -2 (2 votes) · LW · GW

Schroedinger's Cat has a lot of parallels to the Chinese Room Experiment

...except, unlike the Chinese room one, it is not a dissolved problem; it's a real open problem in physics.

comment by OrphanWilde · 2013-03-14T21:26:26.036Z · score: 1 (3 votes) · LW · GW

I think you do not fully understand the idea if you regard it as an open problem. It hints and nudges and points at an open problem (within a single interpretation of quantum physics, one of declining popularity), which is where dissolution comes in, but in itself it is not an open problem, nor is resolution of that open problem necessary to its dissolution. At best it suggests that that interpretation of quantum physics is absurd, in the "This conflicts with every intuition I have about the universe" sense.

Outside the domain of that interpretation, it maintains the ability to be dissolved for understanding, although it doesn't say much of meaning about the intuitiveness of physics any longer.

Or, in other words: If you think that Schroedinger's Cat is an open problem in physics, you've made the basic mistake I alluded to before, in thinking that the problem as posed represents an understanding. The understanding comes from dissolving it; without that step, it's just a badly misrepresented meme.

comment by TheAncientGeek · 2015-04-12T20:17:39.502Z · score: 0 (0 votes) · LW · GW

The Cat has as many solutions as there are interpretations of QM, and most are counterintuitive. The Cat is an open problem, inasmuch as we do not know which solution is correct.

comment by shminux · 2013-03-14T21:30:06.909Z · score: -1 (3 votes) · LW · GW

Feel free to dissolve it then without referring to interpretations. As far as I can tell, you will hit the Born rule at some point, which is the open problem I was alluding to.

comment by OrphanWilde · 2013-03-14T21:40:31.373Z · score: 1 (1 votes) · LW · GW

Born's Rule is a -bit- beyond the scope of Schroedinger's Cat. That's a bit like saying the Chinese Room Experiment isn't dissolved because we haven't solved the Hard Problem of Consciousness yet. [ETA: Only more so, because the Hard Problem of Consciousness is what the Chinese Room Experiment is pointing its fingers and waving at.]

comment by randallsquared · 2013-03-17T16:44:02.859Z · score: 0 (0 votes) · LW · GW

But it's actually true that solving the Hard Problem of Consciousness is necessary to fully explode the Chinese Room! Without having solved it, it's still possible that the Room isn't understanding anything, even if you don't regard this as a knock against the possibility of GAI. I think the Room does say something useful about Turing tests: that behavior suggests implementation, but doesn't necessarily constrain it. The Giant Lookup Table is another, similarly impractical, argument that makes the same point.

Understanding is either only inferred from behavior, or actually a process that needs to be duplicated for a system to understand. If the latter, then the Room may speak Chinese without understanding it. If the former, then it makes no sense to say that a system can speak Chinese without understanding it.

comment by OrphanWilde · 2013-03-19T14:37:14.619Z · score: 1 (1 votes) · LW · GW

Exploding the Chinese Room leads to understanding that the Hard Problem of Consciousness is in fact a problem; its purpose was to demonstrate that computers can't implement consciousness, which it doesn't actually do.

Hence my view that it's a useful idea for somebody considering AI to dissolve, but not necessarily a problem in and of itself.

comment by Stuart_Armstrong · 2013-03-15T09:05:56.962Z · score: 1 (1 votes) · LW · GW

Or another counterargument: http://lesswrong.com/lw/ghj/searles_cobol_room/

comment by V_V · 2013-03-15T03:14:38.448Z · score: 1 (1 votes) · LW · GW

Searle's argument applies to arbitrary computable AIs, not just GOFAI.

I don't think the argument leads to any falsifiable prediction, you are stretching it beyond its scope.

comment by Stuart_Armstrong · 2013-03-15T08:58:28.942Z · score: 0 (0 votes) · LW · GW

I don't think the argument leads to any falsifiable prediction, you are stretching it beyond its scope.

Yes, that's what I was doing. I was trying to find something falsifiable from the intuitions behind it. A lot of thought and experience went into those intuitions; it would be useful to get something out of them, when possible.

comment by V_V · 2013-03-15T21:48:02.387Z · score: 1 (1 votes) · LW · GW

Ok, but keep in mind that hindsight bias is hard to avoid when making predictions in the past whose outcome you already know.

Since we already know that GOFAI approaches hit diminishing returns and didn't get anywhere close to providing human-level AI, it might be tempting to say that Searle was addressing GOFAI.
But GOFAI failed because of complexity issues: both in creating explicit formal models of common knowledge and in doing inference on these formal models due to combinatorial explosion. Searle didn't refer to complexity, hence I don't think his analysis was relevant in forecasting the failure of GOFAI.

comment by Stuart_Armstrong · 2013-03-18T10:32:58.218Z · score: 0 (0 votes) · LW · GW

I didn't say they were good falsifiable predictions - just that they were there. And it was a cogent critique that people were using misleading terms which implied their programs had more implicit capacity than they actually did.

comment by majorcornwallace · 2013-07-06T23:54:45.090Z · score: 1 (1 votes) · LW · GW

I think this article is interesting--but not thanks to Searle.

Essentially what we're seeing here is the Parable Effect - you can endlessly retell a story from new points of view. I'm NOT suggesting this is a horrible thing, as anything that makes you think is, in my opinion, a good step. This is my interpretation of what Stuart is doing when mining "Searle's intuitions".

The weakness of parables, though, is that they are entirely impressionistic. This is why I give more credit to Stuart in his exploration than I do to "The Chinese Room". The CR parable is technically horrifically flawed. Also, the fact that it points not to specific issues with AI but rather to vague ones is another example of how a parable may indicate a broad swath of observation without actually containing useful detail.

Obvious problem is obvious: even at the time of Searle's logical analysis, most computer scientists entering the field understood they were up against a huge wall of complexity when it came to AI. I think the mistakes were not so much in realizing that an IBM 360 or, for that matter, a smaller circuit was going to contain a "brain" - the mistakes were in trying to assume what the basic building blocks were going to be.

Because the actual weaknesses in approach are empirical in nature, the actual refinements over time in AI research are not about philosophical impossibilities but rather just data and theoretics. Dennett, as an example, tries to frame computing theory so as to make the theoretics more understandable. He does this rather than trying to a priori deduce the actual nature of computing's future (sans hypothesis).

So, while I'll agree that anyone can use any narrative as a kickstarter to thinking, the value of the original narrative is going to lie not just in where the narrative is "inspired from" but also in the details of actual empirical relevance involved. This is a stark contrast between, say, Schrodinger's Cat and the Chinese Room. The robustness of one is immensely higher.

The flip-side is AI researchers can easily ignore the Chinese Room entirely without risk of blundering. The parable actually doesn't provide anything on the order of guidance Searle seems to suggest it does.

comment by DanArmak · 2013-03-14T21:12:17.801Z · score: 1 (1 votes) · LW · GW

Adding the reasonable assumption that it is easier to harder to simulate understanding if one doesn't actually possess it

One of these is wrong.

comment by Stuart_Armstrong · 2013-03-15T08:57:21.826Z · score: 0 (0 votes) · LW · GW

Oops, thanks! One of those is no longer wrong, because it's no longer there.

comment by carsonmcneil · 2015-04-10T02:54:24.455Z · score: 0 (0 votes) · LW · GW

Hm, I have a lot of problems with Searle's argument. But even if you skip over all of the little issues, such as "The Turing Test is not a reasonable test of conscious experience", I think his biggest flaw is this assumption:

The intuition that the Chinese room follows a purely syntactic (symbol-manipulating) process rather than a semantic (understanding) one is a correct philosophical judgement.

If you begin with the theory that consciousness arises from information-theoretical properties of a computation (such as Koch and Tononi's Integrated Information Theory), then while you may reach some unintuitive conclusions, you certainly don't reach any contradiction, meaning that Searle's argument is not at all a sufficient disproof of AI's conscious experience. Instead, you simply hit the conclusion that for some implementations of rulesets, the human-ruleset system IS conscious, and DOES understand Chinese, in the same sense that a native speaker does. I think we can undo the intuition scrambling by stating that the ruleset is analogous to a human brain, and the human carrying out the mindless computation is analogous to the laws of physics themselves. Do we demand that "the laws of physics" understand Chinese in order to say that a human does? Of course not. So why does it make sense to demand that the human (who, in the Chinese room, is really playing the same role as physics) understand Chinese in order to believe that the room-human system does?

comment by Stuart_Armstrong · 2015-04-10T11:25:20.095Z · score: 0 (0 votes) · LW · GW

So why does it make sense to demand that the human (who, in the Chinese room, is really playing the same role as physics) understand Chinese in order to believe that the room-human system does?

It doesn't.

I think the argument obscures what might be a genuine point, which I look at here: http://lesswrong.com/lw/lxi/hedoniums_semantic_problem/

comment by TheAncientGeek · 2015-04-12T20:01:32.885Z · score: -1 (1 votes) · LW · GW

in the opinion of one of the authors, the gap can be explained by positing that humans are purely syntactic beings, but that have been selected by evolution such that human mental symbols correspond with real world objects and concepts -

Finding that difficult to process. Is the correspondence supposed to be some sort of coincidental, occasionalistic thing? But why shouldn't a naturalist appeal to causation to ground symbols?

comment by Stuart_Armstrong · 2015-04-13T09:53:33.782Z · score: 1 (1 votes) · LW · GW

The causation would go as "beings with well grounded mental symbols are destroyed less often by the universe; beings with poorly grounded mental symbols are destroyed very often".

comment by TheAncientGeek · 2015-04-13T10:32:51.485Z · score: -1 (1 votes) · LW · GW

That's predictive accuracy. You can have predictive accuracy whilst badly misunderstanding the ontology of your perceived world. In fact, you can have predictive accuracy - doing the life-preserving thing in a given situation - without doing anything recognisable as symbolic processing. And getting the ontology right is the more intuitive expansion of grounding a symbol, out of the options.

comment by Stuart_Armstrong · 2015-04-13T11:28:39.169Z · score: 1 (1 votes) · LW · GW

The more complex your model, and the more complex reality is, the closer the correspondence between them, and the more your internal model acts as if it is "learning something" (making incorrect predictions, processing the data, then making better ones), the less scope there is for your symbols to be ungrounded.

It's always possible, but the level of coincidence needed to have a wrong model that behaves exactly the same as the right one is huge. And, I'd say, having a wrong model that gives the right predictions is just the same as having the right model with randomised labels. And since the labels are pretty meaningless anyway...

comment by TheAncientGeek · 2015-04-14T18:56:20.680Z · score: -1 (1 votes) · LW · GW

The more complex your model, and the more complex reality is, the closer the correspondence between them, and the more your internal model acts as if it is "learning something" (making incorrect predictions, processing the data, then making better ones), the less scope there is for your symbols to be ungrounded.

That seems to merely assert what I was arguing against... I was arguing that predictive accuracy is orthogonal to ontological correctness, and that grounding is to do with ontological correctness.

It's always possible, but the level of coincidence needed to have the wrong model that behave exactly the same as the right one is huge.

Right and wrong don't have univocal meaning, here. A random model will have poor predictive accuracy, but you can still have two models of equivalent predictive accuracy, but different ontological implications.

And, I'd say, having the wrong model that gives the right predictions is just the same as having the right model with randomised labels.

You seem to be picturing a model as a graph with labelled vertices, and assuming that two equally good models must have the same structure. That is not so.

For instance, the Ptolemaic system can be made as accurate as you want for generating predictions, by adding extra epicycles ... although it is false, in the sense of lacking ontological accuracy, since epicycles don't exist.

Another way is to notice that ontological revolutions can make merely modest changes to predictive abilities. Relativity inverted the absolute space and time of Newtonian physics, but its predictions were so close that subtle experiments were required to distinguish the two.

In that case, there is still a difference in empirical predictiveness. In the extreme case there is not: you can have two ontologies that always make the same predictions, the one being dual to the other. An example is wave-particle duality in quantum mechanics.

The fourth way is based on sceptical hypotheses, such as Brain in a Vat and Simulated Reality. Sceptical hypotheses can be rejected, for instance by appeal to Occam's Razor, but they cannot be refuted empirically, since any piece of empirical evidence is subject to sceptical interpretation. Occam's Razor is not empirical.

Science conceives of perception as based in causation, and causation as being comprised of chains of causes and effects, with only the ultimate effect, the sensation evoked in the observer, being directly accessible to the observer. The cause of the sensation, the other end of the causal chain, the thing observed, has to be inferred from the sensation, the ultimate effect -- and it cannot be inferred uniquely, since, in general, more than one cause can produce the same effect. A further proxy can always be inserted into a series of proxies. All illusions, from holograms to stage conjuring, work by producing the effect, the percept, in an unexpected way. A BIV or Matrix observer would assume that the percept of a horse is caused by a horse, but it would actually be a mad scientist pressing buttons.

A BIV or Matrix inhabitant could come up with science that works, that is useful, for many purposes, so long as their virtual reality had some stable rules. They could infer that dropping an (apparent) brick onto their (apparent) foot would cause pain, and so on. It would be like the player of a computer game being skilled in the game, knowing its internal physics. The science of the Matrix inhabitants would work, in a sense, but the workability of their science would be limited to relating apparent causes to apparent effects, not to grounding causes and effects in ultimate reality. But empiricism cannot tell us that we are not in the same situation.

In the words of Werner Heisenberg (Physics and Philosophy, 1958) "We have to remember that what we observe is not nature herself, but nature exposed to our method of questioning"

comment by Stuart_Armstrong · 2015-04-14T19:03:01.826Z · score: 1 (1 votes) · LW · GW

We don't seem to be disagreeing about anything factual. You just want grounding to be in "the fundamental ontology", while I'm content with them being grounded in the set of everything we could observe. If you like, I'm using Occam or simplicity priors on ontologies; if there are real objects behind the ones we can observe but we never know about them, I'd still count our symbols as grounded. (that's why I'd count virtual Napoleon's symbols as being grounded in virtual Waterloo, incidentally)

comment by TheAncientGeek · 2015-04-14T19:34:17.528Z · score: -1 (1 votes) · LW · GW

Being relatively liberal about symbol grounding makes it easier to answer Searle, but harder to answer other people, such as people who think germs or atoms are just social constructs.

comment by Stuart_Armstrong · 2015-04-14T19:45:05.702Z · score: 1 (1 votes) · LW · GW

but harder to answer other people, such as people who think germs or atoms are just social constructs.

What predictions do they make when looking into microscopes or treating infectious diseases?

comment by TheAncientGeek · 2015-04-14T21:05:07.289Z · score: 0 (2 votes) · LW · GW

Exactly the same... that is the point of predictive accuracy being orthogonal to ontological accuracy: you can vary the latter without affecting the former.

comment by Stuart_Armstrong · 2015-04-17T15:08:10.207Z · score: 0 (0 votes) · LW · GW

"just social constructs" is (almost always) not a purely ontological statement, though. And those who think that it's a social construct, but that the predictions of germ theories are still accurate... well, it doesn't really matter what they think, they just seem to have different labels to the rest of us for the same things.

comment by TheAncientGeek · 2015-04-18T09:30:22.312Z · score: 0 (0 votes) · LW · GW

As the author of the phrase, I meant "just social constructs" to be an ontological statement.

Are you saying they are actually realists about germs and atoms, and are stating their position dishonestly? Do you think "is real" is just a label in some unimportant way?

comment by Stuart_Armstrong · 2015-04-20T10:58:08.256Z · score: 0 (0 votes) · LW · GW

Do you think "is real" is just a label in some unimportant way?

Maybe. I'm not entirely sure what your argument is. For instance, were the matrices of the matrix-mechanics formulation of quantum physics "real"? Were the waves of the wave formulation of QM "real"? The two formulations are equivalent, and it doesn't seem useful to debate the reality of their individual idiosyncratic components this way.

comment by Ariel Reinheimer (ariel-reinheimer) · 2019-03-28T12:58:41.491Z · score: -2 (1 votes) · LW · GW

No divergent opinions here, just a large echo chamber ("Searle is committing philosophical malpractice"). LW in a microcosm.

Searle is a serious figure who has rock-solid foundations in the areas of cognition and language. The quote above reflects a certain boorishness that is very much present in the rationalist "community."

comment by jimrandomh · 2019-03-28T20:57:31.636Z · score: 4 (2 votes) · LW · GW

Welcome to LessWrong! Generally speaking, we strongly prefer comments that address arguments directly, rather than talking about people and qualifications. That said, this is quite an old post, so it's probably too late to get much further discussion on this particular paper.