AI prediction case study 3: Searle's Chinese room
post by Stuart_Armstrong · 2013-03-13T12:44:38.095Z · LW · GW · Legacy · 36 comments
Kaj Sotala, Seán ÓhÉigeartaigh and I recently submitted a paper entitled "The errors, insights and lessons of famous AI predictions and what they mean for the future" to the conference proceedings of the AGI12/AGI Impacts Winter Intelligence conference. Sharp deadlines prevented us from following the ideal procedure of first presenting it here and getting feedback; instead, we'll present it here after the fact.
The prediction classification schemas can be found in the first case study.
Locked up in Searle's Chinese room
- Classification: issues and metastatements, and a scenario, using philosophical arguments and expert judgement.
Searle's Chinese room thought experiment is a famous critique of some of the assumptions of 'strong AI' (which Searle defines as the belief that 'the appropriately programmed computer literally has cognitive states'). There has been a lot of further discussion on the subject (see for instance (Sea90,Har01)), but, as in previous case studies, this section will focus exclusively on his original 1980 publication (Sea80).
In the key thought experiment, Searle imagined that AI research had progressed to the point where a computer program had been created that could demonstrate the same input-output performance as a human - for instance, it could pass the Turing test. Nevertheless, Searle argued, this program would not demonstrate true understanding. He supposed that the program's inputs and outputs were in Chinese, a language Searle couldn't understand. Instead of a standard computer program, the required instructions were given on paper, and Searle himself was locked in a room somewhere, slavishly following the instructions and therefore causing the same input-output behaviour as the AI. Since it was functionally equivalent to the AI, the setup should, from the 'strong AI' perspective, demonstrate understanding if and only if the AI did. Searle then argued that there would be no understanding at all: he himself couldn't understand Chinese, and there was no-one else in the room to understand it either.
The whole argument depends on strong appeals to intuition (indeed D. Dennett went as far as accusing it of being an 'intuition pump' (Den91)). The required assumptions are:
- The Chinese room setup analogy preserves the relevant properties of the AI's program.
- Intuitive reasoning about the Chinese room is thus relevant reasoning about algorithms.
- The intuition that the Chinese room follows a purely syntactic (symbol-manipulating) process rather than a semantic (understanding) one is a correct philosophical judgement.
- The intuitive belief that humans, by contrast, do follow semantic processes is correct.
Thus the Chinese room argument is unconvincing to those who don't share Searle's intuitions. It cannot be accepted solely on Searle's philosophical expertise, as other philosophers disagree (Den91,Rey86). On top of this, Searle is very clear that his thought experiment doesn't put any limits on the performance of AIs (he argues that even a computer with all the behaviours of a human being would not demonstrate true understanding). Hence the Chinese room seems to be useless for AI predictions. Can useful predictions nevertheless be extracted from it?
These need not come directly from the main thought experiment, but from some of the intuitions and arguments surrounding it. Searle's paper presents several interesting arguments, and many of them are disconnected from his main point. For instance, errors made in 1980 AI research should be irrelevant to the Chinese room - a pure thought experiment. Yet Searle argues about these errors, and there is at least an intuitive if not a logical connection to his main point. There are actually several different arguments in Searle's paper, not clearly divided from each other, and likely to be rejected or embraced depending on the degree of overlap with Searle's intuitions. This may explain why many philosophers have found Searle's paper so hard to grapple with.
One feature Searle highlights is the syntactic-semantic gap. If he is correct, and such a gap exists, this demonstrates the possibility of further philosophical progress in the area (in the opinion of one of the authors, the gap can be explained by positing that humans are purely syntactic beings, but ones that have been selected by evolution so that their mental symbols correspond with real-world objects and concepts - one possible explanation among very many). For instance, Searle directly criticises McCarthy's contention that "Machines as simple as thermostats can have beliefs" (McC79). If one accepted Searle's intuition there, one could then ask whether more complicated machines could have beliefs, and what attributes they would need. These would presumably be attributes that it would be useful for an AI to have. Thus progress in 'understanding understanding' would likely make it easier to go about designing AI - but only if Searle's intuition is correct that AI designers do not currently grasp these concepts.
That can be expanded into a more general point. In Searle's time, the dominant AI paradigm was GOFAI (Good Old-Fashioned Artificial Intelligence (Hau85)), which focused on logic and symbolic manipulation. Many of these symbols had suggestive labels: SHRDLU, for instance, had a vocabulary that included 'red', 'block', 'big' and 'pick up' (Win71). Searle's argument can be read, in part, as a claim that these suggestive labels did not in themselves impart true understanding of the concepts involved - SHRDLU could parse "pick up a big red block" and respond with an action that seems appropriate, but could not understand those concepts in a more general environment. The decline of GOFAI since the 1980s cannot be claimed as vindication of Searle's approach, but it at least backs up his intuition that these early AI designers were missing something.
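The point can be made concrete with a toy sketch (a hypothetical illustration, not SHRDLU's actual implementation; the 'world' table, vocabulary and rules below are invented for the example). The program answers "pick up a big red block" with a sensible-looking action purely by matching strings against a hand-coded table - the kind of processing Searle would call syntactic rather than semantic:

```python
# A minimal, hypothetical sketch of purely syntactic symbol manipulation,
# loosely in the spirit of GOFAI blocks-world programs (not SHRDLU's actual code).
# The labels 'red', 'big' and 'block' are just strings matched against a
# hand-coded table; nothing here grounds them in anything outside the program.

WORLD = [
    {"id": "b1", "type": "block", "size": "big", "colour": "red"},
    {"id": "b2", "type": "block", "size": "small", "colour": "green"},
    {"id": "p1", "type": "pyramid", "size": "big", "colour": "red"},
]

def parse(command: str) -> dict:
    """Pick out known tokens from the command by simple string matching."""
    tokens = command.lower().split()
    return {
        "action": "pick_up" if "pick" in tokens else None,
        "size": next((t for t in tokens if t in ("big", "small")), None),
        "colour": next((t for t in tokens if t in ("red", "green")), None),
        "type": next((t for t in tokens if t in ("block", "pyramid")), None),
    }

def respond(command: str) -> str:
    """Return a plausible-sounding reply without any semantic grounding."""
    query = parse(command)
    if query["action"] != "pick_up":
        return "I don't know how to do that."
    for obj in WORLD:
        if all(query[key] in (None, obj[key]) for key in ("size", "colour", "type")):
            return f"OK: picking up {obj['id']}."
    return "I can't find such an object."

print(respond("pick up a big red block"))  # -> OK: picking up b1.
```

In this sketch the program treats 'red' as an uninterpreted token to be matched, exactly as it would treat an arbitrary label like 'X17'; the suggestive names do no work beyond string matching, which is the intuition Searle's critique trades on.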
Another falsifiable prediction can be extracted, not from the article but from the intuitions supporting it. If formal machines do not demonstrate understanding, but brains (or brain-like structures) do, this would lead to certain scenario predictions. Suppose two teams were competing to complete an AI that would pass the Turing test. One team was using standard programming techniques on a computer; the other was building it out of brain (or brain-like) components. Apart from this, there is no reason to prefer one team over the other.
According to Searle's intuition, any AI made by the first project will not demonstrate true understanding, while those of the second project may. Adding the reasonable assumption that it is harder to simulate understanding if one doesn't actually possess it, one is led to the prediction that the second team is more likely to succeed.
Thus there are three predictions that can be extracted from the Chinese room paper:
- Philosophical progress in understanding the syntactic-semantic gap may help towards designing better AIs.
- GOFAI's proponents incorrectly attribute understanding and other high-level concepts to simple symbol-manipulating machines, and will not succeed with their approach.
- An AI project that uses brain-like components is more likely to succeed (everything else being equal) than one based on copying the functional properties of the mind.
Therefore one can often extract predictions from even the most explicitly anti-predictive philosophy of AI paper.
References:
- [Arm] Stuart Armstrong. General purpose intelligence: arguing the orthogonality thesis. In preparation.
- [ASB12] Stuart Armstrong, Anders Sandberg, and Nick Bostrom. Thinking inside the box: Controlling and using an Oracle AI. Minds and Machines, 22:299-324, 2012.
- [BBJ+03] S. Bleich, B. Bandelow, K. Javaheripour, A. Muller, D. Degner, J. Wilhelm, U. Havemann-Reinecke, W. Sperling, E. Ruther, and J. Kornhuber. Hyperhomocysteinemia as a new risk factor for brain shrinkage in patients with alcoholism. Neuroscience Letters, 335:179-182, 2003.
- [Bos13] Nick Bostrom. The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. forthcoming in Minds and Machines, 2013.
- [Cre93] Daniel Crevier. AI: The Tumultuous Search for Artificial Intelligence. NY: BasicBooks, New York, 1993.
- [Den91] Daniel Dennett. Consciousness Explained. Little, Brown and Co., 1991.
- [Deu12] D. Deutsch. The very laws of physics imply that artificial intelligence must be possible. What's holding us up? Aeon, 2012.
- [Dre65] Hubert Dreyfus. Alchemy and AI. RAND Corporation, 1965.
- [eli66] Joseph Weizenbaum. ELIZA - a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9:36-45, 1966.
- [Fis75] Baruch Fischhoff. Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1:288-299, 1975.
- [Gui11] Erico Guizzo. IBM's Watson Jeopardy computer shuts down humans in final game. IEEE Spectrum, 17, 2011.
- [Hal11] J. Hall. Further reflections on the timescale of AI. In Solomonoff 85th Memorial Conference, 2011.
- [Han94] R. Hanson. What if uploads come first: The crack of a future dawn. Extropy, 6(2), 1994.
- [Har01] S. Harnad. What's wrong and right about Searle's Chinese room argument? In M. Bishop and J. Preston, editors, Essays on Searle's Chinese Room Argument. Oxford University Press, 2001.
- [Hau85] John Haugeland. Artificial Intelligence: The Very Idea. MIT Press, Cambridge, Mass., 1985.
- [Hof62] Richard Hofstadter. Anti-intellectualism in American Life. 1962.
- [Kah11] D. Kahneman. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.
- [KL93] Daniel Kahneman and Dan Lovallo. Timid choices and bold forecasts: A cognitive perspective on risk taking. Management science, 39:17-31, 1993.
- [Kur99] R. Kurzweil. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Viking Adult, 1999.
- [McC79] J. McCarthy. Ascribing mental qualities to machines. In M. Ringle, editor, Philosophical Perspectives in Artificial Intelligence. Harvester Press, 1979.
- [McC04] Pamela McCorduck. Machines Who Think. A. K. Peters, Ltd., Natick, MA, 2004.
- [Min84] Marvin Minsky. Afterword to Vernor Vinge's novel "True Names". Unpublished manuscript, 1984.
- [Moo65] G. Moore. Cramming more components onto integrated circuits. Electronics, 38(8), 1965.
- [Omo08] Stephen M. Omohundro. The basic AI drives. Frontiers in Artificial Intelligence and Applications, 171:483-492, 2008.
- [Pop] Karl Popper. The Logic of Scientific Discovery. Mohr Siebeck.
- [Rey86] G. Rey. What's really going on in Searle's Chinese room. Philosophical Studies, 50:169-185, 1986.
- [Riv12] William Halse Rivers. The disappearance of useful arts. Helsingfors, 1912.
- [San08] A. Sandberg. Whole brain emulations: a roadmap. Future of Humanity Institute Technical Report, 2008-3, 2008.
- [Sea80] J. Searle. Minds, brains and programs. Behavioral and Brain Sciences, 3(3):417-457, 1980.
- [Sea90] John Searle. Is the brain's mind a computer program? Scientific American, 262:26-31, 1990.
- [Sim55] H.A. Simon. A behavioral model of rational choice. The quarterly journal of economics, 69:99-118, 1955.
- [Tur50] A. Turing. Computing machinery and intelligence. Mind, 59:433-460, 1950.
- [vNM44] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton, NJ, Princeton University Press, 1944.
- [Wal05] Chip Walter. Kryder's law. Scientific American, 293:32-33, 2005.
- [Win71] Terry Winograd. Procedures as a representation for data in a computer program for understanding natural language. MIT AI Technical Report, 235, 1971.
- [Yam12] Roman V. Yampolskiy. Leakproofing the singularity: artificial intelligence confinement problem. Journal of Consciousness Studies, 19:194-214, 2012.
- [Yud08] Eliezer Yudkowsky. Artificial intelligence as a positive and negative factor in global risk. In Nick Bostrom and Milan M. Ćirković, editors, Global catastrophic risks, pages 308-345, New York, 2008. Oxford University Press.
36 comments
comment by Viliam_Bur · 2013-03-13T16:00:39.650Z · LW(p) · GW(p)
SHRDLU could parse "pick up a big red block" and respond with an action that seems appropriate, but could not understand those concepts in a more general environment.
Could the same thing also be true for humans, except that we have a much larger environment in which we can function properly?
When we discuss the many-worlds interpretation, the meaning of probability, whether atoms have identity, et cetera... a superintelligence could observe our attempts and say: "Humans have some symbols hardcoded by evolution, and they can also do some simple reasoning about them, but when they get too far away from their natural environment, they are completely lost."
comment by Izeinwinter · 2013-03-14T17:02:06.544Z · LW(p) · GW(p)
The Chinese room is philosophical malpractice.
Short demonstration. Searle runs on a brain. A maximally minimized alien exploratory probe - smaller than a brain cell - decides that his head would be an excellent location to hang out and study human culture, so it drills a hole in his blood-brain barrier and hollows out a brain cell. In order not to interfere with his thinking, the probe reads incoming signals to this - now entirely hollow and non-functional - cell and sends the correct response. Of course, even with femto-tech, you cannot do much social studies based out of a probe smaller than a brain cell. So the probe spreads out and hollows out the cells in most of Searle's cerebrum so that it can decompress machinery and expert processes from its archives. Each and every cell in Searle's brain thus ends up containing alien social studies, xeno-ology, etc., etc. majors. All of whom - as an utterly minute part of their background processes - shunt along the neural signals that make up Searle's cognition. And none of whom pay the slightest bit of attention to doing so.
At what point during this process did Searle - who did not notice any of this - stop being a conscious entity?
Yes. The above is very silly. So is the original argument. He inserts an obviously conscious actor performing a mechanical sub-task in a cognitive system, and then argues that because the task this actor performs is obviously not thinking, nothing the entire system does can be. Uhm. No.
Replies from: OrphanWilde, Stuart_Armstrong
↑ comment by OrphanWilde · 2013-03-14T20:22:22.227Z · LW(p) · GW(p)
Dissolving the Chinese Room Experiment teaches you a heck of a lot about what you're intending to do.
You've just demonstrated that the experiment is flawed - but you haven't actually demonstrated -why- it is flawed. Don't just prove the idea wrong, dissolve it, figure out exactly where the mistake is made.
You'll see that it, in fact, does have considerable value to those studying AI.
Schroedinger's Cat has a lot of parallels to the Chinese Room Experiment; they both represent major hurdles to understanding, to truly dissolving the problem you intend to understand. Unfortunately a lot of people stop there, and think that the problem, as posed, represents some kind of understanding in itself.
Replies from: AspiringRationalist, shminux
↑ comment by NoSignalNoNoise (AspiringRationalist) · 2013-03-15T04:02:57.525Z · LW(p) · GW(p)
The Chinese room argument is wrong because it fails to account for emergence. A system can possess properties that its components don't; for example, my brain is made of neurons that don't understand English, but that doesn't mean my brain as a whole doesn't. The same argument could be applied to the Chinese room.
The broader failure is assuming that things that apply to one level of abstraction apply to another.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-04-12T20:12:25.227Z · LW(p) · GW(p)
A system can possess properties that its components don't;
But a computational system can't be mysteriously emergent. Your response is equivalent to saying that semantics is constructed, reductionistically, out of syntax. How?
↑ comment by Shmi (shminux) · 2013-03-14T20:48:34.715Z · LW(p) · GW(p)
Schroedinger's Cat has a lot of parallels to the Chinese Room Experiment
...except, unlike the Chinese room one, it is not a dissolved problem; it's a real open problem in physics.
Replies from: OrphanWilde
↑ comment by OrphanWilde · 2013-03-14T21:26:26.036Z · LW(p) · GW(p)
I think you do not fully understand the idea if you regard it as an open problem. It hints and nudges and points at an open problem (within a single interpretation of quantum physics, one of declining popularity), which is where dissolution comes in, but in itself it is not an open problem, nor is resolution of that open problem necessary to its dissolution. At best it suggests that that interpretation of quantum physics is absurd, in the "This conflicts with every intuition I have about the universe" sense.
Outside the domain of that interpretation, it maintains the ability to be dissolved for understanding, although it doesn't say much of meaning about the intuitiveness of physics any longer.
Or, in other words: If you think that Schroedinger's Cat is an open problem in physics, you've made the basic mistake I alluded to before, in that thinking that the problem as posed represents an understanding. The understanding comes from dissolving it; without that step, it's just a badly misrepresented meme.
Replies from: TheAncientGeek, shminux
↑ comment by TheAncientGeek · 2015-04-12T20:17:39.502Z · LW(p) · GW(p)
The Cat has as many solutions as there are interpretations of QM, and most are counterintuitive. The Cat is an open problem, inasmuch as we do not know which solution is correct.
↑ comment by Shmi (shminux) · 2013-03-14T21:30:06.909Z · LW(p) · GW(p)
Feel free to dissolve it then without referring to interpretations. As far as I can tell, you will hit the Born rule at some point, which is the open problem I was alluding to.
Replies from: OrphanWilde
↑ comment by OrphanWilde · 2013-03-14T21:40:31.373Z · LW(p) · GW(p)
Born's Rule is a -bit- beyond the scope of Schroedinger's Cat. That's a bit like saying the Chinese Room Experiment isn't dissolved because we haven't solved the Hard Problem of Consciousness yet. [ETA: Only more so, because the Hard Problem of Consciousness is what the Chinese Room Experiment is pointing its fingers and waving at.]
Replies from: randallsquared
↑ comment by randallsquared · 2013-03-17T16:44:02.859Z · LW(p) · GW(p)
But it's actually true that solving the Hard Problem of Consciousness is necessary to fully explode the Chinese Room! Without having solved it, it's still possible that the Room isn't understanding anything, even if you don't regard this as a knock against the possibility of GAI. I think the Room does say something useful about Turing tests: that behavior suggests implementation, but doesn't necessarily constrain it. The Giant Lookup Table is another, similarly impractical, argument that makes the same point.
Understanding is either only inferred from behavior, or actually a process that needs to be duplicated for a system to understand. If the latter, then the Room may speak Chinese without understanding it. If the former, then it makes no sense to say that a system can speak Chinese without understanding it.
Replies from: OrphanWilde
↑ comment by OrphanWilde · 2013-03-19T14:37:14.619Z · LW(p) · GW(p)
Exploding the Chinese Room leads to understanding that the Hard Problem of Consciousness is in fact a problem; its purpose was to demonstrate that computers can't implement consciousness, which it doesn't actually do.
Hence my view that it's a useful idea for somebody considering AI to dissolve, but not necessarily a problem in and of itself.
↑ comment by Stuart_Armstrong · 2013-03-15T09:05:56.962Z · LW(p) · GW(p)
Or another counterargument: http://lesswrong.com/lw/ghj/searles_cobol_room/
comment by V_V · 2013-03-15T03:14:38.448Z · LW(p) · GW(p)
Searle's argument applies to arbitrary computable AIs, not just GOFAI.
I don't think the argument leads to any falsifiable prediction, you are stretching it beyond its scope.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2013-03-15T08:58:28.942Z · LW(p) · GW(p)
I don't think the argument leads to any falsifiable prediction, you are stretching it beyond its scope.
Yes, that's what I was doing. I was trying to find something falsifiable from the intuitions behind it. A lot of thought and experience went into those intuitions; it would be useful to get something out of them, when possible.
Replies from: V_V
↑ comment by V_V · 2013-03-15T21:48:02.387Z · LW(p) · GW(p)
Ok, but keep in mind that hindsight bias is hard to avoid when making predictions about the past whose outcome you already know.
Since we already know that GOFAI approaches hit diminishing returns and didn't get anywhere close to providing human-level AI, it might be tempting to say that Searle was addressing GOFAI.
But GOFAI failed because of complexity issues: both in creating explicit formal models of common knowledge and in doing inference on these formal models due to combinatorial explosion. Searle didn't refer to complexity, hence I don't think his analysis was relevant in forecasting the failure of GOFAI.
↑ comment by Stuart_Armstrong · 2013-03-18T10:32:58.218Z · LW(p) · GW(p)
I didn't say they were good falsifiable predictions - just that they were there. And it was a cogent critique that people were using misleading terms which implied their programs had more implicit capacity than they actually did.
Replies from: majorcornwallace
↑ comment by majorcornwallace · 2013-07-06T23:54:45.090Z · LW(p) · GW(p)
I think this article is interesting--but not thanks to Searle.
Essentially what we're seeing here is the Parable Effect--you can endlessly retell a story from new points of view. I'm NOT suggesting this is a horrible thing, as anything that makes you think is, in my opinion, a good step. This is my interpretation of what Stuart is doing when mining "Searle's intuitions".
The weakness of parables, though, is that they are entirely impressionistic. This is why I give more credit to Stuart in his exploration than I do to "The Chinese Room". The CR parable is technically horrifically flawed. Also, the fact that it points not to specific issues with AI but rather to vague ones is another example of how a parable may indicate a broad swath of observation but doesn't actually contain useful detail.
Obvious Problem is Obvious
Even at the time of Searle's logical analysis, most computer scientists entering the field understood they were up against a huge wall of complexity when it came to AI. I think the mistakes were not so much in realizing that an IBM 360 or, for that matter, a smaller circuit was going to contain a "brain"--the mistakes were in trying to assume what the basic building blocks were going to be.
Because the actual weaknesses in approach are empirical in nature, the actual refinements over time in AI research are not about philosophical impossibilities but rather just data and theoretics. Dennett, as an example, tries to frame computing theory so as to make the theoretics more understandable. He does this rather than a priori deducing the actual nature of computing's future (sans hypothesis).
So, while I'll agree that anyone can use any narrative as a kickstarter to thinking, the value of the original narrative is going to lie not just in where the narrative is "inspired from" but also in the details of actual empirical relevance involved. This is a stark contrast between, say, Schrodinger's Cat and the Chinese Room. The robustness of one is immensely higher.
The flip-side is that AI researchers can easily ignore the Chinese Room entirely without risk of blundering. The parable actually doesn't provide anything on the order of guidance that Searle seems to suggest it does.
comment by DanArmak · 2013-03-14T21:12:17.801Z · LW(p) · GW(p)
Adding the reasonable assumption that it is easier to harder to simulate understanding if one doesn't actually possess it
One of these is wrong.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2013-03-15T08:57:21.826Z · LW(p) · GW(p)
Oops, thanks! One of those is no longer wrong, because it's no longer there.
comment by carsonmcneil · 2015-04-10T02:54:24.455Z · LW(p) · GW(p)
Hm, I have a lot of problems with Searle's argument. But even if you skip over all of the little issues, such as "The Turing Test is not a reasonable test of conscious experience", I think his biggest flaw is this assumption:
The intuition that the Chinese room follows a purely syntactic (symbol-manipulating) process rather than a semantic (understanding) one is a correct philosophical judgement.
If you begin with the theory that consciousness arises from information-theoretic properties of a computation (such as Koch and Tononi's Integrated Information Theory), then while you may reach some unintuitive conclusions, you certainly don't reach any contradiction, meaning that Searle's argument is not at all a sufficient disproof of AI's conscious experience. Instead, you simply hit the conclusion that for some implementations of rulesets, the human-ruleset system IS conscious, and DOES understand Chinese, in the same sense that a native speaker does. I think we can undo the intuition scrambling by stating that the ruleset is analogous to a human brain, and the human carrying out the mindless computation is analogous to the laws of physics themselves. Do we demand that "the laws of physics" understand Chinese in order to say that a human does? Of course not. So why does it make sense to demand that the human (who, in the Chinese room, is really playing the same role as physics) understand Chinese in order to believe that the room-human system does?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2015-04-10T11:25:20.095Z · LW(p) · GW(p)
So why does it make sense to demand that the human (who, in the Chinese room, is really playing the same role as physics) understand Chinese in order to believe that the room-human system does?
It doesn't.
I think the argument obscures what might be a genuine point, which I look at here: http://lesswrong.com/lw/lxi/hedoniums_semantic_problem/
comment by TheAncientGeek · 2015-04-12T20:01:32.885Z · LW(p) · GW(p)
in the opinion of one of the authors, the gap can be explained by positing that humans are purely syntactic beings, but ones that have been selected by evolution so that their mental symbols correspond with real-world objects and concepts -
Finding that difficult to process. Is the correspondence supposed to be some sort of coincidental, occasionalistic thing? But why shouldn't a naturalist appeal to causation to ground symbols?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2015-04-13T09:53:33.782Z · LW(p) · GW(p)
The causation would go as "beings with well grounded mental symbols are destroyed less often by the universe; beings with poorly grounded mental symbols are destroyed very often".
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-04-13T10:32:51.485Z · LW(p) · GW(p)
That's predictive accuracy. You can have predictive accuracy whilst badly misunderstanding the ontology of your perceived world. In fact, you can have predictive accuracy - doing the life-preserving thing in a given situation - without doing anything recognisable as symbolic processing. And getting the ontology right is the more intuitive expansion of grounding a symbol, out of the options.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2015-04-13T11:28:39.169Z · LW(p) · GW(p)
The more complex your model, and the more complex reality is, the closer the correspondence between them, and the more your internal model acts as if it is "learning something" (making incorrect predictions, processing the data, then making better ones), the less scope there is for your symbols to be ungrounded.
It's always possible, but the level of coincidence needed to have the wrong model that behaves exactly the same as the right one is huge. And, I'd say, having the wrong model that gives the right predictions is just the same as having the right model with randomised labels. And since the labels are pretty meaningless anyway...
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-04-14T18:56:20.680Z · LW(p) · GW(p)
The more complex your model, and the more complex reality is, the closer the correspondence between them, and the more your internal model acts as if it is "learning something" (making incorrect predictions, processing the data, then making better ones), the less scope there is for your symbols to be ungrounded.
That seems to merely assert what I was arguing against... I was arguing that predictive accuracy is orthogonal to ontological correctness... and that grounding is to do with ontological correctness.
It's always possible, but the level of coincidence needed to have the wrong model that behaves exactly the same as the right one is huge.
Right and wrong don't have a univocal meaning here. A random model will have poor predictive accuracy, but you can still have two models with equivalent predictive accuracy and different ontological implications.
And, I'd say, having the wrong model that gives the right predictions is just the same as having the right model with randomised labels.
You seem to be picturing a model as a graph with labelled vertices, and assuming that two equally good models must have the same structure. That is not so.
For instance, the Ptolemaic system can be made as accurate as you want for generating predictions, by adding extra epicycles ... although it is false, in the sense of lacking ontological accuracy, since epicycles don't exist.
Another way is to notice that ontological revolutions can make merely modest changes to predictive abilities. Relativity inverted the absolute space and time of Newtonian physics, but its predictions were so close that subtle experiments were required to distinguish the two.
In that case, there is still a difference in empirical predictiveness. In the extreme case there is not: you can have two ontologies that always make the same predictions, the one being dual to the other. An example is wave-particle duality in quantum mechanics.
The fourth way is based on sceptical hypotheses, such as Brain in a Vat and Simulated Reality. Sceptical hypotheses can be rejected, for instance by appeals to Occam's Razor, but they cannot be refuted empirically, since any piece of empirical evidence is subject to sceptical interpretation. Occam's Razor is not empirical.
Science conceives of perception as based in causation, and causation as being comprised of chains of causes and effects, with only the ultimate effect, the sensation evoked in the observer, being directly accessible to the observer. The cause of the sensation, the other end of the causal chain, the thing observed, has to be inferred from the sensation, the ultimate effect -- and it cannot be inferred uniquely, since, in general, more than one cause can produce the same effect. A further proxy can always be inserted into a series of proxies. All illusions, from holograms to stage conjuring, work by producing the effect, the percept, in an unexpected way. A BIV or Matrix observer would assume that the percept of a horse is caused by a horse, but it would actually be caused by a mad scientist pressing buttons.
A BIV or Matrix inhabitant could come up with science that works, that is useful, for many purposes, so long as their virtual reality had some stable rules. They could infer that dropping an (apparent) brick onto their (apparent) foot would cause pain, and so on. It would be like the player of a computer game being skilled in the game, knowing its internal physics. The science of the Matrix inhabitants would work, in a sense, but the workability of their science would be limited to relating apparent causes to apparent effects, not to grounding causes and effects in ultimate reality. But empiricism cannot tell us that we are not in the same situation.
In the words of Werner Heisenberg (Physics and Philosophy, 1958) "We have to remember that what we observe is not nature herself, but nature exposed to our method of questioning"
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2015-04-14T19:03:01.826Z · LW(p) · GW(p)
We don't seem to be disagreeing about anything factual. You just want grounding to be in "the fundamental ontology", while I'm content with symbols being grounded in the set of everything we could observe. If you like, I'm using Occam or simplicity priors on ontologies; if there are real objects behind the ones we can observe but we never know about them, I'd still count our symbols as grounded. (That's why I'd count virtual Napoleon's symbols as being grounded in virtual Waterloo, incidentally.)
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-04-14T19:34:17.528Z · LW(p) · GW(p)
Being relatively liberal about symbol grounding makes it easier to answer Searle, but harder to answer other people, such as people who think germs or atoms are just social constructs.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2015-04-14T19:45:05.702Z · LW(p) · GW(p)
but harder to answer other people, such as people who think germs or atoms are just social constructs.
What predictions do they make when looking into microscopes or treating infectious diseases?
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-04-14T21:05:07.289Z · LW(p) · GW(p)
Exactly the same... that is the point of predictive accuracy being orthogonal to ontological accuracy... you can vary the latter without affecting the former.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2015-04-17T15:08:10.207Z · LW(p) · GW(p)
"just social constructs" is (almost always) not a purely ontological statement, though. And those who think that it's a social construct, but that the predictions of germ theories are still accurate... well, it doesn't really matter what they think, they just seem to have different labels to the rest of us for the same things.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-04-18T09:30:22.312Z · LW(p) · GW(p)
As the author of the phrase, I meant "just social constructs" to be an ontological statement.
Are you saying they are actually realists about germs and atoms, and are stating their position dishonestly? Do you think "is real" is just a label in some unimportant way?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2015-04-20T10:58:08.256Z · LW(p) · GW(p)
Do you think "is real" is just a label in some unimportant way?
Maybe. I'm not entirely sure what your argument is. For instance, were the matrices of the matrix-mechanics formulation of quantum physics "real"? Were the waves of the wave formulation of QM "real"? The two formulations are equivalent, and it doesn't seem useful to debate the reality of their individual idiosyncratic components this way.
comment by Ariel Reinheimer (ariel-reinheimer) · 2019-03-28T12:58:41.491Z · LW(p) · GW(p)
No divergent opinions here, just a large echo chamber ("Searle is committing philosophical malpractice"). LW in a microcosm.
Searle is a serious figure who has rock-solid foundations in the areas of cognition and language. The quote above reflects a certain boorishness that is very much present in the rationalist "community."
Replies from: jimrandomh
↑ comment by jimrandomh · 2019-03-28T20:57:31.636Z · LW(p) · GW(p)
Welcome to LessWrong! Generally speaking, we strongly prefer comments that address arguments directly, rather than talking about people and qualifications. That said, this is quite an old post, so it's probably too late to get much further discussion on this particular paper.