The New Riddle of Induction: Neutral and Relative Perspectives on Color
post by Anders_H · 2017-12-02T16:15:08.912Z · LW · GW · 8 comments
Nelson Goodman's "New Riddle of Induction" has previously been discussed on Less Wrong at http://lesswrong.com/lw/8fq/bayes and http://lesswrong.com/lw/mbr/grue . Briefly, the riddle shows that any attempt to make future predictions by generalizing from past observations may depend on arbitrary aspects of the reasoner's language. This is illustrated using the proposed time-dependent color predicates "grue" and "bleen".
In this article, I propose that the resolution to this apparent paradox lies in recognizing that a neutral perspective exists, and that while an agent cannot know with certainty whether their perspective is neutral, they can assign significantly higher credence to its being neutral, because evolution had no reason to introduce an arbitrary time-dependent term into their color-detection algorithm. I am unsure whether the argument is novel, but as far as I can tell, this particular solution is not discussed in any of the previous literature I have consulted.
(Note: This article was originally written as coursework, and therefore contains an extended summary of Goodman's original paper and of previous discussion in the academic literature. Readers who are familiar with the previous literature are encouraged to skip to the section "The Grue Sleeping Beauty Problem")
The Problem of Induction and Its Dissolution
In a series of lectures published as the 1954 book "Fact, Fiction, and Forecast", Nelson Goodman argued that Hume's traditional problem of induction has been "dissolved", and instead described a different problem that confounds our efforts to infer general facts from limited observations. Before turning to this new problem - which Goodman termed "The New Riddle of Induction" - I briefly discuss what Goodman means when he asserts that the original problem of induction has been "dissolved".
In Goodman's view, it is true that there can be no necessary relationship between past and future observations, and it is therefore futile to expect logical certainty about any prediction. In his view, this is all there is to the problem of induction: If what you want from an inductive procedure is a logical guarantee about your prediction, then the problem of induction illustrates why you cannot have it, and it is therefore futile to spend philosophical energy worrying about knowledge or certainty that we know we can never have.
Goodman therefore argues that the real problem of induction is rather how to distinguish strong inferences from weak ones, in the sense that strong inferences are those which a reasonable person would accept, even in the absence of logical guarantees. In other words, he is interested in describing rules for a system of inference, in order to formalize the intuitions that determine whether we consider any particular prediction to be a reasonable inference from the observed data.
He models his approach to induction on a narrative about how the rules of deduction were (or continue to be?) developed. In his view, this occurs through a feedback loop or "virtuous cycle" between the inferential rules and their conclusions, such that the rules are revised if they lead to inferences we are unwilling to accept as logically valid, and such that our beliefs about the validity of an inference are revised if we are unwilling to revise the rules. Any attempt at deductive inference can then be judged by its adherence to the rules; and if a situation arises where our intuition about the validity of a particular inference conflicts with the rules, we have to adjust one of the two. In Goodman's view, a similar approach should be, and is, used to continuously improve a set of rules that capture human beliefs about which generalizations tend to produce good predictions, and which tend to fail.
However, as we will see, even if we do not seek the kind of certainty that is ruled out by the lack of logical guarantees and necessary connections, and instead follow Goodman in focusing our attention on distinguishing "good" generalizations from bad ones, important obstacles arise from the fact that our inferential mechanisms are unavoidably shaped by our language, in the sense that the predictions depend on seemingly arbitrary features of how the basic predicates are constructed.
The New Riddle of Induction: Goodman's Argument
Goodman's goal is to describe the rules that determine whether inferring s2 from s1 is inductively "valid", in the sense that a reasonable person would consider the truth of s2 to be "confirmed" by the truth of s1 even without an argument that establishes s1 --> s2 by deductive logic. In particular, we are interested in situations where s1 and s2 are very similar statements: each attributes the same property to some set of elements/objects, but s2 refers to a larger, more general set (i.e. the set of elements referred to by s2 is obtained by relaxing/expanding the set referred to by s1). Goodman adopts Hempel's view that induction is described by a relation R over these statements, such that s1Rs2 holds if and only if s2 is "confirmed" by s1. The goal is then to describe the conditions that R must meet in order for a reasonable person to conclude that s1Rs2.
In order to demonstrate that this is more difficult than it sounds, Goodman presents the "New Riddle Of Induction". In this thought experiment, he posits that we have observed several fine stones, and have noted that all the emeralds we have seen so far have been green. Our goal is to determine whether we would be justified in believing that all emeralds are green, or at least in predicting that the next emerald we see will be green. Let us suppose we have seen enough green emeralds to be confident that the next emerald will also be green; and we have checked that the procedure we followed to reach this conclusion meets all the rules of our inductive framework.
However, we have a friend who speaks a different language, which does not have a word for "green". Instead, they have a word, "grue", which translates as "green before time t, blue after time t". This language also has a word, "bleen", which translates as "blue before time t, green after time t".
Since our friend saw the same emeralds as us, and since they were all observed before time t, he has correctly observed that all the emeralds he has seen so far have been grue. Moreover, since he is following the same inductive rules as us, he predicts that the next emerald, to be observed after time t, will also be grue. Because we are following the same inductive rules, his prediction appears to rest on an inference just as strong as ours. However, at most one of us can be correct. This raises an important challenge to the very idea of reasoning about whether an attempted inference is valid: any set of rules, even when adhered to strictly, can lead to different predictions depending on how the basic predicates are defined. In fact, the argument can be generalized to show that for any possible future prediction, there exists a "language" such that inductive reasoning based on that language yields that prediction.
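To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not Goodman's; the cutoff T and the observation times are arbitrary placeholders) showing that both predicates fit every past observation equally well while licensing opposite predictions about emeralds observed after t:

```python
T = 1_000_000  # hypothetical cutoff time t, chosen arbitrarily for illustration

def is_green(observed_color, observation_time):
    # "Green" ignores the observation time entirely.
    return observed_color == "green"

def is_grue(observed_color, observation_time):
    # "Grue": green if observed before t, blue if observed after t.
    if observation_time < T:
        return observed_color == "green"
    return observed_color == "blue"

# Every emerald observed so far (all before T) satisfies both predicates,
# so both generalizations are equally "confirmed" by the data...
past_observations = [("green", time) for time in range(10)]
print(all(is_green(c, t) for c, t in past_observations))  # True
print(all(is_grue(c, t) for c, t in past_observations))   # True

# ...yet the two generalizations disagree about an emerald that is still green after T.
print(is_green("green", T + 1))  # True
print(is_grue("green", T + 1))   # False: "grue" now demands blue
```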
Goodman goes on to argue that the basic problem is one of distinguishing "lawlike" statements from "accidental" statements, where lawlike statements are those that relate to classes of objects such that either all members have the property in question, or none do. Lawlike predicates can then be expected to be projectable, in the sense that induction based on them leads to valid extrapolation. However, this move does not really resolve the issue: essentially, it just shifts the discussion down one step, to the question of whether a statement is lawlike, which cannot be known unless one has information that would only be available after successful inductive inference.
Can The Riddle Be Adequately Solved?
An adequate solution to Goodman's riddle would require a principled way to distinguish "projectable" predicates from non-projectable ones - a way to "cut reality at the joints" and establish natural, lawlike predicates for use in inductive reasoning.
Rudolf Carnap suggests that one can distinguish lawlike statements from accidental statements, because lawlike statements are "purely qualitative and general", in the sense that they do not restrict the spatial or temporal position of the objects that the predicates refer to. Goodman considers and rejects this argument, because from the perspective of someone who thinks in terms of predicates such as grue, grue is time-stable and green is the time-dependent predicate. Therefore, temporality itself can only be defined relative to a given perspective, and there seems to be no obvious way to give priority to one over the other.
One potential line of argument in favor of giving priority to green over grue has been suggested by several writers, who observe that the time "t" is seemingly chosen arbitrarily, and that there are infinitely many versions of "grue(t)", one for each possible time t, with no reason to prefer one over the other. This argument also suffers from the problem that a "grueist" can take the same approach, and point out that there are infinitely many possible versions of "green(t)", one for each time t at which green transitions from being grue to being bleen.
In fact, this is a common theme in most arguments attempting to resolve the problem: they can generally be restated from the grueist perspective and used to reach the opposite conclusion. Goodman therefore holds that it is impossible to distinguish lawlike statements from accidental statements on grounds of their generality. The best we can do is ask whether the statements are based on predicates that are "entrenched", in the sense that they have in the past proved successful at producing accurate predictions. Such a track record then provides some support for a tentative belief that the predicate has caught onto some aspect of reality that is governed by lawlike phenomena.
The Grue Sleeping Beauty Problem
Swinburne (1968) sees an asymmetry between the predicates green and grue, which arises from the fact that an individual can judge whether an object has the property "green" even if they do not know what time it is, and argues that this asymmetry can be exploited to give priority to green over grue. I find this argument persuasive but incomplete, and will therefore discuss it in detail, in a slightly altered form.
Consider the following thought experiment, which is not due to Swinburne but which I believe illustrates his argument: An evil turtle, Bowser, has abducted Grue Sleeping Beauty, the princess of the Kingdom of Grue, in order to perform experiments on her. Specifically, he intends to give her a sleeping pill, and then flip a coin to randomly decide whether to wake her before or after time t. In the room, there will be a green emerald. Bowser plans to ask her what color the emerald is, without informing her about what time it is.
One possibility is that Grue Sleeping Beauty gets it right: she will experience the emerald as grue if she is woken before time t, and as bleen if she is woken after time t. However, this seems unlikely: it requires that the unspecified psychical phenomenon that produces color interacts with her qualia in a time-dependent manner even when she cannot know what time it is. The other options are that she gets it wrong - which seems like a big hit against the idea of "grue" - or that she experiences no qualia at all, which seems unlikely, since all our experience tells us that non-colorblind humans can, in general, identify the colors of objects.
Goodman might argue that this parable begs the question, by implicitly assuming that the emerald remains green from the experimenter's perspective. One imagines his response might be to reverse the thought experiment, and instead tell a story about a grueist student of philosophy who makes up an elaborate tale about abducting Green Sleeping Beauty. This line of reasoning is therefore incomplete, for the same reason as most other attempted solutions to the New Riddle of Induction. However, I believe the two versions of the parable have different implications, such that a reasonable person would assign much higher credence to the implications of the first version. To explain this, in the next section I provide a "patch" to Swinburne's argument.
Time Independence: Neutral and Relative Perspectives
Let us assume there is an underlying regular, lawlike phenomenon (in this case the wavelength of light reflected by emeralds), and that agents implement an algorithm which takes this phenomenon as input, and outputs a subjectively experienced color. A classifier algorithm is said to be time-independent if, for any particular wavelength, it outputs the same subjectively perceived color, regardless of time. From the perspective of an agent that implements any particular classifier algorithm, other classifier algorithms will appear relatively time-dependent if reference to time is needed to translate between their outputs.
I argue that there exists a classifier algorithm that is time-independent even from a neutral perspective: It is the one that simply ignores input on time. Therefore, a color classifier algorithm is time-independent by default, unless it actively takes time into account. Moreover, if two color classifier algorithms result in predicates that are time-dependent relative to each other, then at least one of the algorithms must contain a term that takes time into account, either directly or through contextual clues. Of course, this may be subconscious and hidden from the meta-level cognition of the agent implementing it, and the agent therefore has no direct way of knowing whether he is implementing an algorithm that is time-independent from a neutral perspective.
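A minimal sketch of these definitions (my own illustration; the wavelength boundary and the cutoff are made-up numbers): the first classifier ignores its time input and is therefore time-independent by default, the second contains an explicit time term, and translating between their outputs requires knowing the time, which is what makes the two predicates time-dependent relative to each other.

```python
T_CUTOFF = 1_000_000  # hypothetical, arbitrary time t

def neutral_classifier(wavelength_nm, observation_time):
    # Ignores its time argument: time-independent even from a neutral perspective.
    return "green" if wavelength_nm >= 490 else "blue"

def grueish_classifier(wavelength_nm, observation_time):
    # Contains a term that takes time into account: outputs swap after T_CUTOFF.
    base = neutral_classifier(wavelength_nm, observation_time)
    if observation_time < T_CUTOFF:
        return base
    return "blue" if base == "green" else "green"

def translate_neutral_to_grueish(color, observation_time):
    # Reference to time is needed to translate between the two outputs,
    # i.e. the predicates are time-dependent *relative to each other*.
    if observation_time < T_CUTOFF:
        return color
    return "blue" if color == "green" else "green"
```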
Both versions of the Grue/Green Sleeping Beauty thought experiment implicitly assume that the investigator is implementing a classification algorithm that is time-independent even from a neutral perspective. Since the neutral perspective is unique, at least one of them must be wrong. Finding out which one (if any) is right is an empirical question, but one that is unanswerable before time t, so any attempt to grant ourselves the necessary empirical observations would amount to begging the question.
But now consider an evolutionary environment where agents must choose whether to eat delicious blue berries or poisonous green berries. There is significant selective pressure to find an algorithm that outputs the same subjective qualia for any given wavelength. There is no reason at all for nature to put a term for time in this algorithm: to do so would be to add needless complexity, in the form of an arbitrary term that makes reference to the time t, which has no contextual meaning in the setting in which the algorithm operates and in which it is being optimized.
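As a toy illustration (entirely my own, with made-up numbers, and assuming the learned behavioural rule "eat what looks blue" is keyed to the perceived color and does not itself change at t), an algorithm that gratuitously includes a time term does strictly worse once its switch time passes, while gaining nothing before it:

```python
import random

T = 5_000  # arbitrary, hypothetical switch time that a grue-style algorithm must encode

def constant_qualia(wavelength_nm, time):
    # Same perceived color for a given wavelength at every time.
    return "blue" if wavelength_nm < 490 else "green"

def time_dependent_qualia(wavelength_nm, time):
    # Identical output before T, swapped output afterwards.
    base = constant_qualia(wavelength_nm, time)
    if time < T:
        return base
    return "green" if base == "blue" else "blue"

def survival_score(qualia_algorithm, trials=10_000):
    # Blue berries (~470 nm) are edible, green berries (~530 nm) are poisonous;
    # the fixed behavioural rule is "eat whatever looks blue".
    correct = 0
    for time in range(trials):
        wavelength = random.choice([470, 530])
        eats = qualia_algorithm(wavelength, time) == "blue"
        correct += eats == (wavelength == 470)
    return correct / trials

print(survival_score(constant_qualia))        # 1.0
print(survival_score(time_dependent_qualia))  # 0.5: wrong on every trial after T
```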
This points us to the central difference between the grue and green predicates: The fact that we, as real humans shaped by evolution, are implementing a particular classification algorithm is highly relevant evidence for it not containing arbitrary complications, relative to a hypothetical algorithm that, as far as we know, does not exist in any agent found in nature. The actual human algorithm is therefore more likely to ignore input on time, and more likely to be time-independent from the neutral perspective.
Note that even in the absence of a theory of optics, it is reasonable to argue that the subjective experience of color arises from some physical regularity interacting with our sensory system, and that this sensory system would be much simpler if it ignored time. Thus, the argument still holds even if the underlying physical phenomenon is poorly understood.
Conclusions
Nelson Goodman presents a riddle which illustrates how our predictions for future observations are, to some extent, functions of the seemingly arbitrary categorization scheme used in our mental representation of reality. A common theme between many of the suggested solutions to this riddle, is that they attempt to find an asymmetry between languages that use the predicate grue, and languages that use the predicate green. Such an asymmetry would have to be immune to predictive changes that arise when rephrasing the problem in the other language; it seems likely that no such asymmetry can be found on purely logical or semantic grounds.
Instead, I argue that one can bring in additional background beliefs in support of the conclusion that the reference frame implemented by humans is "neutral". In particular, the human color classifier algorithm was chosen by evolution, which had no reason to include a term for time. This licenses me to assign significantly higher credence to the belief that my representation scheme is neutral, and that hypothetical other classification algorithms that result in constructs such as grue are time-dependent even from a neutral perspective. In some sense, this line of reasoning could be interpreted as an extension of Goodman's original argument about entrenchment, but one that allows the entrenchment to have occurred in evolutionary history.
Despite the fact that a solution seems possible, the riddle remains important, since an agent can compensate for the uncertainty in his predictions that results from the reference frame of his predicates only if he has a clear understanding of the problems highlighted by Goodman.
8 comments
Comments sorted by top scores.
comment by DanB · 2017-12-02T17:52:42.010Z · LW(p) · GW(p)
Interesting analysis. I hadn't heard of Goodman before so I appreciate the reference.
In my view the problem of induction has been almost entirely solved by the ideas from the literature on statistical learning, such as VC theory, MDL, Solomonoff induction, and PAC learning. You might disagree, but you should probably talk about why those ideas prove insufficient in your view if you want to convince people (especially if your audience is up-to-date on ML).
One particularly glaring limitation of Goodman's argument is that it depends on natural-language predicates ("green", "grue", etc.). Natural language is terribly ambiguous and imprecise, which makes it hard to evaluate philosophical statements about natural-language predicates. You'd be better off casting the discussion in terms of computer programs that take a given set of input observations and produce an output prediction.
Of course you could write "green" and "grue" as computer functions, but it would be immediately obvious how much more contrived the program using "grue" is than the program using "green".
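For instance, a rough sketch of what each program must encode (placeholder numbers, and obviously not the only way to write them):

```python
# What the "green" program must encode: one wavelength band.
green_spec = {"band_nm": (495, 570)}

def green(wavelength_nm):
    lo, hi = green_spec["band_nm"]
    return lo <= wavelength_nm < hi

# What the "grue" program must encode: the same band, plus a clock input,
# plus an arbitrary cutoff constant that does no other work in the program.
grue_spec = {"band_nm": (495, 570), "cutoff_time": 1_000_000}

def grue(wavelength_nm, observation_time):
    if observation_time < grue_spec["cutoff_time"]:
        return green(wavelength_nm)
    return not green(wavelength_nm)
```

In this encoding, "grue" literally contains "green" as a sub-program plus extra machinery, which is the sense in which it looks more contrived.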
Replies from: Anders_H, ivan-1
↑ comment by Anders_H · 2017-12-02T18:56:46.422Z · LW(p) · GW(p)
In my view, "the problem of induction" is just a bunch of philosophers obsessing over the fact that induction is not deduction, and that you therefore cannot predict the future with logical certainty. This is true, but not very interesting. We should instead spend our energy thinking about how to make better predictions, and how we can evaluate how much confidence to have in our predictions. I agree with you that the fields you mention have made immense progress on that.
I am not convinced that computer programs are immune to Goodman's point. AI agents have ontologies, and their predictions will depend on that ontology. Two agents with different ontologies but the same data can reach different conclusions, and unless they have access to their source code, it is not obvious that they will be able to figure out which one is right.
Consider two humans who are both writing computer functions. Both the "green" and the "grue" programmer will believe that their perspective is the neutral one, and therefore write a simple program that takes light wavelength as input and outputs a constant color predicate. The difference is that one of them will be surprised after time t, when the computer suddenly starts outputting colors that differ from the programmer's experienced qualia. At that stage, we know which one of the programmers was wrong, but the point is that it might not be possible to predict this in advance.
Replies from: TAG
↑ comment by TAG · 2022-07-02T17:03:23.693Z · LW(p) · GW(p)
In my view, “the problem of induction” is just a bunch of philosophers obsessing over the fact that induction is not deduction, and that you therefore cannot predict the future with logical certainty.
Being able to make only probabilistic predictions (but understanding how that works) is one thing. Being able to make only probabilistic predictions, and not even understanding how that works, is another thing.
Replies from: Big Steve
↑ comment by Ivan (ivan-1) · 2021-02-06T02:38:30.348Z · LW(p) · GW(p)
comment by Big Steve · 2022-07-02T16:51:11.516Z · LW(p) · GW(p)
Your solution doesn't work, because you assume there is a lawlike phenomenon. Goodman wrote, "Plainly, then, we must look for a way of distinguishing lawlike from accidental statements." This means you can't just assume there is a lawlike phenomenon. Rather, you must offer some means of distinguishing lawlike from accidental phenomena.
Replies from: Anders_H
↑ comment by Anders_H · 2022-07-04T11:11:10.102Z · LW(p) · GW(p)
I don't think the existence of lawlike phenomena is controversial, at least not on this forum. Otherwise, how do you account for the remarkable patterns to our observations? Of course, it is not possible to determine what those phenomena are, but I don't think my solution requires this. It just requires that our sensory algorithm responds the same way every time.
comment by entirelyuseless2 · 2017-12-02T17:12:08.702Z · LW(p) · GW(p)
This seems obviously circular, since you depend on using induction based on human languages to conclude that humans were produced by an evolutionary process.
Replies from: Anders_H
↑ comment by Anders_H · 2017-12-02T17:20:29.841Z · LW(p) · GW(p)
I am not sure I fully understand this comment, or why you believe my argument is circular. It is possible that you are right, but I would very much appreciate a more thorough explanation.
In particular, I am not "concluding" that humans were produced by an evolutionary process, but rather using it as background knowledge. Moreover, this statement seems uncontroversial enough that I can bring it in as a premise without having to argue for it.
Since "humans were produced by an evolutionary process" is a premise and not a conclusion, I don't understand what you mean by circular reasoning.