That critique might sound good in theory, but I think it falls flat in practice. Hearsay is a rule with more than 30 exceptions, many of which seem quite technical and arbitrary. But I have seen no evidence that the public views legal systems that employ this sort of convoluted hearsay regime as less legitimate than legal systems that take a more naturalistic, Benthamite approach.
In practice, even laypeople who are participating in trials don't really see the doctrine that lies beneath the surface of evidentiary rulings, so I doubt they form their judgments of the system's legitimacy based on such details.
A few comments:
It is somewhat confusing (at least to legal readers) that you use legal terms in non-standard ways. Conflating confrontation with hearsay issues is particularly misleading, because making people available for cross-examination solves the confrontation problem but not always the hearsay one.
I like your emphasis on the filtering function of evidentiary rules. Keep in mind, however, that these rules have little effect in bench trials (which are more common than jury trials in state courts of general jurisdiction). And relatively few cases reach trial at all; more are disposed of by pretrial motions or by settlements. (For some data, you could check out this paper by Marc Galanter.) So this filtering process is only rarely applied in real-world cases!
Before suggesting that we should exclude evidence of low reliability, you should probably take more time to think about substitution effects. If lawyers cannot use multiply embedded hearsay, what will juries hear instead? Also, you would want to establish that juries would systematically err in their use of such evidence. It is not a problem to have unreliable evidence come in if juries in fact recognize its unreliability.
I've recently spent some time thinking about how we might apply the scientific method to the design of better rules of legal procedure and evidence. It turns out to be trickier than you might think, largely because it is hard to measure the impact of legal rules on the accuracy of case resolutions. If you are curious about such things (and with apologies for blatant self-promotion), you might want to read some of what I wrote here, particularly parts 2-4.
Good points.
This may be why very smart folks often find themselves unable to commit to an actual view on disputed topics, despite being better informed than most of those who do take sides. When attending to informed debates, we hear a chorus of disagreement, but very little overt agreement. And we are wired to conduct a head count of proponents and opponents before deciding whether an idea is credible. Someone who can see the flaws in the popular arguments, and who sees lots of unpopular expert ideas but few ideas that informed people agree on, may give up looking for the right answer.
The problem is that smart people don't give much credit to informed expressions of agreement when parceling out status. The heroic falsifier, or the proposer of the great new idea, gets all the glory.
Internal credibility is of little use when we want to compare the credentials of experts in widely differing fields. But it is useful if we want to know whether someone is trusted in their own field. Now suppose that we have enough information about a field to decide that good work in that field generally deserves some of our trust (even if the field's practices fall short of the ideal). By tracking internal credibility, we have picked out useful sources of information.
Note too that this method could be useful if we think a field is epistemically rotten. If someone is especially trusted by literary theorists, we might want to downgrade our trust in them, solely on that basis.
So the two inquiries complement each other: We want to be able to grade different institutions and fields on the basis of overall trustworthiness, and then pick out particularly good experts from within those fields we trust in general.
p.s. Peer review and citation counting are probably incestuous, but I don't think the charge makes sense in the expert witness evaluation context.
True. But it is still easier in many cases to pick good experts than to independently assess the validity of expert conclusions. So we might make more overall epistemic advances by a twin focus: (1) Disseminate the techniques for selecting reliable experts, and (2) Design, implement and operate institutions that are better at finding the truth.
Note too that your concern can be addressed as a subset of institutional design questions: How should we reform fields such as medicine or economics so that influence will better track true expertise?
Experts don't just tell us facts; they also offer recommendations as to how to solve individual or social problems. We can often rely on the recommendations even if we don't understand the underlying analysis, so long as we have picked good experts to rely on.
One can think that individuals can profit from being more rational, while also thinking that improving our social epistemic systems or participating in them actively will do more to increase our welfare than focusing on increasing individual rationality.
Care to explain the basis for your skepticism?
Interestingly, there may be a way to test this question, at least partially. Most legal systems have procedures in place to allow judgments to be revisited upon the discovery of new evidence that was not previously available. There are many procedural complications in making cross-national comparisons, but it would be interesting to compare the rate at which such motions are granted in more adversarially driven systems versus more inquisitorial systems (in which a neutral magistrate has more control over the collection of evidence).
Obviously it helps if the experts are required to make predictions that are scoreable. Over time, we could examine both the track records of individual experts and entire disciplines in correctly predicting outcomes. Ideally, we would want to test these predictions against those made by non-experts, to see how much value the expertise is actually adding.
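To make the scoring idea a bit more concrete, here is a minimal Python sketch of how one might compute track records from recorded predictions, using the Brier score as the accuracy measure. Everything in it is hypothetical (the record format, the names, the sample data); it simply assumes each prediction is stored as a probability that later resolves to a yes/no outcome, which lets you compare experts against a lay baseline on the same footing.

```python
# A minimal sketch (hypothetical names and data) of scoring expert track records
# with the Brier score: the mean squared gap between stated probabilities and
# actual outcomes, where lower is better.

from dataclasses import dataclass
from statistics import mean


@dataclass
class Prediction:
    forecaster: str      # who made the forecast (expert or lay baseline)
    probability: float   # probability assigned to the event occurring
    occurred: bool       # how the event actually resolved


def brier_score(predictions: list[Prediction]) -> float:
    """Average squared error of the stated probabilities (0 is perfect)."""
    return mean((p.probability - float(p.occurred)) ** 2 for p in predictions)


def scores_by_forecaster(predictions: list[Prediction]) -> dict[str, float]:
    """Group predictions by forecaster and score each track record."""
    grouped: dict[str, list[Prediction]] = {}
    for p in predictions:
        grouped.setdefault(p.forecaster, []).append(p)
    return {name: brier_score(ps) for name, ps in grouped.items()}


if __name__ == "__main__":
    records = [
        Prediction("expert_a", 0.9, True),
        Prediction("expert_a", 0.7, False),
        Prediction("lay_baseline", 0.5, True),
        Prediction("lay_baseline", 0.5, False),
    ]
    for name, score in scores_by_forecaster(records).items():
        print(f"{name}: Brier score {score:.3f}")
```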
Another proposal, which I raised on a previous comment thread, is to collect third-party credibility assessments in centralized databases. We could collect the rates at which expert witnesses are permitted to testify at trial and the rate at which their conclusions are accepted or rejected by courts, for instance. We could similarly track the frequency with which authors have their articles accepted or rejected by journals engaged in blind peer-review (although if the review is less than truly blind, the data might be a better indication of status than of expertise, to the degree the two are not correlated). Finally, citation counts could serve as a weak proxy for trustworthiness, to the degree the citations are from recognized experts and indicate approval.
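Purely as an illustration of the database idea, here is a sketch of the kind of aggregate record such a system might hold for a single expert witness. The field names and sample numbers are my own assumptions, not data; the point is just that a few simple counts yield comparable rates.

```python
# Illustrative sketch only: the sort of aggregate record a centralized
# credibility database might keep for one expert witness. Field names and
# sample numbers are assumptions, not real data.

from dataclasses import dataclass


@dataclass
class ExpertWitnessRecord:
    name: str
    times_offered: int          # times a party offered the witness as an expert
    times_qualified: int        # times a court qualified the witness to testify
    conclusions_accepted: int   # times a court adopted the witness's conclusions
    conclusions_rejected: int   # times a court rejected them

    def qualification_rate(self) -> float:
        return self.times_qualified / self.times_offered if self.times_offered else 0.0

    def acceptance_rate(self) -> float:
        decided = self.conclusions_accepted + self.conclusions_rejected
        return self.conclusions_accepted / decided if decided else 0.0


record = ExpertWitnessRecord("Dr. Example", times_offered=20, times_qualified=18,
                             conclusions_accepted=11, conclusions_rejected=4)
print(f"qualified {record.qualification_rate():.0%} of the time; "
      f"conclusions accepted {record.acceptance_rate():.0%} of the time")
```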
Another good example is the legal system. Individually it serves many participants poorly on a truth-seeking level; it encourages them to commit strongly to an initial position and make only those arguments that advance their cases, while doing everything they can to conceal their cases' flaws short of explicit misrepresentation. They are rewarded for winning, whether or not their position is correct. On the other hand, this set-up (combined with modern liberalized disclosure rules) works fairly well as a way of aggregating all the relevant evidence and arguments before a decisionmaker. And that decisionmaker is subject to strong social pressures not to seek to affiliate with the biased parties. Finally, in many instances the decisionmaker must provide specific reasons for rejecting the parties' evidence and arguments, and make this reasoning available for public scrutiny.
The system, in short, works by encouraging individual bias in service of greater systemic rationality.
Words can become less useful when they attach to too much as well as too little. A perfectly drawn map that indicates only the position and exact shape of North America will often be less useful than a less-accurate map that gives the approximate location of its major roads and cities. Similarly, a very clearly drawn map that does not correspond to the territory it describes is useless. So defining terms clearly is only one part of the battle in crafting good arguments; you also need terms that map well onto the actual territory and that do so at a useful level of generality.
The problem with the term "rationality" isn't that no one knows what it means; there seems to be wide agreement on a number of tokens of rational behavior and a number of tokens of irrational behavior. Rather, the problem is that the term is so unspecific and so emotionally loaded that it obstructs rather than furthers discussion.
Imagining that someone "had a reason to seriously present" the Obama-Mammoth hypothesis is to make the hypothesis non-absurd. If there is real evidence in favor of the hypothesis, then it is obviously worth considering. But that is just to fight the example; it doesn't tell us much about the actual line between absurd claims and claims that are worth considering.
In the world we actually inhabit, an individual who believed that they had good reasons to think that the president was an extinct quadruped would obviously be suffering from a thought disorder. It might be interesting to listen to such a person talk (or to hear a joke that begins with the O-M Hypo), but that doesn't mean that the claim is worth considering seriously.
Christianity is false, but it is harder to falsify than it is to show that Barack Obama is not a non-sapient extinct mammal. I can prove the second claim false to a five-year-old of average intelligence by showing a picture of Obama and an artist's rendition of a mammoth. It would take some time to explain to the same five-year-old child why Christianity does not make sense as a description of the world.
This difference—that while both claims are false, one claim is much more obviously false than the other—explains why Christianity has many adherents but the Obama-Mammoth hypothesis does not. And we can usually infer from the fact that many people believe a proposition that it is not transparently false, making it more reasonable to investigate a bit before rejecting it.
A good reason to take this suggestion to heart: The terms "rationality" and "rational" have a strong positive value for most participants here—stronger, I think, than the value we attach to words like "truth-seeking" or "winning." This distorts discussion and argument; we push too hard to assert that things we like or advocate are "rational," in part because it feels good to associate our ideas with the pretty word.
If you particularize the conversation—i.e., you are likely to get more money by one-boxing on Newcomb's problem, or you are likely to hold more accurate beliefs if you update your probability estimates based solely on the disagreement of informed others—then it is less likely that you will grow overattached to particular procedures of analysis that you have previously given an attractive label.
Not necessarily. The vast majority of propositions are false. Most of them are obviously false; we don't need to spend much mental energy to reject the hypothesis that "Barack Obama is a woolly mammoth," or "the moon is made of butternut squash." "Absurd" is a useful label for statements that we can reject with minimal mental effort. And it makes sense that we refuse to consider most such statements; our mental time and energy is very limited, and if we want to live productive lives, we have to focus on things that have some decent probability of being true.
The problem is not denominating certain things as absurd, it is rejecting claims that we have reason to take more seriously. Both evolution and Christianity are believed by large enough communities that we should not reject either claim as "absurd." Rather, when many people believe something, we should attend to the best arguments in favor of those beliefs before we decide whether we disagree.
The fact that you do not value something does not serve very well as an argument for why others should stop valuing it. For those of us who do experience a conflict between a desire to deter and a desire to punish fairly, you have not explained why we should prioritize the first goal over the second when trying to reduce this conflict.
We have at least two goals when we punish: to prevent the commission of antisocial acts (by deterrence or incapacitation) and to express our anger at the breach of social norms. On what basis should we decide that the first type of goal takes priority over the second type, when the two conflict? You seem to assume that we are somehow mistaken when we punish more or less than deterrence requires; perhaps the better conclusion is that our desire to punish is more driven by retributive goals than it is by utilitarian ones, as Sunstein et al. suggest.
In other words, if two of our terminal values are conflicting, it is hard to see a principled basis for choosing which one to modify in order to reduce the conflict.
On this we agree. If we have 60% confidence that a statement is correct, we would be misleading others if we asserted that it was true in a way that signalled a much higher confidence. Our own beliefs are evidence for others, and we should be careful not to communicate false evidence.
Stripped down to essentials, Eliezer is asking you to assert that God exists with more confidence than it sounds like you have. You are not willing to say it without weasel words because to do so would be to express more certainty than you actually have. Is that right?
There seems to be some confusion here concerning authority. I have the authority to say "I like the color green." It would not make sense for me to say "I believe I like the color green" because I have first-hand knowledge concerning my own likes and dislikes and I'm sufficiently confident in my own mental capacities to determine whether or not I'm deceiving myself concerning so simple a matter as my favorite color.
I do not have the authority to say, "Jane likes the color green." I may know Jane quite well, and the probability of my statement being accurate may be quite high, but my saying it is so does not make it so.
You do not cause yourself to like the color green merely by saying that you do. You are describing yourself, but the act of description does not make the description correct. You could speak falsely, but doing so would not change your preferences as to color.
There are some sentence-types that correspond to your concept of "authority." If I accept your offer to create a contract by saying, "we have a contract," I have in fact made it so by speaking. Likewise, "I now pronounce you man and wife." See J.L. Austin's "How to Do Things With Words" for more examples of this. The philosophy of language term for this is a "performative utterance," because by speaking you are in fact performing an act, rather than merely describing the world.
But our speech conventions do not require us to speak performatively in order to make flat assertions. If it is raining outside, I can say, "it is raining," even though my saying so doesn't make it so. I think the mistake you are making is in assuming that we cannot assert that something is true unless we are 100% confident in our assertion.
Sure, it is useful to ask for clarification when we don't understand what someone is saying. But we don't need to settle on one "correct" meaning of the term in order to accomplish this. We can just recognize that the word is used to refer to a combination of characteristics that cognitive activity might possess. I.e. "rationality" usually refers to thinking that is correct, clear, justified by available evidence, free of logical errors, non-circular, and goal-promoting. Sometimes this general sense may not be specific enough, particularly where different aspects of rationality conflict with each other. But then we should use other words, not seek to make rationality into a different concept.
It depends how much relative value you assign to the following things:
- Increasing your well-being and life satisfaction.
- Your reputation (drug users have low status, mostly).
- Not having unpleasant contacts with the criminal justice system.
- Viewing the world through your current set of perceptive and affective filters, rather than through a slightly different set of filters.
Because we can have preferences over our preferences. For instance, I would prefer it if I preferred to eat healthier foods because that preference would clash less with my desire to stay fit and maintain my health. There is nothing irrational about wishing for more consistent (and thus more achievable) preferences.
Arguing over definitions is pointless, and somewhat dangerous. If we define the word "rational" in some sort of site-specific way, we risk confusing outsiders who come here and who haven't read the prior threads.
Use the word "rational" or "rationality" whenever the difference between its possible senses does not matter. When the difference matters, just use more specific terminology.
General rule: When terms are confusing, it is better to use different terms than to have fights over meanings. Indeed, your impulse to fight for the word-you-want should be deeply suspect; wanting to affiliate our ideas with pleasant-sounding words is very similar to our desire to affiliate with high-status others; it makes us (or our ideas) appealing for reasons that are unrelated to the correctness or usefulness of what we are saying.
I think the idea of a nested dialogue is a great one. You could also incorporate reader voting, so that weak arguments get voted off of the dialogue while stronger ones remain, thus winnowing down the argument to its essence over time.
I wonder if our hosts, or any contributors, would be interested in trying out such a procedure as a way of exploring a future disagreement?
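If anyone did want to prototype it, the core winnowing step is simple enough to sketch. The structure and the vote threshold below are purely illustrative assumptions of mine: arguments whose net votes fall below the cutoff are pruned along with their replies, so the surviving tree is the distilled dialogue.

```python
# Toy sketch of a nested dialogue with reader voting. The structure and the
# threshold are illustrative assumptions: branches whose net votes fall below
# the cutoff are pruned, replies and all.

from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Argument:
    text: str
    votes: int = 0
    replies: list[Argument] = field(default_factory=list)


def winnow(node: Argument, threshold: int) -> Argument | None:
    """Drop any argument (and its subtree) whose net votes fall below threshold."""
    if node.votes < threshold:
        return None
    node.replies = [kept for kept in (winnow(r, threshold) for r in node.replies) if kept]
    return node


root = Argument("Main claim", votes=12, replies=[
    Argument("Strong objection", votes=8,
             replies=[Argument("Weak rebuttal", votes=-2)]),
    Argument("Weak objection", votes=-5),
])
winnow(root, threshold=0)
print([reply.text for reply in root.replies])  # only the strong objection survives
```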
Useful practice: Systematize credibility assessments. Find ways to track the sincerity and accuracy of what people have said in the past, and make such information widely available. (An example from the legal domain would be a database of expert witnesses, which includes the number of times courts have qualified them as experts on a particular subject, and the number of times courts adopted or rejected their conclusions.) To the extent such info is widely available, it both helps to "sterilize" the information coming from untrustworthy sources and to promote the contributions that are most likely to be helpful. It also helps improve the incentive structure of truth-seeking discussions.
Sorry -- I meant, but did not make clear, that the word "rationality" should be avoided only when the conversation involves the clash between "winning" and "truth seeking." Otherwise, things tend to bog down in arguments about the map, when we should be talking about the territory.
Eliezer said: This, in turn, ends up implying epistemic rationality: if the definition of "winning" doesn't require believing false things, then you can generally expect to do better (on average) by believing true things than false things - certainly in real life, despite various elaborate philosophical thought experiments designed from omniscient truth-believing third-person standpoints.
--
I think this is overstated. Why should we only care what works "generally," rather than what works well in specific subdomains? If rationality means whatever helps you win, then overconfidence will often be rational. (Examples: placebo effect, dating, job interviews, etc.) I think you need to either decide that your definition of rationality does not always require a preference for true beliefs, or else revise the definition.
It also might be worthwhile, for the sake of clarity, to just avoid the word "rationality" altogether in future conversations. It seems to be at risk of becoming an essentially contested concept, particularly because everyone wants to be able to claim that their own preferred cognitive procedures are "rational." Why not just talk about whether a particular cognitive ritual is "goal-optimizing" when we want to talk about Eliezer-rationality, while saving the term "truth-optimizing" (or some variant) for epistemic-rationality?
Pwno said: I find it hard to imagine a time where truth-seeking is incompatible with acting rationally (the way I defined it). Can anyone think of an example?
The classic example would invoke the placebo effect. Believing that medical care is likely to be successful can actually make it more successful; believing that it is likely to fail might vitiate the placebo effect. So, if you are taking a treatment with the goal of getting better, and that treatment is not very good (but it is the best available option), then it is better from a rationalist goal-seeking perspective to have an incorrectly high assessment of the treatment's possibility of success.
This generalizes more broadly to other areas of life where confidence is key. When dating, or going to a job interview, confidence can sometimes make the difference between success and failure. So it can pay, in such scenarios, to be wrong (so long as you are wrong in the right way).
It turns out that we are, in fact, generally optimized to make precisely this mistake. Far more people think they are above average in most domains than hold the opposite view. Likewise, people regularly place a high degree of trust in treatments with a very low probability of success, and we have many social mechanisms that try and encourage such behavior. It might be "irrational" under your usage to try and help these people form more accurate beliefs.