More generally, the words for the non-metric units are often much more convenient than the words for the metric ones. I think this effect is much stronger than any difference in convenience of the actual sizes of the units.
I think it's the main reason why many of the non-metric units are still more popular for everyday use than the metric ones in the UK, even though we've all learned metric at school for the last forty years or so.
It's slightly disconcerting to imagine some of the writing coming from the pen of an Anglican deacon.
The useful advice is in the first 5000 words of the essay, most importantly in the examples of bad writing. The 100 words or so of 'rules' are just a summary at the end.
This kind of teaching is common in other subjects. For example, in a Go textbook it's not rare to see a chapter containing a number of examples and a purported 'rule' to cover them, where the rule as stated is broken all the time in professional play. It would be a mistake to conclude that the author isn't a strong player, or that the chapter doesn't contain helpful advice. The 'rule' is just a way to describe a group of related examples.
I think it's better to think of the 'rules' in Orwell's essay more like mnemonics for what he's said earlier, rather than instructions to be followed on their own.
I don't find it off-putting, but it does make me feel I'm reading Lewis Carroll.
Priors don't come into it. The expert was presenting likelihood ratios directly (though in an obscure form of words).
That isn't what was going on in this case. The expert wasn't presenting statistics to the jury (apparently that's already forbidden).
The good news from this case (well, it's news to me) is that the UK forensic science service both understands the statistics and has sensible written procedures for using them, which some of the examiners follow. But they then have to turn the likelihood ratio into a rather unhelpful form of words like 'moderately strong scientific support' (not to be confused with 'moderate scientific support', which is weaker), because bringing the likelihood ratios into court is forbidden.
(Bayes' Theorem itself doesn't really come into this case.)
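To make the 'form of words' point concrete: the likelihood ratio here is just P(evidence | prosecution's hypothesis) divided by P(evidence | defence's hypothesis), and the verbal scale is a lookup table over that number. A rough sketch of the idea (the band boundaries and the frequencies below are invented for illustration, not the ones the Forensic Science Service actually uses):

```python
# Sketch: turning a likelihood ratio into a verbal scale of the kind
# described above. All figures below are invented for illustration.
def verbal_support(likelihood_ratio):
    bands = [
        (10, "limited scientific support"),
        (100, "moderate scientific support"),
        (1_000, "moderately strong scientific support"),
        (10_000, "strong scientific support"),
    ]
    for upper_bound, phrase in bands:
        if likelihood_ratio < upper_bound:
            return phrase
    return "very strong scientific support"

# e.g. multiplying hypothetical frequencies for a shoe mark's sole
# pattern, size and degree of wear:
lr = (1 / 0.2) * (1 / 0.25) * (1 / 0.1)   # = 200
print(verbal_support(lr))                  # "moderately strong scientific support"
```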
This isn't quite "a judge has ruled that [Bayes' theorem] can no longer be used", but I don't think it's good.
The judges decided that using a formula to calculate likelihood isn't allowed in cases where the numbers plugged into the formula are themselves uncertain (paragraph 86), and using conservative figures apparently doesn't help.
Paragraph 90 says that it's already established law that Bayes' theorem and likelihood ratios "should not be used", but I think it means "shouldn't be talked about in front of the jury".
Paragraph 91 says explicitly that the court wasn't deciding how (or whether) Bayes' Theorem and likelihood ratios can be used in cases where the numbers plugged into the formula aren't themselves very uncertain.
In paragraph 95, the judges decide that (when matching footprints) it's OK for an expert to stare at the data, come up with a feeling about the strength of the evidence, and express that in words, while it's not OK for the same expert to do a pencil-and-paper calculation and present the result in similar words.
I think part of the point is that when the expert is cross-examined, the jury will react differently if she says "this evidence is strong because I've got lots of experience and it feels strong to me", rather than "this evidence is strong because I looked up all the frequencies and did the appropriate calculation".
I do get the impression that the approach of multiplying likelihood ratios is being treated as a controversial scientific process (as if it were, say, a chemical process that purported to detect blood), and one which is already frowned upon. Eg paras 46, 108 iii).
Thanks for the link.
I think paragraphs 80 to 86 are the key paragraphs.
They're declaring that using a formula isn't allowed in cases where the numbers plugged into the formula are themselves uncertain.
But in this case, where there was uncertainty in the underlying data, the expert tried to take a conservative figure. The judges don't seem to think that helps, but they don't say why. In particular, para 108 iv) seems rather wrongheaded for this reason.
(It looks like one of the main reasons they overturned the original judgement was that the arguments in court ended up leaving the jury hearing less conservative estimates of the underlying figures than the ones the expert used (paras 103 and 108). That seems like a poor advertisement for the practice of keeping explicit calculations away from the jury.)
Well, I'm in the UK, and there's no law against using IQ-style tests for job applicants here. Is that really the case in the US? (I assume the "You're a terrorist" bit was hyperbole.)
Employers here still often ask for apparently-irrelevant degrees. But admission to university here isn't noticeably based on 'generic' tests like the SAT; it's mostly done on the grades from subject-specific exams. So I doubt employers are treating the degrees as a proxy for SAT-style testing.
In your third speculation, I think the first and second category have got swapped round.
I don't think there's much need for heuristics like "rate of effectiveness change times donation must be much smaller - say, a few percent of - effectiveness itself."
If you're really using a Landsburg-style calculation to decide where to donate, you've already estimated the effectiveness of the second-most-effective charity, so you can just say that the effectiveness drop must be no greater than the corresponding difference.
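To make that concrete, here's a minimal sketch of the comparison I mean (all the effectiveness figures are made up):

```python
# Sketch: checking whether the top charity stays the best place for the
# marginal donation. All figures are hypothetical (e.g. lives saved per
# $1,000 donated).
best_effectiveness = 0.50      # your estimate for the most effective charity
second_effectiveness = 0.45    # your estimate for the runner-up
rate_of_change = 0.00002       # drop in effectiveness per $1,000 you donate
donation = 1_000               # size of your donation, in units of $1,000

effectiveness_drop = rate_of_change * donation
# No need for a "must be a few percent of effectiveness itself" heuristic:
# just compare the drop with the gap to the second-most effective charity.
keep_donating_to_best = effectiveness_drop < best_effectiveness - second_effectiveness
print(keep_donating_to_best)   # True for these made-up numbers
```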
If it's a belief you've previously thought of as obvious and left unexamined, then this is probably a useful heuristic. Otherwise, no.
You say 'the world', but it seems to me you're talking about a region which is a little smaller.
Yes, several of these models look like they're likely to run into trouble of the Goodhart's law type ("Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes").
Principal component analysis of UK political views, from a few years back: http://politicalsurvey2005.com/themap.pdf
I think trolley problems suffer from a different type of oversimplification.
Suppose in your system of ethics the correct action in this sort of situation depends on why the various people got tied to the various bits of track, or on why 'you' ended up in the situation where you get to control the direction of the trolley.
In that case, the trolley problem has abstracted away the information you need (and would normally have in the real world) to choose the right action.
(Or if you have a formulation which explicitly mentions the 'mad philosopher' and you take that bit seriously, then the question becomes an odd corner case rather than a simplifying thought experiment.)
I wonder whether the 'right answers' are what the subject of the photograph was actually feeling, what an expert intended the photograph to represent, or what most people respond.
I think it's quite normal that if someone is acknowledged by their peers to be among the very best at what they do, they won't waste much time with status games.
There's an exception if doing what they do requires publicity to bring in sales or votes.
It would become a mind game: you'd have to explicitly model how you think Omega is making the decision.
The problem you're facing is to maximise P(Omega rewards you|all your behaviour that Omega can observe). In the classical problem you can substitute the actual choice of one-boxing or two-boxing for the 'all your behaviour' part, because Omega is always right. But in the 'imperfect Omega' case you can't.
To get to the conclusion that against a 60% Omega you're better off to one-box, I think you have to put in a strong independence assumption: that the probability of Omega getting it wrong is independent of the ways of thinking that the player is using to make her choice.
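Under that independence assumption, the expected values come out like this (a minimal sketch using the standard $1,000,000 / $1,000 payoffs):

```python
# Sketch: expected payoffs against a "60% Omega", assuming Omega's error
# rate is independent of how the player reaches her decision.
# The opaque box holds $1,000,000 iff Omega predicted one-boxing;
# the transparent box always holds $1,000.
p_omega_correct = 0.6

ev_one_box = p_omega_correct * 1_000_000                 # filled only if Omega was right
ev_two_box = 1_000 + (1 - p_omega_correct) * 1_000_000   # filled only if Omega was wrong

print(ev_one_box, ev_two_box)   # 600000.0 vs 401000.0, so one-boxing wins
```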
I think that's really the original problem in disguise (it's a 100% Omega who rolls dice and sometimes decides to reward two-boxing instead of one-boxing). The analysis if all you know is that Omega is right 60% of the time would look different.
So Nabarro explicitly says that he's talking about a possibility and not making a prediction, and ABC News reports it as a prediction. This seems consistent with the media-manufactured scare model.
A simple fix would be to not bother publishing a top contributors list.
It seems to me that the arguments so lucidly presented elsewhere on Less Wrong would say that the machine is conscious whether or not it is run, and indeed whether or not it is built in the first place: if the Turing machine outputs a philosophical paper on the question of consciousness of the same kind that human philosophers write, we're supposed to take it as conscious.
For me, Go helped to highlight certain temptations to behave irrationally, which I think can carry over to real life.
One was the temptation to avoid thinking about parts of the board where I'd recently made a mistake.
And if I played a poor move and my opponent immediately refuted it, there was a temptation to try to avoid seeming foolish by dreaming up some unlikely scheme I might have had which would have made the exchange part of the plan.
Fair enough. I should have said "there are ideas which are useful heuristics in Go, but not in real life", rather than talking about "sound reasoning".
The "I'm committed now" one can be a genuinely useful heuristic in Go (though it's better if you're using it in the form "if I do this I will be committed", rather than "oh dear, I've just noticed I'm committed"). "Spent so much effort" is in the sense of "given away so much", rather than "taken so many moves trying".
It also teaches "if you're behind, try to rock the boat", which probably isn't great life advice.
You can think of "don't play aji-keshi" as saying "leave actions which will close down your future options as late as possible", which I think can be a useful lesson for real life (though of course the tricky part is working out how late 'as possible' is).
The first is certainly valid reasoning in Go, and I phrased it in a way that should make that obvious. But you can also phrase it as "I've spent so much effort trying to reach goal X that I'm committed now", which is almost never sound in real life.
For the second, I'm not thinking so much of tewari as a fairly common kind of comment in professional game commentaries. I think there's an implicit "and I surely haven't made a mistake as disastrous as a two point loss" in there.
It's probably still not sound reasoning, but for most players the best strategy for finding good moves relies more on 'feel' and a bag of heuristics than on reasoning. I'm not sure I'd count that as a way that Go differs from real life, though.
Seven stones is a large handicap. Perhaps they're better than the average club player in English-speaking countries, but I think the average Korean club player is stronger than Zen.
On the other hand, there are some ways of thinking which are useful for Go but not for real life. One example is that damaging my opponent is as good as working for myself.
Another example is that, between equal players, the sunk costs fallacy is sometimes sound reasoning in Go. One form is "if I don't achieve goal X I've lost the game anyway, so I might as well continue trying even though it's looking unlikely". Another form (for stronger players than me) is "if I play A, I will get a result that's two points worse than I could have had if I had played B earlier, so I can rule A out."
Do you have a reference for the 'discover that the previous version was incomplete' part?
I'm not sure that's a safe assumption (and if they were told, the article really should say so!)
If you did the experiment that way, you wouldn't know whether or not the effect was just due to deficient arithmetic.
One group was told that under race-neutral conditions, the probability of a black student being admitted would decline from 42 percent to 13 percent and the probability of a white student being admitted would rise from 25 percent to 27 percent. The other group was told that under race-neutral admissions, the number of black students being admitted would decrease by 725 and the number of white students would increase by 725. These two framings were both saying the same thing, but you can probably guess the outcome: support for affirmative action was much higher in the percentage group.
These two framings aren't saying the same thing at all. The proposed policy might be the same in both cases, but the information available to the two groups about its effects is different.
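To see why, try to reconcile the two framings: you can't do it without knowing the sizes of the applicant pools, which neither group was given. A toy calculation (the pool sizes are invented purely for illustration):

```python
# Sketch: the percentage framing and the count framing only describe the
# same facts for particular applicant-pool sizes. These pool sizes are
# made up so that the numbers come out at 725.
black_applicants = 2_500     # hypothetical
white_applicants = 36_250    # hypothetical

fewer_black_admits = (0.42 - 0.13) * black_applicants   # = 725
extra_white_admits = (0.27 - 0.25) * white_applicants   # = 725

print(fewer_black_admits, extra_white_admits)
```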
Further, it's plausible that if you had a 'budget' of N prison places and M police officers for drink-driving deterrence, the most effective way to deploy it would be to arrange for a highish probability of an offender getting a short prison sentence, plus a low probability of getting a long sentence (because we know that a high probability of being caught has a large deterrent effect, and also that people overestimate the significance of a small chance of 'winning the lottery').
So the 'high sentence only if you kill' policy might turn out to be an efficient one (I don't suppose the people who set sentencing policy are really thinking along these lines, though).
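Here's a toy illustration of why the mixed policy might come out ahead, using a probability-weighting function of the Tversky-Kahneman kind. Every number is invented; it's only meant to show the shape of the argument:

```python
# Sketch: two policies with the same expected prison-years per offence,
# scored with a toy probability-weighting function under which people
# overweight small probabilities. All figures are invented.
def weight(p, gamma=0.61):
    # Tversky-Kahneman-style weighting; gamma is illustrative only.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Each policy is a list of (probability of sentence, sentence length in years).
policy_uniform = [(0.10, 1.0)]                # 10% chance of a 1-year sentence
policy_mixed = [(0.09, 1.0), (0.001, 10.0)]   # mostly short sentences, rare long one

def expected_years(policy):
    return sum(p * years for p, years in policy)

def felt_deterrence(policy):
    return sum(weight(p) * years for p, years in policy)

print(expected_years(policy_uniform), expected_years(policy_mixed))    # 0.1 and 0.1
print(felt_deterrence(policy_uniform), felt_deterrence(policy_mixed))  # mixed feels worse
```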
The article is saying that you can't affect your sentence by showing skill at drunk driving, other than by using the (very indirect) evidence provided by showing that nobody died as a result.
I think it's a sound point, given that the question is about identical behaviour giving different sentences.
If you're told that two people have once driven over the limit, that A killed someone while B didn't, and nothing more, what's your level of credence that B is the more skilled drunk driver?
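A toy Bayes calculation with made-up fatality rates suggests B's clean record barely moves the needle on B's own skill; almost all of the update comes from A's crash:

```python
# Sketch: toy Bayes calculation for the question above, with a binary
# skilled/unskilled model. The fatality rates and the 50/50 prior are
# invented for illustration.
p_kill = {"skilled": 0.001, "unskilled": 0.003}
prior_skilled = 0.5

def p_skilled_given(killed):
    """P(driver is skilled | whether their one over-the-limit drive killed anyone)."""
    like_skilled = p_kill["skilled"] if killed else 1 - p_kill["skilled"]
    like_unskilled = p_kill["unskilled"] if killed else 1 - p_kill["unskilled"]
    return (like_skilled * prior_skilled /
            (like_skilled * prior_skilled + like_unskilled * (1 - prior_skilled)))

p_a_skilled = p_skilled_given(killed=True)    # A killed someone: 0.25
p_b_skilled = p_skilled_given(killed=False)   # B didn't: about 0.5005

# Credence that B is the more skilled of the two (B skilled, A not),
# treating the two drivers as independent:
print(p_b_skilled * (1 - p_a_skilled))        # about 0.375
# Nearly all of the update comes from A's crash, not from B's clean record.
```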
I think Hofstadter could fairly be described as an AI theorist.
When writing on the internet, it is best to describe children's ages using years, not their position in your local education system.