Comment by johnd on What Is Signaling, Really? · 2012-07-10T13:20:49.976Z · score: 0 (0 votes) · LW · GW

On a slightly more constructive note, the game theoretic analysis of signals has also been used to analyse and suggest improvements to the use of forensic evidence in law. Roger Koppl's two articles "Epistemic Systems" and "Epistemics for Forensics" go into this in quite some detail, with the former laying out the mathematical framework and the latter providing an experimental test of some of the hypotheses drawn from that framework.

I use this a little bit in my article on blinding and expert evidence, available here:

Comment by johnd on What Is Signaling, Really? · 2012-07-10T11:25:28.593Z · score: 3 (3 votes) · LW · GW

I'm curious, was the Art History comment a dig at Michael Lewis?

Comment by johnd on John Danaher on 'The Superintelligent Will' · 2012-04-04T10:27:31.967Z · score: 1 (1 votes) · LW · GW

I think that's an interesting point. I suppose I was thinking that nihilism, at least in the way it's typically discussed, holds not that doing nothing is rational but, rather, that no goals are rational (a subtle difference, perhaps). This, in my opinion, might equate with all goals being equally possible. But, as you point out, if all goals are equally possible the agent might default to doing nothing.

One might put it like this: the agent would be landed in the equivalent of a Buridan's Ass dilemma. As far as I recall, the possibility that a CPU would be landed in such a dilemma was a genuine problem in the early days of computer science. I believe there was some protocol introduced to sidestep the problem.

Comment by johnd on John Danaher on 'The Superintelligent Will' · 2012-04-04T10:20:49.009Z · score: 3 (3 votes) · LW · GW

Well, I suppose I had in mind the fact that any cognitivist metaethics holds that moral propositions have truth values, i.e. are capable of being true or false. And if cognitivism is correct, then it would be possible for one's moral beliefs to be more or less accurate (i.e. to be more or less representative of the actual truth values of sets of moral propositions).

While moral cognitivism is most at home with moral realism - the view that moral facts are observer-independent - it is also compatible with some versions of anti-realism, such as the constructivist views I occasionally endorse.

The majority of moral philosophers (a biased sample) are cognitivists, as are most non-moral philosophers that I speak to (pure anecdotal evidence). If one is not a moral cognitivist, then the discussion on my blog post will of course be unpersuasive. But in that case, one might incline towards moral nihilism, which could, as I pointed out, provide some support for the orthogonality thesis.

Comment by johnd on Journal of Consciousness Studies issue on the Singularity · 2012-03-03T15:24:02.150Z · score: 6 (6 votes) · LW · GW

I can't speak to this particular article, but oftentimes special editions of journals, like this one (i.e. effectively a symposium on the work of another), are not subjected to rigorous peer review. The responses are often solicited by the editors and there is minimal correction or critique of the content of the papers, certainly nothing like you'd normally get for an unsolicited article in a top philosophy journal.

But, to reiterate, I can't say whether or not the Journal of Consciousness Studies did that in this instance.

Comment by johnd on New book from leading neuroscientist in support of cryonics and mind uploading · 2012-02-13T19:37:25.585Z · score: 1 (1 votes) · LW · GW

There's an interview with him on the Think Atheist Podcast this week. Here:

Comment by johnd on Another Real World Example of Cognitive Bias · 2012-02-01T15:31:20.716Z · score: 0 (0 votes) · LW · GW

I wrote something about the possibility of blind expertise (potentially used to overcome such biases) a while back.

You can find a preprint version here:

Comment by johnd on "Ray Kurzweil and Uploading: Just Say No!", Nick Agar · 2011-12-03T00:21:57.848Z · score: 3 (3 votes) · LW · GW

First of all, thanks for sharing my blog posts. Second, and perhaps unsurprisingly, I disagree with Hauskeller's interpretation of Agar's argument as being "curiously techno-optimistic" because of its appeal to LEV. Agar isn't particularly optimistic about LEV's chances of success (as is shown by his comments in subsequent chapters of his book). He just thinks LEV is more likely than the combination of Strong AI and apparently successful mind-uploading.

Comment by johnd on Conceptual Analysis and Moral Theory · 2011-05-16T16:37:00.477Z · score: 0 (0 votes) · LW · GW

How could you endorse the first part without endorsing the second part? Doesn't the first part already include the second part?

After all, it says "within the range of hearing and of a level sufficiently strong to be heard". What could that mean if not "sufficient to generate the sensation stimulated in organs of hearing by such vibrations"?

Comment by johnd on BOOK DRAFT: 'Ethics and Superintelligence' (part 1) · 2011-02-13T11:26:13.536Z · score: 3 (3 votes) · LW · GW

There's not much to critically engage with yet, but...

I find it odd that you claim to have "laid [your] positions on the table" in the first half of this piece. As far as I can make out, the first half only describes a set of problems and possibilities arising from the "intelligence explosion". It doesn't say anything about your response or proposed solution to those problems.

Comment by johnd on Statistical Prediction Rules Out-Perform Expert Human Judgments · 2011-01-18T16:19:50.082Z · score: 1 (1 votes) · LW · GW

Ah yes, that seems to work. Thanks.

Comment by johnd on Statistical Prediction Rules Out-Perform Expert Human Judgments · 2011-01-18T09:40:52.287Z · score: 3 (5 votes) · LW · GW

Hi Luke,

Great post. I'll be writing something about the legal uses of SPRs in the near future.

Anyway, the link to the Grove and Meehl study doesn't seem to work for me. It says the file is damaged and cannot be repaired.