Comments

Comment by JohnD on What Is Signaling, Really? · 2012-07-10T13:20:49.976Z · LW · GW

On a slightly more constructive note, the game-theoretic analysis of signals has also been used to analyse and suggest improvements to the use of forensic evidence in law. Roger Koppl's two articles "Epistemic Systems" and "Epistemics for Forensics" go into this in quite some detail, with the former laying out the mathematical framework and the latter providing an experimental test of some of the hypotheses drawn from that framework.

I use this a little bit in my article on blinding and expert evidence, available here:

http://keele.academia.edu/JohnDanaher/Papers/1193589/_Blind_Expertise_and_the_Problem_of_Scientific_Evidence_

Comment by JohnD on What Is Signaling, Really? · 2012-07-10T11:25:28.593Z · LW · GW

I'm curious: was the Art History comment a dig at Michael Lewis?

Comment by JohnD on John Danaher on 'The Superintelligent Will' · 2012-04-04T10:27:31.967Z · LW · GW

I think that's an interesting point. I suppose I was thinking that nihilism, at least in the way it's typically discussed, holds not that doing nothing is rational but, rather, that no goals are rational (a subtle difference, perhaps). This, in my opinion, might equate with all goals being equally possible. But, as you point out, if all goals are equally possible, the agent might default to doing nothing.

One might put it like this: the agent would be landed in the equivalent of a Buridan's Ass dilemma. As far as I recall, the possibility that a CPU could end up in such a dilemma was a genuine problem in the early days of computer science. I believe some protocol was introduced to sidestep the problem.
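Purely by way of illustration (this is my own toy sketch, not anything from that early literature, and the names like choose_goal are made up): the essence of such a tie-breaking protocol is just to break exact ties by an arbitrary rule rather than refusing to choose at all.

```python
import random

def choose_goal(goals, utility, rng=None):
    """Pick a goal even when several are tied on utility.

    The tie-breaking idea in miniature: rank the options, and if
    several are exactly tied, settle the tie by an arbitrary rule
    (here, a random draw) instead of defaulting to doing nothing.
    """
    rng = rng or random.Random()
    best = max(utility(g) for g in goals)
    tied = [g for g in goals if utility(g) == best]
    return tied[0] if len(tied) == 1 else rng.choice(tied)

# Two equally attractive bales of hay -- the ass still eats.
goals = ["left bale", "right bale"]
print(choose_goal(goals, utility=lambda g: 1.0))
```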

Comment by JohnD on John Danaher on 'The Superintelligent Will' · 2012-04-04T10:20:49.009Z · LW · GW

Well, I suppose I had in mind the fact that any cognitivist metaethics holds that moral propositions have truth values, i.e. are capable of being true or false. And if cognitivism is correct, then it would be possible for one's moral beliefs to be more or less accurate (i.e. to be more or less representative of the actual truth values of sets of moral propositions).

While moral cognitivism is most at home with moral realism - the view that moral facts are observer-independent - it is also compatible with some versions of anti-realism, such as the constructivist views I occasionally endorse.

The majority of moral philosophers (a biased sample) are cognitivists, as are most of the philosophers outside ethics that I speak to (purely anecdotal evidence). If one is not a moral cognitivist, then the discussion on my blog post will of course be unpersuasive. But in that case, one might incline towards moral nihilism, which could, as I pointed out, provide some support for the orthogonality thesis.

Comment by JohnD on Journal of Consciousness Studies issue on the Singularity · 2012-03-03T15:24:02.150Z · LW · GW

I can't speak to this particular article, but special issues of journals like this one (i.e. effectively a symposium on another author's work) are often not subjected to rigorous peer review. The responses are usually solicited by the editors and there is minimal correction or critique of the content of the papers, certainly nothing like what you'd normally get for an unsolicited article in a top philosophy journal.

But, to reiterate, I can't say whether or not the Journal of Consciousness Studies did that in this instance.

Comment by JohnD on New book from leading neuroscientist in support of cryonics and mind uploading · 2012-02-13T19:37:25.585Z · LW · GW

There's an interview with him on the Think Atheist Podcast this week. Here:

http://www.blogtalkradio.com/thinkatheist/2012/02/13/episode-45-dr-sebastian-seung-feb-12-2012

Comment by JohnD on Another Real World Example of Cognitive Bias · 2012-02-01T15:31:20.716Z · LW · GW

I wrote something about the possibility of blind expertise (potentially used to overcome such biases) a while back.

You can find a preprint version here:

https://sites.google.com/site/johndanaher84/papers-and-presentations

Comment by JohnD on "Ray Kurzweil and Uploading: Just Say No!", Nick Agar · 2011-12-03T00:21:57.848Z · LW · GW

First of all, thanks for sharing from my blog posts. Second, and perhaps unsurprisingly, I disagree with Hauskeller's interpretation of Agar's argument as being "curiously techno-optimistic" because of its appeal to LEV. Agar isn't particularly optimistic about LEV's chances of success (as is shown by his comments in subsequent chapters of his book). He just thinks LEV is more likely than the combination of Strong AI and apparently successful mind-uploading.

Comment by JohnD on Conceptual Analysis and Moral Theory · 2011-05-16T16:37:00.477Z · LW · GW

How could you endorse the first part without endorsing the second part? Doesn't the first part already include the second part?

After all, it says "within the range of hearing and of a level sufficiently strong to be heard". What could that mean if not "sufficient to generate the sensation stimulated in organs of hearing by such vibrations"?

Comment by JohnD on BOOK DRAFT: 'Ethics and Superintelligence' (part 1) · 2011-02-13T11:26:13.536Z · LW · GW

There's not much to critically engage with yet, but...

I find it odd that you claim to have "laid [your] positions on the table" in the first half of this piece. As far as I can make out, the first half only describes a set of problems and possibilities arising from the "intelligence explosion". It doesn't say anything about your response or proposed solution to those problems.

Comment by JohnD on Statistical Prediction Rules Out-Perform Expert Human Judgments · 2011-01-18T16:19:50.082Z · LW · GW

Ah yes, that seems to work. Thanks.

Comment by JohnD on Statistical Prediction Rules Out-Perform Expert Human Judgments · 2011-01-18T09:40:52.287Z · LW · GW

Hi Luke,

Great post. Will be writing something about the legal uses of SPRs in the near future.

Anyway, the link to the Grove and Meehl study doesn't seem to work for me. It says the file is damaged and cannot be repaired.