Comments

Comment by MarkHHerman on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-18T02:56:58.865Z · LW · GW

Do you think a cog psych research program on “moral biases” might be helpful (e.g., regarding existential risk reduction)?

[The conceptual framework I am working on (philosophy dissertation) targets a prevention-amenable form of “moral error” that requires (a) the perpetrating agent’s acceptance of the assessment of moral erroneousness (i.e., individual relativism, to avoid categoricity problems), and (b) that the agent, for moral reasons, would not have committed the error had he been aware of the erroneousness (i.e., sufficiently motivating, as opposed to moral indifference, laziness, and/or akrasia).]

Comment by MarkHHerman on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-18T01:04:20.380Z · LW · GW

What is the practical value (e.g., predicted impact) of the Less Wrong website (and similar public communication regarding rationality) with respect to FAI and/or existential risk outcomes?

(E.g., Is there an outreach objective? If so, for what purpose?)

Comment by MarkHHerman on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-15T23:31:27.462Z · LW · GW

To what extent is the success of your FAI project dependent upon the reliability of the dominant paradigm in Evolutionary Psychology (à la Tooby & Cosmides)?

Old, perhaps off-the-cuff, and perhaps outdated quote (9/4/02): “well, the AI theory assumes evolutionary psychology and the FAI theory definitely assumes evolutionary psychology” (http://www.imminst.org/forum/lofiversion/index.php/t144.html).

Thanks for all your hard work.

Comment by MarkHHerman on With whom shall I diavlog? · 2009-06-03T16:54:22.468Z · LW · GW

Someone with whom establishing a connection might make the difference in getting them to appear at a future Singularity Summit. Also, someone with whom an association would enhance your credibility.