Audio from Eliezer's talk at the Oxford Transhumanists

post by Alex Flint (alexflint) · 2011-03-29T21:31:35.562Z

In January we hosted Eliezer at an Oxford Transhumanists meeting. He spoke about why AI is such an incredibly consequential consideration, over and above other technologies. This will not be new material for regular Less Wrong readers. The recording of Eliezer's talk, along with recordings of previous talks, is available at http://groupspaces.com/oxfordtranshumanists/pages/past-talks.

4 comments

Comments sorted by top scores.

comment by lukeprog · 2011-03-30T05:50:08.790Z

At 5:56, Eliezer says the mind projection fallacy is known to philosophy students as "humane projectivism." I've never heard of that... perhaps "Humean projectivism" was intended? Hume's famous quote on this is:

'Tis a common observation, that the mind has a great propensity to spread itself on external objects, and to conjoin with them any internal impressions, which they occasion, and which always make their appearance at the same time that these objects discover themselves to the senses.

comment by Vladimir_Nesov · 2011-03-30T10:54:46.492Z

Humean.

comment by Normal_Anomaly · 2011-04-02T02:04:26.536Z

Aubrey de Grey's talk is good too. Unfortunately, he makes heavy use of a PowerPoint presentation that I can't see.