Audio from Eliezer's talk at the Oxford Transhumanists

post by alexflint · 2011-03-29T21:31:35.562Z · score: 8 (9 votes) · LW · GW · Legacy · 4 comments

In January we hosted Eliezer at an Oxford Transhumanists meeting. He spoke about why AI is such an incredibly consequential consideration, over and above other technologies. This will not be new material for regular Less Wrong readers. The recordings from Eliezer's talk, along with previous talks, are available online.


Comments sorted by top scores.

comment by lukeprog · 2011-03-30T05:50:08.790Z · score: 5 (5 votes) · LW · GW

At 5:56, Eliezer says the mind projection fallacy is known to philosophy students as "humane projectivism." I've never heard of that... perhaps "Humean projectivism" was intended? Hume's famous quote on this is:

'Tis a common observation, that the mind has a great propensity to spread itself on external objects, and to conjoin with them any internal impressions, which they occasion, and which always make their appearance at the same time that these objects discover themselves to the senses.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-03-30T22:54:25.462Z · score: 2 (2 votes) · LW · GW

Humean, yep.

comment by Vladimir_Nesov · 2011-03-30T10:54:46.492Z · score: 2 (2 votes) · LW · GW


comment by Normal_Anomaly · 2011-04-02T02:04:26.536Z · score: 0 (0 votes) · LW · GW

Aubrey de Grey's talk is good too. Unfortunately, he relies heavily on a PowerPoint presentation that I can't see.