For those in the Stanford AI-class

post by Dr_Manhattan · 2011-12-10T22:54:38.294Z · LW · GW · Legacy · 9 comments

I posted an obligatory FAI question for the Thrun/Norvig office hours. You can vote for it here: https://www.ai-class.com/sforum/?modSeries=19a660&modTopic=45&modSubmission=5cabd7. Only the top questions will be answered :)

ETA: It's now the top question in its category by a nice margin!

9 comments


comment by XiXiDu · 2011-12-11T12:14:57.128Z · LW(p) · GW(p)

I wrote to Norvig about friendly AI issues in June, but he never answered my email. I just wrote to Sebastian Thrun. So if this question doesn't make it through, maybe my email will.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2011-12-11T16:24:25.888Z · LW(p) · GW(p)

It already has the most votes in the Misc category. It will be hard to ignore.

comment by Curiouskid · 2011-12-11T02:12:49.134Z · LW(p) · GW(p)

Are there transcripts for the Stanford videos?

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2011-12-11T02:19:54.009Z · LW(p) · GW(p)

The Q&A sessions are on YouTube. Not sure if there are textual transcripts. For the class itself, there is this: http://www.wonderwhy-er.com/ai-class/.

comment by lukeprog · 2011-12-10T23:00:08.374Z · LW(p) · GW(p)

Thanks. Everyone please go vote for this. I just did.

comment by Suryc11 · 2011-12-12T02:38:20.026Z · LW(p) · GW(p)

One response (unnamed): What can be done is that you can become involved yourself. I am aware of a major open source project that will be announced shortly that will have a major ethical component (I don't use the term "Friendly AI" because SIAI's proposal is shortsighted, unwise, and unethical). Contact me via my blog at http://becominggaia.wordpress.com/papers/ I (it) can point you to a number of conferences and resources as well.

Shortsighted, unwise, and unethical?

Anyone heard of this blog before?

Replies from: DanPeverley, Nick_Tarleton
comment by DanPeverley · 2011-12-12T04:24:50.179Z · LW(p) · GW(p)

"Becominggaia" as a website name set off my alarms, and it only got worse when I read one of his papers. The author appears to believe that greater intelligence is necessarily equated with greater benevolence, thus limiting any need for fear of existential risk. I skimmed though, and I admit I was a bit biased from the onset by the use of Kant's philosophy, so draw your own conclusions.

Replies from: lukstafi
comment by lukstafi · 2011-12-25T22:59:36.592Z · LW(p) · GW(p)

[...] so draw your own conclusions.

E.g. from http://vimeo.com/33767396 (slides: http://bicasociety.org/2011/materials/Waser.pptx), in case you prefer listening to reading.

comment by Nick_Tarleton · 2011-12-12T08:33:20.213Z · LW(p) · GW(p)

Anyone heard of this blog before?

No, but it doesn't seem worth paying attention to.