A Brief Rant On The Future Of Interaction Design [link]

post by Kevin · 2011-11-11T09:15:32.070Z · score: 1 (10 votes) · LW · GW · Legacy · 6 comments



Comments sorted by top scores.

comment by dbaupp · 2011-11-11T09:29:28.999Z · score: 4 (4 votes) · LW(p) · GW(p)

Is there any particular aspect of this that is most interesting/relevant to LW?

comment by RichardKennaway · 2011-11-11T11:41:04.162Z · score: 1 (1 votes) · LW(p) · GW(p)

Seconded. I was familiar with that website already, and for anyone interested in interaction design it's absolutely worth reading, but I don't see any specific rationality relevance.

comment by NancyLebovitz · 2011-11-11T18:55:57.986Z · score: 0 (0 votes) · LW(p) · GW(p)

Check on whether the usual way things are done might be leaving something important out?

comment by vi21maobk9vp · 2011-11-11T19:04:46.211Z · score: 0 (0 votes) · LW(p) · GW(p)

More than most of us would like to admit... This rant explains one of the things we must pay attention to if we want to do effective intelligence amplification. And intelligence amplification is a thing that moves you one step ahead - either in building AGI or in coping with our technical/scientific/conceptual level being insufficient.

comment by Raemon · 2011-11-11T15:04:38.919Z · score: 0 (0 votes) · LW(p) · GW(p)

I think it was approximately as relevant as AI should be (which is not very, technically, but it's inspiring and there are ample opportunities to tie in rationality lessons - in this case, how to figure out what your terminal goal actually should be and make long term plans around it).

Whether or not it was relevant, I was glad it was linked.

comment by NancyLebovitz · 2011-11-12T04:49:04.970Z · score: 1 (1 votes) · LW(p) · GW(p)

There's a TED talk about brains having evolved to control movement, and I was planning to post it to this thread even before Bayes got mentioned.