[LINK]: Interview with Daniel Kahneman

post by bradm · 2011-11-29T20:02:55.119Z · LW · GW · Legacy · 2 comments


Here is a Q & A with Daniel Kahneman.  He gives a brief answer to a question about heuristics and AI:


Q. With the launch of Siri and a stated aim to be using the data collected to improve the performance of its AI, should we expect these types of quasi-intelligences to develop the same behavioral foibles that we exhibit, or should we expect something completely different? And if something different, would that something be more likely to reflect the old “rational” assumptions of behavior, or some totally other emergent set of biases and quirks based on its own underlying architecture? My money’s on emergent weirdness, but then, I don’t have a Nobel Prize. -Peter Bennett

A. Emergent weirdness is a good bet. Only deduction is certain. Whenever an inductive short-cut is applied, you can search for cases in which it will fail. It is always useful to ask “What relevant factors are not considered?” and “What irrelevant factors affect the conclusions?” By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.

2 comments


comment by Filipe · 2011-12-02T00:34:33.668Z · LW(p) · GW(p)

What about this:

"Q. So of course there’s been a whole slew of research showing that we are quite irrational and prone to errors in our thinking. Has there been research to help us be more rational?-T

A. Yes, of course, many have tried. I don’t believe that self-help is likely to succeed, though it is a pretty good idea to slow down when the stakes are high. (And even the value of that advice has been questioned.) Improving decision-making is more likely to work in organizations (together with Olivier Sibony and Dan Lovallo, I published an attempt in that direction in the Harvard Business Review in June 2011.)"

comment by djcb · 2011-11-30T07:28:45.614Z · LW(p) · GW(p)

Ah, that was interesting, thanks!

I am reading Kahneman's recent Thinking, Fast and Slow (TFaS) right now, and I am thoroughly enjoying it. As much as I enjoyed Ariely's books, The Invisible Gorilla, and a bunch of similar books, TFaS tackles the same questions in a much more thorough, more fundamental way - without ever becoming dry or 'academic'.

IMHO a must-read for LW-readers, I propose adding TFaS to the Sequences :-)