What were the biggest discoveries / innovations in AI and ML?
post by Eli Tyre (elityre) · 2020-01-06T07:42:11.048Z · LW · GW
This is a question post.
These could be theoretical breakthroughs (like "the idea of a perceptron" or [something Judea Pearl did]), or they could be watershed developments / new applications that don't necessarily involve much new theory (like AlexNet or AlphaGo). Anything that seems like an important development in AI, is fair game.
I want an independently-generated list of all the interesting developments of AI, over the whole life of the fields, for a research project that I'm working on.
Feel free to include ones that you, personally, think were a big deal in some way, even if most people don't think so.
Thanks!
Answers
Check the section called "derivations" at http://mediangroup.org/insights: it links to a document attempting to list all conceptual breakthroughs in AI above a certain level of significance, over the field's entire history. Related discussion of the forecasting implications is here: https://ai.metaculus.com/questions/2920/will-the-growth-rate-of-conceptual-ai-insights-remain-linear-in-2019/
↑ comment by Eli Tyre (elityre) · 2020-01-06T09:28:49.061Z · LW(p) · GW(p)
Ooo. Thank you!
- Using GPUs for computing.
These are the two big ones. Yes, there are some others, but those two ideas together are the backbone of the current AI and ML boom.
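To make the GPU point concrete, here's a minimal sketch (my illustration, not part of the original answer, assuming PyTorch with CUDA available): the same matrix multiply on CPU and GPU. Dense matrix multiplies like this dominate neural-net training, and they are exactly the workload GPUs parallelize well.

```python
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# Matrix multiply on the CPU.
c_cpu = a @ b

# Move the operands to the GPU and repeat; the massively parallel matmul is
# the kind of workload that made GPUs transformative for neural-net training.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    c_gpu = a_gpu @ b_gpu
```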
AlexNet in 2012. I'm not super clear on the details, but it seems to be the first time a deep neural net substantially outperformed other AI methods on a major benchmark (the ImageNet competition), thereby kicking off the deep learning revolution.
NeurIPS best paper awards will likely contain good leads.
Deep Blue: a chess engine defeats the reigning world chess champion, Garry Kasparov, in 1997.
From A 'Brief' History of Neural Nets and Deep Learning, Part 4:
So, why indeed, did purely supervised learning with backpropagation not work well in the past? Geoffrey Hinton summarized the findings up to today in these four points:
- Our labeled datasets were thousands of times too small.
- Our computers were millions of times too slow.
- We initialized the weights in a stupid way.
- We used the wrong type of non-linearity.
I think this blog series might help provide a partial answer to your question.
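To make the last two of Hinton's points concrete, here's a minimal sketch (my illustration, not from the blog post, assuming PyTorch; the layer sizes are arbitrary) contrasting the old recipe, sigmoid activations with naive initialization, against the modern fixes: ReLU plus He (Kaiming) initialization.

```python
import torch
import torch.nn as nn

# The "wrong" non-linearity: sigmoids saturate, so gradients vanish as
# networks get deeper.
old_style = nn.Sequential(
    nn.Linear(784, 256), nn.Sigmoid(),
    nn.Linear(256, 256), nn.Sigmoid(),
    nn.Linear(256, 10),
)

# The modern fixes: ReLU keeps gradients alive, and He (Kaiming) init scales
# the weights to preserve activation variance from layer to layer.
modern = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
for layer in modern:
    if isinstance(layer, nn.Linear):
        nn.init.kaiming_normal_(layer.weight, nonlinearity="relu")
        nn.init.zeros_(layer.bias)
```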
Frank Rosenblatt develops the perceptron algorithm.
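For concreteness, here's a minimal sketch of the perceptron learning rule (my illustration, assuming NumPy and labels in {-1, +1}): on each misclassified example, nudge the weights toward the correct side of the decision boundary.

```python
import numpy as np

def train_perceptron(X, y, epochs=10):
    """X: (n_samples, n_features) array; y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Misclassified (or on the boundary): apply the update rule.
            if yi * (xi @ w + b) <= 0:
                w += yi * xi
                b += yi
    return w, b

# Toy usage: AND-like data mapped to {-1, +1}, which is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
preds = np.sign(X @ w + b)
```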
No comments