LDL 8: A Silly Realization

post by magfrump · 2017-12-06T02:20:44.216Z · LW · GW · 1 comment


Maybe deep learning researchers are just like mathematicians. They think of coding as the hard part of what they do, so they spend a lot of time teaching people how to implement neural nets from scratch. But the hard part of what they actually do is math (or at least keeping track of tensor shapes).
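To make the shape-tracking point concrete, here's a sketch (my own illustration, not anything from a course) of the bookkeeping involved in pushing a batch through even a trivial network:

```python
import numpy as np

# Trace a batch of MNIST-sized images through a flatten + dense step,
# keeping the shape bookkeeping explicit at every line.
batch = np.zeros((32, 28, 28, 1))   # (batch, height, width, channels)
flat = batch.reshape(32, -1)        # (32, 784): flatten each image
weights = np.zeros((784, 128))      # dense layer mapping 784 -> 128
hidden = flat @ weights             # (32, 784) @ (784, 128) -> (32, 128)
print(hidden.shape)                 # (32, 128)
```

Get any one of those shapes wrong, say a (783, 128) weight matrix, and the matrix multiply raises a ValueError. That bookkeeping, not the code itself, is the real work.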

So I'm frustrated that I've heard a dozen explanations of gradient descent, which I understood in high school, but I have to learn about variational autoencoders from scratch by reading academic papers.
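For contrast, here is roughly all there is to plain gradient descent, as a minimal sketch on a toy quadratic objective of my choosing:

```python
# Minimize f(x) = (x - 3)^2 by plain gradient descent.
def grad(x):
    return 2 * (x - 3)   # derivative of f

x = 0.0                  # starting point
lr = 0.1                 # learning rate
for _ in range(100):
    x -= lr * grad(x)    # step downhill along the negative gradient

print(x)                 # converges to ~3.0, the minimizer
```

A dozen tutorials cover that; approximately zero walk you through a variational autoencoder at the same level of care.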

1 comment


comment by mraxilus · 2017-12-06T09:12:11.991Z · LW(p) · GW(p)

I would agree; a lot of DL concepts became much clearer after I had worked with tensors enough to be able to manipulate them in my mind. Before that I just sort of made sure the shapes matched up and hoped for the best. Also, you tend to see a lot of beginner questions and confusion about tensor shape mismatches in DL libraries (Keras especially), suggesting that it is quite a common problem area.
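For instance (an illustrative snippet of my own, assuming TensorFlow's Keras), the classic beginner error looks like this:

```python
import numpy as np
from tensorflow import keras

# Dense layers expect flat feature vectors: input_shape=(784,).
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])

x = np.zeros((32, 28, 28))  # forgot to flatten: (32, 28, 28), not (32, 784)
model.predict(x)            # raises a shape-mismatch ValueError
```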