When To Stop
post by Alok Singh (OldManNick) · 2023-02-09T09:10:05.978Z
(Old notes from 2016 that I stumbled upon).
- Train on everything, fine-tune if needed. If it's big enough, even the tuning seems pointless. It's not like humans need much tuning for everything. Why do we train a network and then throw it away each time to start anew?
- Why have separate modalities? That’s really dumb. Info is info, just give it all at once.
- Cloze deletion should be the network's default task (sketch of the task below)
- Why bother cleaning data so much? Just dump more and more of it in, like those pots of never-ending soup
- I wonder if all this will finally teach us about complex systems
- If it picks up language, we could just try talking to it
- I bet people will say it's "not real intelligence". whatever bro, it's more coherent than you
- All these architectures don’t seem to matter that much
- If it ever learns reasoning, I feel like we're screwed
- normalization seems really important but batch norm is weird
- statistical learning theory doesn’t seem helpful
- why can a network be compressed so much?
- is parity the hardest function to learn? (sketch of the task below)
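The cloze-deletion bullet is essentially what masked-token prediction later became. Below is a minimal sketch of that objective, assuming a toy whitespace tokenizer, an arbitrary mask rate, and a made-up `[MASK]` symbol; none of these specifics come from the notes themselves.

```python
import random

# Minimal sketch of cloze deletion as a training objective: blank out a
# fraction of tokens and ask the model to fill them back in. The mask rate
# and [MASK] symbol are illustrative assumptions, not from the post.

MASK = "[MASK]"

def make_cloze_example(tokens, mask_rate=0.15, rng=random):
    """Return (input_tokens, targets), where targets maps each masked
    position to the original token the model should recover."""
    inputs, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            inputs.append(MASK)
            targets[i] = tok
        else:
            inputs.append(tok)
    return inputs, targets

if __name__ == "__main__":
    sentence = "why bother cleaning data so much just dump more of it".split()
    x, y = make_cloze_example(sentence, mask_rate=0.3, rng=random.Random(0))
    print(x)  # tokens with some positions replaced by [MASK]
    print(y)  # {position: original token} that the network is trained to predict
```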
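And for the parity bullet: the question is usually posed as learning the XOR of some hidden subset of input bits (the arXiv paper linked in the comments below studies a sparse version of this, if I recall correctly). A minimal sketch of the task, with the bit-width, subset size, and sample count chosen arbitrarily for illustration:

```python
import random

# Minimal sketch of the parity learning task: the label is the XOR (sum mod 2)
# of a fixed hidden subset of input bits. n_bits, k, and n_samples below are
# arbitrary illustrative choices.

def make_parity_dataset(n_bits=20, k=3, n_samples=1000, seed=0):
    rng = random.Random(seed)
    relevant = rng.sample(range(n_bits), k)  # hidden subset defining the parity
    data = []
    for _ in range(n_samples):
        x = [rng.randint(0, 1) for _ in range(n_bits)]
        y = sum(x[i] for i in relevant) % 2   # parity of the relevant bits
        data.append((x, y))
    return data, relevant

if __name__ == "__main__":
    data, relevant = make_parity_dataset()
    # Flipping any single relevant bit flips the label, so no small set of
    # coordinates is individually correlated with y -- roughly why parity is
    # a stress test for gradient-based learners.
    print("relevant bits:", relevant)
    print("first example:", data[0])
```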
5 comments
Comments sorted by top scores.
comment by janus · 2023-02-09T10:57:01.292Z
Heh. Prescient.
I've added an excerpt from this to generative.ink/prophecies.
comment by Alok Singh (OldManNick) · 2023-02-09T18:19:51.665Z
I still wonder about the parity prediction these days. I feel like there's something there.
comment by jsd · 2023-03-11T07:43:21.486Z
You may enjoy: https://arxiv.org/abs/2207.08799
comment by DragonGod · 2023-02-10T11:55:24.071Z
> statistical learning theory doesn't seem helpful
Could you explain this?
comment by Alok Singh (OldManNick) · 2023-05-23T19:48:00.007Z
The functional analysis is mildly helpful for understanding the problem, but the focus of the field doesn't seem to be on anything useful. VC dimension is the usual thing to poke fun at, but a lot of the work on regularization is also meh.