Posts
Comments
I've never heard of cyclic peptides. But I suppose it is possible.
Actually the HPLC study we cite talks about a similar possibility - they say the observed effects may be due to "cryptic peptides", perhaps created by the lysing of larger proteins and peptides. It just seems very unlikely to me. To me, based on my metascience research, it's a bit easier to believe that the published literature is just plain wrong. Check out the HPLC paper - it's the only one we found doing that sort of analysis but it's very detailed (https://analyticalsciencejournals.onlinelibrary.wiley.com/doi/abs/10.1002/dta.1817).
In the case of Bryan Johnson he injected it in his glute. Others inject in the thigh (there's a how-to video about this on YouTube). Apparently some biohackers online think that by injecting in the shoulder more will reach the brain, because the shoulder is closer to the head, according to a podcast I listened to LOL (the podcaster agreed this made a lot of sense).
Uh this is kinda embarrassing but I just didn't really read it at all honestly..
Q: when was prion disease discovered
A: "Prion disease was first recognized as a distinct category of neurodegenerative disorders in the 1980s, though the groundwork for its discovery was laid earlier.
Key milestones in the discovery of prion diseases include:
- 1920s: Creutzfeldt-Jakob Disease (CJD) was first described by Hans Gerhard Creutzfeldt and Alfons Maria Jakob.
- 1950s: Carleton Gajdusek began studying kuru, a disease affecting the Fore people of Papua New Guinea.
- 1960s: Gajdusek demonstrated that kuru could be transmitted to chimpanzees, suggesting an infectious etiology.
- 1982: Stanley Prusiner coined the term "prion" (for proteinaceous infectious particle) and proposed the protein-only hypothesis for these infectious agents.
- 1997: Prusiner was awarded the Nobel Prize in Physiology or Medicine for his work on prions.
The concept of prions as the causative agents of these diseases was revolutionary because it challenged the central dogma of biology, suggesting that proteins alone, without nucleic acids, could be infectious.
Would you like me to elaborate on any specific aspect of prion disease discovery?"
You have an eagle eye Carl!... I actually asked Claude (the AI) about this when I was writing it, but I was very sloppy and didn't read Claude's answer carefully. Just fixed it. I should be more careful..
Great, I look forward to meeting you there!
hah... actually not a bad idea... too late now. BTW the recording will be available eventually if you're interested.
Hi, organizer here. I just saw your message now right after the event. There were a couple people from Microsoft there but I'm not sure if they were interested in alignment research. This was mostly a general audience at this event, mostly coming through the website AIcamp.ai. We also had some people from the local ACX meetup and transhumanist meetup. PS: I sent you an invitation to connect on LinkedIn, let's stay in touch (I'm https://www.linkedin.com/in/danielelton/).
Unfortunately they have a policy that they check ID at the door and only allow those over 21 in. I'm going to update the post now to make this clear. Even when the outdoor patio is open it's still only 21+.
The way I would describe it now is there's a large bar in the main room, and then there's a side room (which is also quite large) with a place that serves Venezuelan food (very good), and Somerville Chocolate (they make and sell chocolate there).
The age restriction has never been a problem in the past, although I do vaguely recall someone mentioning it once. I'm going to try to have future meetups I run at a public library (probably Cambridge Public Library); it's just tricky getting the room reservations sometimes. We have also been thinking of trying out the food court in the CambridgeSide mall, although the tables there are rather small and I don't think they can be moved and joined together (from what I remember).
Sorry for the late reply. In the future we will try to have a Zoom option for big events like this.
We did manage to record it, but the audio isn't great (and we didn't cover the Q&A)
This is pretty interesting.. any outcome you can share? (I'll bug you about this next time I see you in person so you can just tell me then rather than responding, if you'd like)
Good idea to just use the time you fall asleep rather than the sleep stage tracking, which isn't very accurate. I think the most interesting metric is just boring old total sleep time (unfortunately sleep trackers in my experience are really bad at actually capturing sleep quality... but I suppose if there's a sleep quality score you have found useful that might be interesting to look at also). Something else I've noticed is that by looking at the heart rate you can often get a more accurate idea of when you fell asleep and woke up.
I would modify the theory slightly by noting that the brain may become hypersensitive to sensations arising from the area that was originally damaged, even after it has healed. Sensations that are otherwise normal can then trigger pain. I went to the website about pain reprocessing therapy and stumbled upon an interview with Alan Gordon where he talked about this. I suspect that high level beliefs about tissue damage etc play a role here also in causing the brain to become hyper focused on sensations coming from a particular region and to interpret them as painful.
Something else that comes to mind here is the rubber hand illusion. Watch this video - and look at the flinches! Interesting, eh?
edit: (ok, the rubber hand illusion isn't clearly related, but it's interesting!)
That's really cool, thanks for sharing!
Since nobody else posted these:
Bay Area is Sat Dec 17th (Eventbrite) (Facebook)
South Florida (about an hour north of Miami) is Sat Dec 17th (Eventbrite) (Facebook)
On current hardware, sure.
It does look like scaling will hit a wall soon if hardware doesn't improve, see this paper: https://arxiv.org/abs/2007.05558
But Gwern has responded to this paper pointing out several flaws... (having trouble finding his response right now..ugh)
However, we have lots of reasons to think Moore's law will continue ... in particular future AI will be on custom ASICs / TPUs / neuromorphic chips, which is a very different story. I wrote about this long ago, in 2015. Such chips, especially asynchronous and analog ones, can be vastly more energy efficient.
I disagree, in fact I actually think you can argue this development points the opposite direction, when you look at what they had to do to achieve it and the architecture they use.
I suggest you read Ernest Davis' overview of Cicero. Cicero is a special-purpose system that took enormous work to produce -- a team of multiple people labored on it for three years. They had to assemble a massive dataset from 125,300 online human games. They also had to get expert annotations on thousands of preliminary outputs. Even that was not enough.. they had to generate synthetic datasets as well to fix issues with the system! Even then, the dialogue module required a specialized filter to remove nonsense. This is a break from the scaling idea that says to solve new problems you just need to scale existing architectures to more parameters (and train on a large enough dataset).
Additionally, they argue that this system appears very unlikely to generalize to other problems, or even to slight modifications of the game of Diplomacy. It's not even clear how well it would generalize to non-blitz games. If the rules were modified slightly, the entire system would likely have to be retrained.
I also want to point out that scientific research is not as easy as you make it sound. Professors spend the bulk of their time writing proposals, so perhaps AI could help there by summarizing existing literature. Note, though, that a typical paper, even a low-value one, generally takes a graduate student with specialized training about a year to complete, assuming the experimental apparatus and other necessary infrastructure are all in place. Not all science is data-driven, either; science can also be observation-driven or theory-driven.
I've looked into these methods a lot, in 2020 (I'm not so much up to date on the latest literature). I wrote a review in my 2020 paper, "Self-explaining AI as an alternative to interpretable AI".
There are a lot of issues with saliency mapping techniques, as you are aware (I saw you link to the "sanity checks" paper below). Funnily enough, though, the super simple technique of occlusion mapping does seem to work very well! It's kinda hilarious actually that there are so many complicated mathematical techniques for saliency mapping, yet I have seen no good arguments as to why they are better than just occlusion mapping. I think this is a symptom of people optimizing for paper publishing and trying to impress reviewers with novelty and math rather than actually building stuff that is useful.
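For the curious, occlusion mapping really is as simple as it sounds. Here's a minimal NumPy sketch, where the `model` callable, patch size, and the toy stand-in model at the bottom are all my own illustrative assumptions (a real run would pass in your trained classifier):

```python
import numpy as np

def occlusion_map(model, image, target_class, patch=8, fill=0.0):
    """Model-agnostic saliency: slide an occluding patch over the image
    and record how much the target-class score drops at each location."""
    h, w = image.shape[:2]
    baseline = model(image)[target_class]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill  # mask one region
            heat[i // patch, j // patch] = baseline - model(occluded)[target_class]
    return heat  # high values = regions the prediction depends on

# Toy stand-in "model": the class-0 score is just the mean of the top-left corner,
# so all the saliency should land on the patch covering that corner.
toy_model = lambda img: [img[:8, :8].mean()]
heat = occlusion_map(toy_model, np.ones((16, 16)), target_class=0)
```

No gradients, no access to internals - which is exactly why it's hard to argue the fancier methods earn their complexity.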
You may find this interesting: "Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization". What they show is that a very simple model-agnostic technique (finding the image that maximizes an output) allows people to make better predictions about how a CNN will behave than Olah's activation maximization method, which produces images that can be hard to understand. This is exactly the sort of empirical testing I suggested in my Less Wrong post from Nov last year.
The comparison isn't super fair because Olah's techniques were designed for detailed mechanistic understanding, not allowing users to quickly be able to predict CNN behaviour. But it does show that simple techniques can have utility for helping users understand at a high level how an AI works.
A world simulator of some sort is probably going to be an important component in any AGI, at the very least for planning - Yann LeCun has talked about this a lot. There's also this work where they show a VAE type thing can be configured to run internal simulations of the environment it was trained on.
In brief, a few issues I see here:
- You haven't actually provided any evidence that GPT does simulation other than "Just saying “this AI is a simulator” naturalizes many of the counterintuitive properties of GPT which don’t usually become apparent to people until they’ve had a lot of hands-on experience with generating text." What counterintuitive properties, exactly? Examples I've seen show GPT-3 is not simulating the environment being described in the text. I've seen a lot of impressive examples too, but I find it hard to draw conclusions on how the model works by just reading lots and lots of outputs... I wonder what experiments could be done to test your idea that it's running a simulation.
- Even for processes that are very simple to simulate, such as addition or symbol substitution, GPT has, in my view, trouble learning them, even though it does grok those things eventually. For things like multiplication, its accuracy depends on how often the numbers appear in the training data (https://arxiv.org/abs/2202.07206), which is a bit telling, I think.
- Simulating the laws of physics is really hard... trust me on this (I did a Ph.D. in molecular dynamics simulation). If it's doing any simulation at all, it's got to be some high level heuristic type stuff. If it's really good, it might be capable of simulating basic geometric constraints (although IIRC GPT is not great at spatial reasoning). Even humans are really bad at simulating physics accurately (researchers have found that most people do really poorly on tests of basic physics reasoning, like basic kinematics - will this ball curve left, curve right, or go straight, etc.). I imagine gradient descent is going to be much more likely to settle on shortcut rules and heuristics rather than implementing a complex simulation.
Piperine (black pepper extract) can help make quercetin more bioavailable. They are co-administered in many studies on the neuroprotective effects of quercetin: https://scholar.google.com/scholar?hl=en&as_sdt=0,22&q=piperine+quercetin
I find slower take-off scenarios more plausible. I like the general thrust of Christiano's "What failure looks like". I wonder if anyone has written up a more narrative / concrete account of that sort of scenario.
The thing you are trying to study ("returns on cognitive reinvestment") is probably one of the hardest things in the world to understand scientifically. It requires understanding both the capabilities of specific self-modifying agents and the complexity of the world. It depends what problem you are focusing on too -- the shape of the curve may be very different for chess vs something like curing disease. Why? Because chess I can simulate on a computer, so throwing more compute at it leads to some returns. I can't simulate human biology in a computer - we have to actually have people in labs doing complicated experiments just to understand one tiny bit of human biology.. so having more compute / cognitive power in any given agent isn't necessarily going to speed things along.. you also need a way of manipulating things in labs (either humans or robots doing lots of experiments). Maybe in the future an AI could read massive numbers of scientific papers and synthesize them into new insights, but precisely what sort of "cognitive engine" is required to do that is also very controversial (could GPT-N do it?).
Are you familiar with the debate about Bloom et al and whether ideas are getting harder to find? (https://guzey.com/economics/bloom/ , https://www.cold-takes.com/why-it-matters-if-ideas-get-harder-to-find/). That's relevant to predicting take-off.
The other post I always point people to is this one by Chollet.
I don't necessarily agree with it but I found it stimulating and helpful for understanding some of the complexities here.
So basically, this is a really complex thing.. throwing some definitions and math at it isn't going to be very useful, I'm sorry to say. Throwing math and definitions at stuff is easy. Modeling data by fitting functions is easy. Neither is very useful in terms of actually being able to predict in novel situations (ie extrapolation / generalization), which is what we need to predict AI take-off dynamics. Actually understanding things mechanistically and coming up with explanatory theories that can withstand criticism and repeated experimental tests is very hard. That's why typically people break hard questions/problems down into easier sub-questions/problems.
How familiar are you with Chollet's paper "On the Measure of Intelligence"? He disagrees a bit with the idea of "AGI" but if you operationalize it as "skill acquisition efficiency at the level of a human" then he has a test called ARC which purports to measure when AI has achieved human-like generality.
This seems to be a good direction, in my opinion. There is an ARC challenge on Kaggle and so far AI is far below the human level. On the other hand, "being good at a lot of different things", ie task performance across one or many tasks, is obviously very important to understand and Chollet's definition is independent from that.
Thanks, it's been fixed!!
Interesting, thanks. 10x reduction in cost every 4 years is roughly twice what I would have expected. But it sounds quite plausible especially considering AI accelerators and ASICs.
Thanks for sharing! That's a pretty sophisticated modeling function but it makes sense. I personally think Moore's law (the FLOPS/$ version) will continue, but I know there's a lot of skepticism about that.
Could you make another graph like Fig 4 but showing projected cost, using Moore's law to estimate cost? The cost is going to be a lot, right?
Networks with loops are much harder to train.. that was one of the motivations for going to transformers instead of RNNs. But yeah, sure, I agree. My objection is more that posts like this are so high level I have trouble following the argument, if that makes sense. The argument seems roughly plausible but not making contact with any real object level stuff makes it a lot weaker, at least to me. The argument seems to rely on "emergence of self-awareness / discovery of malevolence/deception during SGD" being likely which is unjustified in my view. I'm not saying the argument is wrong, more that I personally don't find it very convincing.
Has GPT-3 / large transformers actually led to anything with economic value? Not from what I can tell, although anecdotal reports on Twitter are that many SWEs are finding GitHub Copilot extremely useful (it's still in private beta, though). I think transformers are going to start providing actual value soon, but the fact that they haven't so far, despite almost two years of breathless hype, is interesting to contemplate. I've learned to ignore hype, demos, cool cherry-picked sample outputs, and benchmark chasing, and to actually look at what is being deployed "in the real world" and bringing value to people. So many systems that looked amazing in papers or demos have flopped when deployed - even from top firms - for instance Microsoft's Tay and Google Health's system for detecting diabetic retinopathy. Another example is Google's Duplex. And for how long have we heard about burger flipping robots taking people's jobs?
There are reasons to be skeptical about a scaled up GPT leading to AGI. I touched on some of those points here. There's also an argument that the hardware costs are going to balloon so quickly as to make the entire project economically unfeasible, but I'm pretty skeptical about that.
I'm more worried about someone reverse engineering the wiring of cortical columns in the neocortex in the next few years and then replicating it in silicon.
Long story short, is existentially dangerous AI imminent? Not as far as we can see right now, knowing what we know right now (we can't see that far into the future, since it depends on discoveries and scientific knowledge we don't have). Could that change quickly? Yes. There is Knightian uncertainty here, I think (to use a concept that LessWrongers generally hate lol).
I'm interested!
This is a shot in the dark, but I recall there was a blog post that made basically the same point visually, I believe using Gaussian distributions. I think the number they argued you should aim for was 3-4 instead of 6. Anyone know what I'm talking about?
Hi, I just wanted to say thanks for the comment / feedback. Yeah, I probably should have separated out the analysis of Grokking from the analysis of emergent behaviour during scaling. They are potentially related - at least for many tasks it seems Grokking becomes more likely as the model gets bigger. I'm guilty of actually conflating the two phenomena in some of my thinking, admittedly.
Your point about "fragile metrics" being more likely to show Grokking is great. I had a similar thought, too.
I think a bit too much mindshare is being spent on these sci-fi scenario discussions, although they are fun.
Honestly I have trouble following these arguments about deception evolving in RL. In particular I can't quite wrap my head around how the agent ends up optimizing for something else (not a proxy objective, but a possibly totally orthogonal objective like "please my human masters so I can later do X"). In any case, it seems self awareness is required for the type of deception that you're envisioning. Which brings up an interesting question - can a purely feed-forward network develop self-awareness during training? I don't know about you, but I have trouble picturing it happening unless there is some sort of loop involved.
Zac says "Yes, over the course of training AlphaZero learns many concepts (and develops behaviours) which have clear correspondence with human concepts."
What's the evidence for this? If AlphaZero worked by learning concepts in a sort of step-wise manner, then we should expect jumps in performance when it comes to certain types of puzzles, right? I would guess that a human beginner would exhibit jumps from learning concepts like "control the center" or "castle early, not later"... for instance the principle "control the center", once followed, has implications on how to place knights etc which greatly affect win probability. Is the claim they found such jumps? (eyeing the results, nothing really stands out in the plots).
Or is the claim that the NMF somehow proves that AlphaZero works off concepts? To me that seems suspicious as NMF is looking at weight matrices at a very crude level, it seems.
I ask this partially because I went to a meetup talk (not recorded sadly) where a researcher from MIT showed a go problem that alphaGo can't solve but which even beginner go players can solve, which shows that alphaGo actually doesn't understand things the same way as humans. Hopefully they will publish their work soon so I can show you.
Huh, that's pretty cool, thanks for sharing.
This is pretty interesting. There is a lot to quibble about here, but overall I think the information about bees here is quite valuable for people thinking about where AI is at right now and trying to extrapolate forward.
A different approach, perhaps more illuminating, would be to ask how much of a bee's behavior we could plausibly emulate today by gluing together a bunch of different ML algorithms into some sort of virtual bee cognitive architecture - if, say, we wanted to make a drone that behaved like a bee à la Black Mirror. Obviously that's a much more complicated question, though.
I feel compelled to mention my friend Logan Thrasher Collins' paper, The case for emulating insect brains using anatomical "wiring diagrams" equipped with biophysical models of neuronal activity. He thinks we may be able to emulate the fruit fly brain in about 20 years at near-full accuracy, and this estimate seems quite plausible.
There were a few sections I skipped, if I have time I'll come back and do a more thorough reading and give some more comments.
The compute comparison seems pretty sketchy to me. A bee's visual cortex can classify many different things, and the part responsible for doing the classification task in the few-shot learning study is probably just a small subset. [I think Rohin made a similar point below.] Deep learning models can be pruned somewhat without losing much accuracy, but generally all the parameters are used. Another wrinkle is that the rate of firing activity in the visual cortex depends on the input, although there is a baseline rate too. The point I'm getting at is that it's sort of an apples-to-oranges comparison. If the bee only had to do the one task in the study to survive, evolution probably would have found a much more economical way of doing it, with far fewer neurons.
My other big quibble is that I would have made it transparent that Cotra's biological anchors method for forecasting TAI assumes we will know the right algorithm before the hardware becomes available. That is a big, questionable assumption and thus should be stated clearly. Arguably, algorithmic advancement in AI at the level of core algorithms (not ML-ops / dev-ops / GPU coding) is actually quite slow. In any case, it just seems very hard to predict algorithmic advancement. Plausibly a team at DeepMind might discover the key cortical learning algorithm underlying human intelligence tomorrow, but there are also reasons to think it could take decades.
Another point is that when you optimize relentlessly for one thing, you might have trouble exploring the space adequately (you get stuck at local maxima). That's why RL agents/algorithms often take random actions when they are training (this is called "exploration", as opposed to "exploitation"). Maybe random actions can be thought of as a form of slack? Micro-slacks?
Look at Kenneth Stanley's arguments about why objective functions are bad (video talk on it here). Basically he's saying we need a lot more random exploration. Humans are similar - we have an open-ended drive to explore in addition to drives to optimize a utility function. Of course maybe you can argue the open-ended drive to explore is ultimately in the service of utility optimization, but you can argue the same about slack, too.
Bostrom talks about this in his book "Superintelligence" when he discusses the dangers of Oracle AI. It's a valid concern, we're just a long way from that with GPT-like models, I think.
I used to think a system trained on text only could never learn vision. So if it escaped onto the internet, it would be pretty limited in how it could interface with the outside world since it couldn't interpret streams from cameras. But then I realized that probably in its training data is text on how to program a CNN. So in theory a system trained on only text could build a CNN algorithm inside itself and use that to learn how to interpret vision streams. Theoretically. A lot of stuff is theoretically possible with future AI, but how easy it is to realize in practice is a different story.
I just did some tests... it works if you go to settings and click "Activate Markdown Editor". Then convert to Markdown and re-save (note, you may want to back up before this, there's a chance footnotes and stuff could get messed up).
$stuff$ for inline math and double dollar signs for single line math work when in Markdown mode. When using the normal editor, inline math doesn't work, but $$ works (but puts the equation on a new line).
I have mixed feelings on this. I have mentored ~5 undergraduates in the past 4 years and observed many others, and their research productivity varies enormously. How much of that is due to IQ vs other factors I really have no idea. My personal feeling was most of the variability was due to life factors like the social environment (family/friends) they were ensconced in and how much time that permitted them to focus on research.
My impression from TAing physics for life scientists for two years was that a large number felt they were intrinsically bad at math. That's really bad! We need to be spreading more growth mindset ideas, not the idea that you're limited by your IQ. Or at the very least, the idea that math doesn't have to come naturally or be easy for you to be a scientist or engineer. I struggled with math my entire way through undergrad and my PhD. If the drive I developed as a child to become a scientist wasn't so strong, I'm sure I would have dropped out.
My feeling is we are more bottlenecked on great engineers than scientists. [Also, the linear model (science -> invention -> engineering/innovation) is wrong!] Also, we should bring back inventors - that should be a thing again.
I think it would be awesome if some day 50% of people were engineers and inventors. People with middling IQ can still contribute a lot! Maybe not to theoretical physics, but to many other areas! We hear a lot of gushing things about scientific geniuses, especially on this site and I think we discount the importance of everyday engineers and also people like lab techs and support staff, which are increasingly important as science becomes more multidisciplinary and collaborative.
I liked how in your AISS support talk you used history as a frame for thinking about this because it highlights the difficulty of achieving superhuman ethics. Human ethics (for instance as encoded in laws/rights/norms) is improving over time, but it's been a very slow process that involves a lot of stumbling around and having to run experiments to figure out what works and what doesn't. "The Moral Arc" by Michael Shermer is about the causes of moral progress... one of them is allowing free speech, free flow of ideas. Basically, it seems moral progress requires a culture that supports conjecture and criticism of many ideas - that way you are more likely to find the good ideas. How you get an AI to generate new ideas is anyone's guess - "creativity" in AI is pretty shallow right now - I am not aware of any AI having invented anything useful. (There have been news reports about AI systems that have found new drugs, but the ones I've seen were actually later called out as just slight modifications of existing drugs that were in their training data and thus they were not super creative).
To be honest I only read sections I-III of this post.
I have a comment on this:
An even more speculative thing to try would be auto-supervision. A language model can not only be asked to generate text about ethical dilemmas, it can also be asked to generate text about how good different responses to ethical dilemmas are, and the valence of the response can be used as a reinforcement signal on the object-level decision.
This is a nice idea. It's easy to implement and my guess is it should improve consistency. I actually saw something similar done in computer vision - someone took the labels generated by a CNN on a previously unlabeled dataset and then used those to fine-tune the CNN. Surprisingly, the result was a slightly better model. I think what that process does is encourage consistency across a larger swath of data. I'm having trouble finding the paper right now, however, and I have no idea if the result replicated. If you would like I can try to find it - I think it was in the medical imaging domain, where data labeled with ground truth labels is scarce, so if you can train on autogenerated ("weak") labels then that is super useful.
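If anyone wants to play with the idea, here's a toy sketch of one round of that kind of pseudo-label self-training. The nearest-centroid "model" and the 0.6 confidence threshold are stand-ins I made up to keep it self-contained; the work I'm remembering used CNNs:

```python
import numpy as np

def fit_centroids(X, y):
    """Toy classifier: one centroid per class."""
    return np.array([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict_proba(centroids, X):
    """Softmax over negative distances to each centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

def self_train(X_lab, y_lab, X_unlab, threshold=0.6):
    """One round of pseudo-labeling: train on the labeled data, label the
    unlabeled pool with confident predictions, then retrain on the union."""
    centroids = fit_centroids(X_lab, y_lab)
    probs = predict_proba(centroids, X_unlab)
    keep = probs.max(axis=1) >= threshold   # only confident pseudo-labels
    pseudo = probs.argmax(axis=1)[keep]
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, pseudo])
    return fit_centroids(X_aug, y_aug)

# Demo: one labeled point per cluster, plus an unlabeled pool near each
X_lab = np.array([[0.0, 0.0], [5.0, 5.0]])
y_lab = np.array([0, 1])
X_unlab = np.array([[0.5, 0.2], [4.8, 5.1], [5.2, 4.9], [-0.3, 0.1]])
centroids = self_train(X_lab, y_lab, X_unlab)
```

Keeping only the confident pseudo-labels is what (hopefully) makes this a consistency push rather than a way of amplifying the model's own noise.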
It's a mixed bag. A lot of near-term work is scientific, in that theories are proposed and experiments are run to test them, but from what I can tell that work is also incredibly myopic and specific to the details of present-day algorithms, and whether any of it will generalize to systems further down the road is exceedingly unclear.
The early writings of Bostrom and Yudkowsky I would classify as a mix of scientifically informed futurology and philosophy. As with science fiction, they are laying out what might happen. There is no science of psychohistory, and while there are better and worse ways of forecasting the future (see "Superforecasting"), forecasting how future technology will play out is especially hard because future technology depends on knowledge we by definition don't have right now. Still, the work has value even if it is not scientific, by alerting us to what might happen. It is scientifically informed because, at the very least, the futures they describe don't violate any laws of physics. That sort of futurology work I think is very valuable because it explores the landscape of possible futures, letting us identify the futures we don't want and take steps to avoid them, even if the probability of any given future scenario is not clear.
A lot of the other work is pre-paradigmatic, as others have mentioned, but that doesn't make it pseudoscience. Falsifiability is the key to demarcation. The work that borders on pseudoscience revolves heavily around the construction of what I call "free floating" systems. These are theoretical systems that are not tied into existing scientific theory (examples: laws of physics, theory of evolution, theories of cognition, etc) and also not grounded in enough detail that we can test whether the ideas / theories are useful or correct right now. They aren't easily falsifiable. These free-floating sets of ideas tend to be hard for outsiders to learn since they involve a lot of specialized jargon, and because sorting wheat from chaff is hard when people don't subject their work to the rigors of peer review and publication in conferences / journals, which provide valuable signals to outsiders as to what is good or bad (instead we end up with huge lists of Alignment Forum posts and other blog posts and PDFs with no easy way of figuring out what is worth reading). Some of this type of work blends into abstract mathematics. Safety frameworks like iterated distillation & debate, iterated amplification, and a lot of the MIRI work on self-modifying agents seem pretty free-floating to me (some of these ideas may be testable in some sort of absurdly simple toy environment today, but what these toy models tell us about more general scenarios is hard to say without a more general theory). A lot of the futurology stuff is also free floating (a hallmark of free floating stuff is zany large concept maps like here). These free floating things are not worthless, but they also aren't scientific.
Finally, there's much that is philosophy. First, of course, there are debates about ethics. Second, there are debates about how to define heavily used basic terms like intelligence, general vs. narrow intelligence, information, explanation, knowledge, and understanding.
The paper you cited does not show this.
Yeah, you're right I was being sloppy. I just crossed it out.
oo ok, thanks, I'll take a look. The point about generative models being better is something I've been wanting to learn about, in particular.
SGD is a form of efficient approximate Bayesian updating.
Yeah I saw you were arguing that in one of your posts. I'll take a closer look. I honestly have not heard of this before.
Regarding my statement - I agree, looking back at it, it is horribly sloppy and sounds absurd, but when I was writing I was just thinking about how all L1 and L2 regularization do is bias towards smaller weights - the models still take up the same amount of space on disk and require the same amount of compute to run in terms of FLOPs. But yes, you're right that they make the models easier to approximate.
By the way, if you look at Filan et al.'s paper "Clusterability in Neural Networks" there is a lot of variance in their results but generally speaking they find that L1 regularization leads to slightly more clusterability than L2 or dropout.
The idea that using dropout makes models simpler is not intuitive to me because, according to Hinton, dropout essentially does the same thing as ensembling. If what you end up with is something equivalent to an ensemble of smaller networks, then it's not clear to me that would be easier to prune.
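For concreteness, here is a minimal numpy sketch of the mechanism being discussed (standard "inverted" dropout, not any particular paper's variant): each training step masks activations with probability p and rescales the survivors so the expected activation is unchanged, and the ensemble view comes from each random mask selecting a different thinned subnetwork.

```python
import numpy as np

def dropout(x, p, rng, train=True):
    """Inverted dropout: zero each activation with probability p and rescale
    survivors by 1/(1-p) so the expected value is unchanged. At test time
    (train=False) it is the identity, i.e. an implicit average over masks."""
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p  # a different random subnetwork each call
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(10000)
y = dropout(x, p=0.5, rng=rng)
print("mean after dropout:", y.mean())        # close to 1.0 in expectation
print("fraction zeroed:", (y == 0).mean())    # close to p = 0.5
```

Whether this mask-averaging actually yields weights that prune well is exactly the open question above; the sketch only shows the mechanics.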
One of the papers you linked to appears to study dropout in the context of Bayesian modeling and they argue it encourages sparsity. I'm willing to buy that it does in fact reduce complexity/ compressibility but I'm also not sure any of this is 100% clear cut.
(responding to Jacob specifically here) A lot of things that were thought of as "obvious" were later found out to be false in the context of deep learning - for instance the bias-variance trade-off.
I think what you're saying makes sense at a high/rough level, but I'm also worried you are not being rigorous enough. It is true and well known that L2 regularization can be derived from Bayesian neural nets with a Gaussian prior on the weights. However, neural nets in deep learning are trained via SGD, not with Bayesian updating, and it doesn't seem modern CNNs actually approximate their Bayesian cousins that well - otherwise I would think they would be better calibrated. Still, overall I think what you're saying makes sense.
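For reference, the well-known derivation is just MAP estimation with a zero-mean Gaussian prior on the weights:

```latex
\begin{aligned}
w_{\text{MAP}} &= \arg\max_w \,\bigl[\log p(D \mid w) + \log p(w)\bigr],
\qquad p(w) = \mathcal{N}(w;\, 0,\, \sigma^2 I) \\
&= \arg\min_w \,\Bigl[-\log p(D \mid w) + \tfrac{1}{2\sigma^2}\,\lVert w \rVert_2^2\Bigr],
\end{aligned}
```

i.e. an L2 penalty with coefficient λ = 1/(2σ²). Note this only identifies the penalized objective with a MAP point estimate; it says nothing about SGD recovering the full posterior, which is where calibration would have to come from.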
If we were going to really look at this rigorously, we'd have to define what we mean by compressibility too. One way might be via some type of lossy compression using model pruning or some form of distillation. Have there been studies showing models that use dropout can be pruned down more or distilled more easily?
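As a sketch of what one such operationalization might look like (purely illustrative, on a random linear layer rather than a trained network): magnitude-prune increasing fractions of the weights and measure how much the layer's output changes. A model counts as "more compressible" if the error curve stays flat for longer.

```python
import numpy as np

def prune_smallest(w, frac):
    """Zero out the given fraction of weights with smallest magnitude."""
    k = int(frac * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    out = w.copy()
    out[np.abs(out) <= thresh] = 0.0
    return out

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 64))       # stand-in for a trained weight matrix
x = rng.normal(size=(100, 64))      # stand-in for a batch of inputs

base = x @ w.T
for frac in (0.1, 0.5, 0.9):
    pruned_out = x @ prune_smallest(w, frac).T
    err = np.linalg.norm(base - pruned_out) / np.linalg.norm(base)
    print(f"pruned {frac:.0%}: relative output error {err:.3f}")
```

On a real comparison you'd run this on trained models (dropout vs. no dropout) and compare task accuracy rather than raw output error, but the measurement idea is the same.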
Hey, OK, fixed. Sorry there is no link to the comment -- I had a link in an earlier draft but then it got lost. It was a comment somewhere on LessWrong and now I can't find it -_-.
That's interesting it motivated you to join Anthropic - you are definitely not alone in that. My understanding is Anthropic was founded by a bunch of people who were all worried about the possible implications of the scaling laws.
To my knowledge the most used regularization method in deep learning, dropout, doesn't make models simpler in the sense of being more compressible.
A simple L1 regularization would make models more compressible insofar as it suppresses weights towards zero so they can just be thrown out completely without affecting model performance much. I'm not sure about L2 regularization making things more compressible - does it lead to flatter minima, for instance? (GPT-3 uses L2 regularization, which they call "weight decay".)
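A toy numpy sketch of just the penalty dynamics (data loss omitted, so this is only the qualitative picture): the L1 proximal update (soft-thresholding) drives small weights exactly to zero, while the L2 / weight-decay update shrinks weights multiplicatively and never produces exact zeros.

```python
import numpy as np

rng = np.random.default_rng(0)
w0 = rng.normal(size=1000)

lam, lr, steps = 0.01, 0.1, 500

w_l1, w_l2 = w0.copy(), w0.copy()
for _ in range(steps):
    # L1 proximal step (soft-thresholding): subtracts a fixed amount from
    # each |w| and clips at zero, so small weights become exactly zero
    w_l1 = np.sign(w_l1) * np.maximum(np.abs(w_l1) - lr * lam, 0.0)
    # L2 step ("weight decay"): multiplicative shrinkage toward zero,
    # weights get small but never exactly zero
    w_l2 = w_l2 * (1.0 - lr * lam)

print("L1: fraction exactly zero =", (w_l1 == 0).mean())
print("L2: fraction exactly zero =", (w_l2 == 0).mean())
```

This is why the L1 case supports "throw the weights out completely": the zeros are exact, so pruning them is free, whereas after L2 you still have to pick a cutoff.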
But yes, you are right that Occam factors are intrinsic to the process of Bayesian model comparison. However, that's in the context of fully probabilistic models, not comparing deterministic models (i.e., Turing machine programs), which is what is done in Solomonoff induction. In Solomonoff induction, Occam's razor has to be tacked on top.
I didn't state my issues with Solomonoff induction very well, that is something I hope to summarize in a future post.
Overall I think it's not clear that Solomonoff induction actually works very well once you restrict it to a finite prior. If the true program isn't in the prior, for instance, there is no guarantee of convergence - it may just oscillate around forever (the "grain of truth" problem).
There's other problems too (see a list here, the "Background" part of this post by Vanessa Kosoy, as well as Hutter's own open problems).
One of Kosoy's points, I think, is something like this - if an AIXI-like agent has two models that are very similar but one has a weird extra "if then" statement tacked on to help it understand something (like at night the world stops existing and the laws of physics no longer apply, when in actuality the lights in the room just go off) then it may take an extremely long time for an AIXI agent to converge on the correct model because the difference in complexity between the two models is very small.
I think this is a nice line of work. I wonder if you could add a simple/small constraint on weights that avoids the issue of multimodal neurons -- it seems doable.
I just wanted to say I don't think you did anything ethically wrong here. There was a great podcast with Diana Fleischman I listened to a while ago where she talked about how we manipulate other people all the time, especially in romantic relationships. I'm uncomfortable saying that any manipulation whatsoever is ethically wrong because I think that's demanding too much cognitive overhead for human relationships (and also makes it hard to raise kids) - I think you have to figure out a more nuanced view. For instance, having a high-level rule on what forms of manipulation are allowed that balances protecting individuals' agency and autonomy while allowing for small forms of manipulation, and then judging the small manipulations that are allowed by the rule individually on their consequences.
You sound very confident your device would have worked really well. I'm curious, how much testing did you do?
I have a Garmin Vivosmart 3 and it tries to detect when I'm either running, biking, or going up stairs. It works amazingly well considering the tiny amount of hardware and battery power it has, but it also fails sometimes, like randomly thinking I've been running for a while when I've been doing some other high heart rate thing. Maddeningly, I can't figure out how to turn off some of the alerts, like when I've met my "stair goal" for the day.