Posts

Special AI Meetup feat Joscha Bach & Rachel St. Clair - are LLMs enough? 2024-04-05T12:54:38.884Z
AI futurists ponder AI and the future of humanity - should we merge with AI? 2024-03-12T15:45:58.228Z
Book Swap 2023-09-07T12:25:40.434Z
Alex Hoekstra talk on Open Source Vaccines and the Mind First Foundation 2023-05-27T16:06:06.750Z
Board games @ Aeronaut Brewing 2023-04-19T18:05:16.038Z
"The AI Safety Problem" - talk by Richard Ngo at Harvard 2023-04-04T18:26:02.297Z
Boston ACX Spring Schelling Point Meetup 2023-03-28T21:16:27.272Z
Board games @ Aeronaut Brewing 2022-12-29T15:48:57.629Z
Rationality reading group - Free books! 2022-10-27T19:48:23.236Z
Board Games at Aeronaut Brewing 2022-10-20T23:48:01.044Z
Board gaming @ Aeronaut Brewing 2022-09-26T19:17:41.540Z
Jason Crawford on the Progress Studies movement 2022-09-09T19:59:47.527Z
Boston ACX board games @ Aeronaut brewing 2022-06-01T00:18:26.759Z
Boston ACX Board Gaming 2022-04-16T00:27:26.608Z
Boston ACX meetup with special guest Mingyuan + Harvard Art Museums at Night 2022-04-16T00:25:16.262Z
Boston ACX Spring Schelling Point Meetup 2022-04-05T13:58:06.405Z
Walk on esplanade followed by food/drinks in Kendall Square 2022-02-09T13:12:49.414Z
How I'm thinking about GPT-N 2022-01-17T17:11:49.447Z
Hike around Middlesex Fells Reservation 2022-01-03T20:14:03.850Z
Responding to common objections to the FDA unbanning Paxlovid *right now* 2021-11-26T17:18:51.499Z
Boston ACX meetup - lightning talks 2021-11-24T20:16:49.619Z
Does anyone know what Marvin Minsky is talking about here? 2021-11-19T00:56:25.153Z
Possible research directions to improve the mechanistic explanation of neural networks 2021-11-09T02:36:30.830Z
Boston Astral Codex Ten Nov 20th "Friendsgiving" gathering 2021-11-07T14:13:17.418Z
October Boston Astral Codex Ten Meetup 2021-10-15T15:49:40.685Z
Boston/Cambridge, MA – ACX Meetups Everywhere 2021 2021-08-23T08:53:10.366Z
*Date changed* Meeting at JFK park near Harvard 2021-08-16T19:41:30.350Z
Welcome to Boston Astral Codex Ten 2021-08-16T19:22:37.096Z

Comments

Comment by delton137 on AI futurists ponder AI and the future of humanity - should we merge with AI? · 2024-04-03T02:25:17.151Z · LW · GW

hah... actually not a bad idea... too late now. BTW the recording will be available eventually if you're interested.

Comment by delton137 on AI futurists ponder AI and the future of humanity - should we merge with AI? · 2024-04-03T01:55:19.222Z · LW · GW

Hi, organizer here. I just saw your message now, right after the event. There were a couple of people from Microsoft there, but I'm not sure if they were interested in alignment research. This was mostly a general audience, mostly coming through the website AIcamp.ai. We also had some people from the local ACX meetup and transhumanist meetup. PS: I sent you an invitation to connect on LinkedIn, let's stay in touch (I'm https://www.linkedin.com/in/danielelton/). 

Comment by delton137 on Book Swap · 2023-09-08T22:15:29.209Z · LW · GW

Unfortunately they have a policy of checking IDs at the door and only allowing those over 21 in. I'm going to update the post now to make this clear. Even when the outdoor patio is open it's still 21+. 

The way I would describe it now is that there's a large bar in the main room, and then there's a side room (also quite large) with a place that serves Venezuelan food (very good) and Somerville Chocolate (they make and sell chocolate there).  

The age restriction has never been a problem in the past, although I do vaguely recall someone mentioning it once. I'm going to try to have future meetups I run at a public library (probably Cambridge Public Library); it's just tricky getting the room reservations sometimes. We have also been thinking of trying out the food court in the CambridgeSide mall, although the tables there are rather small and I don't think they can be moved and joined together (from what I remember). 

Comment by delton137 on Alex Hoekstra talk on Open Source Vaccines and the Mind First Foundation · 2023-07-11T16:29:03.700Z · LW · GW

Sorry for the late reply. In the future we will try to have a Zoom option for big events like this. 

We did manage to record it, but the audio isn't great (and we didn't capture the Q&A).

Comment by delton137 on Self-experiment Protocol: Effect of Chocolate on Sleep · 2023-07-07T18:24:22.766Z · LW · GW

This is pretty interesting... any outcome you can share? (I'll bug you about this next time I see you in person so you can just tell me then rather than responding, if you'd like.)

Good idea to just use the time you fall asleep rather than the sleep-stage tracking, which isn't very accurate. I think the most interesting metric is just boring old total sleep time (unfortunately, sleep trackers in my experience are really bad at actually capturing sleep quality... but I suppose if there's a sleep quality score you have found useful, that might be interesting to look at also). Something else I've noticed is that by looking at heart rate you can often get a more accurate idea of when you fell asleep and woke up. 

Comment by delton137 on The “mind-body vicious cycle” model of RSI & back pain · 2022-12-13T00:57:30.504Z · LW · GW

I would modify the theory slightly by noting that the brain may become hypersensitive to sensations arising from the area that was originally damaged, even after it has healed. Sensations that are otherwise normal can then trigger pain. I went to the website about pain reprocessing therapy and stumbled upon an interview with Alan Gordon where he talked about this.  I suspect that high level beliefs about tissue damage etc play a role here also in causing the brain to become hyper focused on sensations coming from a particular region and to interpret them as painful. 

Something else that comes to mind here is the rubber hand illusion. Watch this video - and look at the flinches! Interesting, eh? 

edit: (ok, the rubber hand illusion isn't clearly related, but it's interesting!) 

Comment by delton137 on The “mind-body vicious cycle” model of RSI & back pain · 2022-12-13T00:41:20.128Z · LW · GW

That's really cool, thanks for sharing!

Comment by delton137 on Solstice 2022 Roundup · 2022-12-03T20:36:35.132Z · LW · GW

Since nobody else posted these: 

Bay Area is Sat Dec 17th (Eventbrite) (Facebook)

South Florida (about an hour north of Miami) is Sat Dec 17th (Eventbrite) (Facebook)

Comment by delton137 on AGI Impossible due to Energy Constrains · 2022-12-01T00:09:13.610Z · LW · GW

On current hardware, sure.

It does look like scaling will hit a wall soon if hardware doesn't improve, see this paper: https://arxiv.org/abs/2007.05558

But Gwern has responded to this paper pointing out several flaws... (having trouble finding his response right now..ugh)

However, we have lots of reasons to think Moore's law will continue ... in particular future AI will be on custom ASICs / TPUs / neuromorphic chips, which is a very different story. I wrote about this long ago, in 2015. Such chips, especially asynchronous and analog ones, can be vastly more energy efficient.

Comment by delton137 on Human-level Diplomacy was my fire alarm · 2022-11-28T13:30:03.520Z · LW · GW

I disagree, in fact I actually think you can argue this development points the opposite direction, when you look at what they had to do to achieve it and the architecture they use. 

I suggest you read Ernest Davis' overview of Cicero. Cicero is a special-purpose system that took enormous work to produce -- a team of multiple people labored on it for three years. They had to assemble a massive dataset from 125,300 online human games. They also had to get expert annotations on thousands of preliminary outputs. Even that was not enough: they had to generate synthetic datasets as well to fix issues with the system! Even then, the dialogue module required a specialized filter to remove nonsense. This is a break from the scaling idea, which says that to solve new problems you just need to scale existing architectures to more parameters (and train on a large enough dataset).   

Additionally, they argue that this system appears very unlikely to generalize to other problems, or even to slight modifications of the game of Diplomacy. It's not even clear how well it would generalize to non-blitz games. If the rules were modified slightly, the entire system would likely have to be retrained. 

I also want to point out that scientific research is not as easy as you make it sound. Professors spend the bulk of their time writing proposals, so perhaps AI could help there by summarizing existing literature. Note, though, that a typical paper, even a low-value one, generally takes a graduate student with specialized training about a year to complete, assuming the experimental apparatus and other necessary infrastructure are in place. Not all science is data-driven, either; science can also be observation-driven or theory-driven.  

Comment by delton137 on Why I'm Working On Model Agnostic Interpretability · 2022-11-12T17:51:44.551Z · LW · GW

I've looked into these methods a lot, in 2020 (I'm not so up to date on the latest literature). I wrote a review in my 2020 paper, "Self-explaining AI as an alternative to interpretable AI". 

There are a lot of issues with saliency mapping techniques, as you are aware (I saw you link to the "sanity checks" paper below). Funnily enough, though, the super simple technique of occlusion mapping does seem to work very well! It's kinda hilarious actually that there are so many complicated mathematical techniques for saliency mapping, yet I have seen no good arguments as to why they are better than plain occlusion mapping. I think this is a symptom of people optimizing for paper publishing and trying to impress reviewers with novelty and math rather than actually building stuff that is useful. 
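For anyone unfamiliar, occlusion mapping really is as simple as it sounds -- slide a patch over the image and see how much the class score drops. A minimal sketch (function and parameter names are mine, not from any particular library):

```python
import numpy as np

def occlusion_map(model, image, patch=8, stride=8, target_class=0):
    """Slide a gray patch over the image and record how much the
    target-class score drops; big drops mark salient regions.
    `model` is assumed to map a (1, H, W, C) array to class scores."""
    h, w = image.shape[:2]
    baseline = model(image[None])[0, target_class]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.5  # gray-out the patch
            heat[i, j] = baseline - model(occluded[None])[0, target_class]
    return heat
```

The whole method is a dozen lines and makes no assumptions about the model's internals, which is exactly the appeal.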

You may find this interesting: "Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization". What they show is that a very simple model-agnostic technique (finding the images that maximize an output) allows people to make better predictions about how a CNN will behave than Olah's activation-maximization method, which produces images that can be hard to understand. This is exactly the sort of empirical testing I suggested in my LessWrong post from November last year. 
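The baseline method there is almost trivially simple: rank the dataset by how strongly each image activates the unit of interest. Roughly (my sketch, not their code):

```python
import numpy as np

def top_exemplars(model, dataset, unit, k=9):
    """Return the k dataset images that most strongly activate one unit --
    the simple model-agnostic baseline. `model` is assumed to map a single
    image to a vector of unit activations."""
    acts = np.array([model(img)[unit] for img in dataset])
    order = np.argsort(acts)[::-1][:k]  # indices of strongest activations
    return [dataset[i] for i in order]
```

No optimization in pixel space, no regularizers -- just a sort over natural images.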

The comparison isn't super fair because Olah's techniques were designed for detailed mechanistic understanding, not allowing users to quickly be able to predict CNN behaviour.  But it does show that simple techniques can have utility for helping users understand at a high level how an AI works.

Comment by delton137 on Simulators · 2022-09-08T16:00:01.611Z · LW · GW

There's no doubt a world simulator of some sort is probably going to be an important component in any AGI, at the very least for planning - Yann LeCun has talked about this a lot. There's also this work where they show a VAE type thing can be configured to run internal simulations of the environment it was trained on.

In brief, a few issues I see here:

  • You haven't actually provided any evidence that GPT does simulation other than "Just saying “this AI is a simulator” naturalizes many of the counterintuitive properties of GPT which don’t usually become apparent to people until they’ve had a lot of hands-on experience with generating text." What counterintuitive properties, exactly? The examples I've seen show GPT-3 is not simulating the environment being described in the text. I've seen a lot of impressive examples too, but I find it hard to draw conclusions about how the model works just by reading lots and lots of outputs... I wonder what experiments could be done to test your idea that it's running a simulation. 
  • Even for processes that are very simple to simulate, such as addition or symbol substitution, GPT has, in my view, trouble learning them, even though it does grok those things eventually. For things like multiplication, its accuracy depends on how often the numbers appear in the training data (https://arxiv.org/abs/2202.07206), which is a bit telling, I think.
  • Simulating the laws of physics is really hard... trust me on this (I did a Ph.D. in molecular dynamics simulation). If it's doing any simulation at all, it's got to be some high-level heuristic-type stuff. If it's really good, it might be capable of simulating basic geometric constraints (although IIRC GPT is not superb at spatial reasoning). Even humans are really bad at simulating physics accurately (researchers found that most people do really poorly on tests of basic physics reasoning, like basic kinematics: will this ball curve left, curve right, or go straight, etc.). I imagine gradient descent is going to be much more likely to settle on shortcut rules and heuristics than to implement a complex simulation. 

Comment by delton137 on Stuff I might do if I had covid · 2022-07-17T14:33:45.432Z · LW · GW

Piperine (black pepper extract) can help make quercetin more bioavailable. They are co-administered in many studies on the neuroprotective effects of quercetin: https://scholar.google.com/scholar?hl=en&as_sdt=0,22&q=piperine+quercetin

Comment by delton137 on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-11T23:27:35.781Z · LW · GW

I find slower take-off scenarios more plausible. I like the general thrust of Christiano's "What failure looks like". I wonder if anyone has written up a more narrative / concrete account of that sort of scenario.

Comment by delton137 on Towards a Formalisation of Returns on Cognitive Reinvestment (Part 1) · 2022-06-05T12:56:06.039Z · LW · GW

The thing you are trying to study ("returns on cognitive reinvestment") is probably one of the hardest things in the world to understand scientifically. It requires understanding both the capabilities of specific self-modifying agents and the complexity of the world. It also depends on what problem you are focusing on -- the shape of the curve may be very different for chess vs. something like curing disease. Why? Because chess I can simulate on a computer, so throwing more compute at it yields some returns. I can't simulate human biology in a computer -- we have to actually have people in labs doing complicated experiments just to understand one tiny bit of human biology... so having more compute / cognitive power in any given agent isn't necessarily going to speed things along... you also need a way of manipulating things in labs (either humans or robots doing lots of experiments). Maybe in the future an AI could read massive numbers of scientific papers and synthesize them into new insights, but precisely what sort of "cognitive engine" is required to do that is also very controversial (could GPT-N do it?).

Are you familiar with the debate about Bloom et al and whether ideas are getting harder to find? (https://guzey.com/economics/bloom/ , https://www.cold-takes.com/why-it-matters-if-ideas-get-harder-to-find/). That's relevant to predicting take-off.

The other post I always point people to is this one by Chollet.

I don't necessarily agree with it but I found it stimulating and helpful for understanding some of the complexities here.

So basically, this is a really complex thing... throwing some definitions and math at it isn't going to be very useful, I'm sorry to say. Throwing math and definitions at stuff is easy. Modeling data by fitting functions is easy. Neither is very useful for actually predicting in novel situations (i.e. extrapolation/generalization), which is what we need to predict AI take-off dynamics. Actually understanding things mechanistically and coming up with explanatory theories that can withstand criticism and repeated experimental tests is very hard. That's why people typically break hard questions/problems down into easier sub-questions/problems.

Comment by delton137 on The Problem With The Current State of AGI Definitions · 2022-06-02T13:08:02.938Z · LW · GW

How familiar are you with Chollet's paper "On the Measure of Intelligence"? He disagrees a bit with the idea of "AGI", but if you operationalize it as "skill-acquisition efficiency at the level of a human", then he has a test called ARC which purports to measure when AI has achieved human-like generality.

This seems like a good direction, in my opinion. There is an ARC challenge on Kaggle, and so far AI is far below the human level. On the other hand, "being good at a lot of different things", i.e. task performance across one or many tasks, is obviously very important to understand, and Chollet's definition is independent of that.

Comment by delton137 on Boston ACX Board Gaming · 2022-04-17T18:27:25.257Z · LW · GW

Thanks, it's been fixed!!

Comment by delton137 on Projecting compute trends in Machine Learning · 2022-04-01T17:40:02.910Z · LW · GW

Interesting, thanks. 10x reduction in cost every 4 years is roughly twice what I would have expected. But it sounds quite plausible especially considering AI accelerators and ASICs.

Comment by delton137 on Projecting compute trends in Machine Learning · 2022-03-09T00:53:15.215Z · LW · GW

Thanks for sharing! That's a pretty sophisticated modeling function but it makes sense. I personally think Moore's law (the FLOPS/$ version) will continue, but I know there's a lot of skepticism about that.

Could you make another graph like Fig 4 but showing projected cost, using Moore's law to estimate cost? The cost is going to be a lot, right?

Comment by delton137 on ML Systems Will Have Weird Failure Modes · 2022-02-18T16:25:34.185Z · LW · GW

Networks with loops are much harder to train... that was one of the motivations for moving to transformers instead of RNNs. But yeah, sure, I agree. My objection is more that posts like this are so high-level that I have trouble following the argument, if that makes sense. The argument seems roughly plausible, but not making contact with any real object-level stuff makes it a lot weaker, at least to me. It seems to rely on the emergence of self-awareness / discovery of malevolence/deception during SGD being likely, which is unjustified in my view. I'm not saying the argument is wrong, more that I personally don't find it very convincing.

Comment by delton137 on February 2022 Open Thread · 2022-02-18T16:13:14.249Z · LW · GW

Has GPT-3 / large transformers actually led to anything with economic value? Not from what I can tell, although anecdotal reports on Twitter are that many SWEs are finding GitHub Copilot extremely useful (it's still in private beta, though). I think transformers are going to start providing actual value soon, but the fact that they haven't so far, despite almost two years of breathless hype, is interesting to contemplate. I've learned to ignore hype, demos, cool cherry-picked sample outputs, and benchmark chasing, and instead look at what is being deployed "in the real world" and bringing value to people. So many systems that looked amazing in academic papers have flopped when deployed, even from top firms -- for instance, Microsoft's Tay and Google Health's system for detecting diabetic retinopathy. Another example is Google's Duplex. And for how long have we heard about burger-flipping robots taking people's jobs?

There are reasons to be skeptical about a scaled-up GPT leading to AGI. I touched on some of those points here. There's also an argument that hardware costs are going to balloon so quickly as to make the entire project economically unfeasible, but I'm pretty skeptical about that.

I'm more worried about someone reverse engineering the wiring of cortical columns in the neocortex in the next few years and then replicating it in silicon.

Long story short, is existentially dangerous AI imminent? Not as far as we can see, given what we know right now (we can't see that far into the future, since it depends on discoveries and scientific knowledge we don't have). Could that change quickly? Yes. There is Knightian uncertainty here, I think (to use a concept that LessWrongers generally hate, lol).

Comment by delton137 on Interest in a digital LW "book" club? · 2022-02-09T20:41:20.636Z · LW · GW

I'm interested!

Comment by delton137 on Six Specializations Makes You World-Class · 2022-02-03T13:30:53.602Z · LW · GW

This is a shot in the dark, but I recall there was a blog post that made basically the same point visually, I believe using Gaussian distributions. I think the number they argued you should aim for was 3-4 instead of 6. Anyone know what I'm talking about?

Comment by delton137 on How I'm thinking about GPT-N · 2022-02-01T01:19:28.283Z · LW · GW

Hi, I just wanted to say thanks for the comment / feedback. Yeah, I probably should have separated out the analysis of Grokking from the analysis of emergent behaviour during scaling. They are potentially related - at least for many tasks it seems Grokking becomes more likely as the model gets bigger. I'm guilty of actually conflating the two phenomena in some of my thinking, admittedly.

Your point about "fragile metrics" being more likely to show Grokking is great. I had a similar thought, too.

Comment by delton137 on ML Systems Will Have Weird Failure Modes · 2022-01-30T19:56:01.387Z · LW · GW

I think a bit too much mindshare is being spent on these sci-fi scenario discussions, although they are fun.

Honestly, I have trouble following these arguments about deception evolving in RL. In particular, I can't quite wrap my head around how the agent ends up optimizing for something else (not a proxy objective, but a possibly totally orthogonal objective like "please my human masters so I can later do X"). In any case, it seems self-awareness is required for the type of deception you're envisioning. Which brings up an interesting question: can a purely feed-forward network develop self-awareness during training? I don't know about you, but I have trouble picturing it happening unless there is some sort of loop involved.

Comment by delton137 on "Acquisition of Chess Knowledge in AlphaZero": probing AZ over time · 2022-01-29T23:16:34.763Z · LW · GW

Zac says "Yes, over the course of training AlphaZero learns many concepts (and develops behaviours) which have clear correspondence with human concepts."

What's the evidence for this? If AlphaZero worked by learning concepts in a sort of step-wise manner, then we should expect jumps in performance on certain types of puzzles, right? I would guess that a beginning human would exhibit jumps from learning concepts like "control the center" or "castle early, not late"... for instance, the principle "control the center", once followed, has implications for how to place knights, etc., which greatly affect win probability. Is the claim that they found such jumps? (Eyeing the results, nothing really stands out in the plots.)

Or is the claim that the NMF somehow proves that AlphaZero works off concepts? That seems suspicious to me, as NMF is looking at the weight matrices at a very crude level.

I ask this partially because I went to a meetup talk (not recorded, sadly) where a researcher from MIT showed a Go problem that AlphaGo can't solve but which even beginner Go players can, which shows that AlphaGo actually doesn't understand things the same way humans do. Hopefully they will publish their work soon so I can show you.

Comment by delton137 on How does bee learning compare with machine learning? · 2022-01-27T00:50:15.261Z · LW · GW

Huh, that's pretty cool, thanks for sharing.

Comment by delton137 on How does bee learning compare with machine learning? · 2022-01-27T00:48:35.019Z · LW · GW

This is pretty interesting. There is a lot to quibble about here, but overall I think the information about bees here is quite valuable for people thinking about where AI is at right now and trying to extrapolate forward.

A different, perhaps more illuminating approach would be to ask how much of a bee's behavior we could plausibly emulate today by glomming together a bunch of different ML algorithms into some sort of virtual bee cognitive architecture -- if, say, we wanted to make a drone that behaved like a bee à la Black Mirror. Obviously that's a much more complicated question, though.

I feel compelled to mention my friend Logan Thrasher Collins' paper, The case for emulating insect brains using anatomical "wiring diagrams" equipped with biophysical models of neuronal activity. He thinks we may be able to emulate the fruit fly brain in about 20 years at near-full accuracy, and this estimate seems quite plausible.

There were a few sections I skipped, if I have time I'll come back and do a more thorough reading and give some more comments.

The compute comparison seems pretty sketchy to me. A bee's visual system can classify many different things, and the part responsible for the classification task in the few-shot learning study is probably just a small subset. (I think Rohin made a similar point below.) Deep learning models can be pruned somewhat without losing much accuracy, but generally all the parameters are used. Another wrinkle is that the rate of firing activity in the visual system depends on the input, although there is a baseline rate too. The point I'm getting at is that it's sort of an apples-to-oranges comparison. If the bee only had to do the one task in the study to survive, evolution probably would have found a much more economical way of doing it, with far fewer neurons.

My other big quibble is that I would have made it transparent that Cotra's biological anchors method for forecasting TAI assumes we will know the right algorithm before the hardware becomes available. That is a big, questionable assumption and should be stated clearly. Arguably, algorithmic advancement in AI at the level of core algorithms (not ML-ops / dev-ops / GPU coding) is actually quite slow. In any case, it just seems very hard to predict algorithmic advancement. Plausibly a team at DeepMind might discover the key cortical learning algorithm underlying human intelligence tomorrow, but there are other reasons to think it could take decades.

Comment by delton137 on Aligned AI Needs Slack · 2022-01-26T14:52:15.515Z · LW · GW

Another point is that when you optimize relentlessly for one thing, you might have trouble exploring the space adequately (you get stuck at local maxima). That's why RL agents/algorithms often take random actions while training (this is called "exploration" as opposed to "exploitation"). Maybe random actions can be thought of as a form of slack? Micro-slacks?

Look at Kenneth Stanley's arguments about why objective functions are bad (video talk on it here). Basically he's saying we need a lot more random exploration. Humans are similar - we have an open-ended drive to explore in addition to drives to optimize a utility function. Of course maybe you can argue the open-ended drive to explore is ultimately in the service of utility optimization, but you can argue the same about slack, too.

Comment by delton137 on Search Is All You Need · 2022-01-26T14:43:28.658Z · LW · GW

Bostrom talks about this in his book "Superintelligence" when he discusses the dangers of Oracle AI. It's a valid concern, we're just a long way from that with GPT-like models, I think.

I used to think a system trained only on text could never learn vision. So if it escaped onto the internet, it would be pretty limited in how it could interface with the outside world, since it couldn't interpret streams from cameras. But then I realized that its training data probably includes text on how to program a CNN. So in theory a system trained only on text could build a CNN algorithm inside itself and use that to learn how to interpret vision streams. Theoretically. A lot of stuff is theoretically possible with future AI, but how easy it is to realize in practice is a different story.

Comment by delton137 on ML Systems Will Have Weird Failure Modes · 2022-01-26T14:15:51.154Z · LW · GW

I just did some tests... it works if you go to settings and click "Activate Markdown Editor". Then convert to Markdown and re-save (note, you may want to back up before this, there's a chance footnotes and stuff could get messed up). 

$stuff$ for inline math and double dollar signs for display math work when in Markdown mode. When using the normal editor, inline math doesn't work, but $$ works (though it puts the equation on a new line). 

Comment by delton137 on Young Scientists · 2022-01-26T13:12:00.773Z · LW · GW

I have mixed feelings on this. I have mentored ~5 undergraduates in the past 4 years and observed many others, and their research productivity varies enormously. How much of that is due to IQ vs. other factors, I really have no idea. My personal feeling is that most of the variability was due to life factors, like the social environment (family/friends) they were ensconced in and how much time it permitted them to focus on research. 

My impression from TAing physics for life scientists for two years was that a large number felt they were intrinsically bad at math. That's really bad! We need to be spreading growth-mindset ideas, not the idea that you're limited by your IQ -- or at the very least, the idea that math doesn't have to come naturally or be easy for you to be a scientist or engineer. I struggled with math all the way through undergrad and my PhD. If the drive I developed as a child to become a scientist hadn't been so strong, I'm sure I would have dropped out. 

My feeling is we are more bottlenecked on great engineers than scientists. [Also, the linear model (science -> invention -> engineering/innovation) is wrong!] Also, we should bring back inventors -- that should be a thing again. 

I think it would be awesome if someday 50% of people were engineers and inventors. People with middling IQ can still contribute a lot! Maybe not to theoretical physics, but to many other areas! We hear a lot of gushing about scientific geniuses, especially on this site, and I think we discount the importance of everyday engineers and also people like lab techs and support staff, who are increasingly important as science becomes more multidisciplinary and collaborative. 

Comment by delton137 on Supervised learning and self-modeling: What's "superhuman?" · 2022-01-25T14:59:00.141Z · LW · GW

I liked how in your AISS talk you used history as a frame for thinking about this, because it highlights the difficulty of achieving superhuman ethics. Human ethics (for instance, as encoded in laws/rights/norms) is improving over time, but it's been a very slow process that involves a lot of stumbling around and running experiments to figure out what works and what doesn't. "The Moral Arc" by Michael Shermer is about the causes of moral progress... one of them is allowing free speech and the free flow of ideas. Basically, it seems moral progress requires a culture that supports conjecture and criticism of many ideas - that way you are more likely to find the good ones. How you get an AI to generate new ideas is anyone's guess - "creativity" in AI is pretty shallow right now; I am not aware of any AI having invented anything useful. (There have been news reports about AI systems that have found new drugs, but the ones I've seen were later called out as just slight modifications of existing drugs that were in their training data, and thus not super creative.)

To be honest I only read sections I-III of this post. 

I have a comment on this: 

An even more speculative thing to try would be auto-supervision. A language model can not only be asked to generate text about ethical dilemmas, it can also be asked to generate text about how good different responses to ethical dilemmas are, and the valence of the response can be used as a reinforcement signal on the object-level decision.

This is a nice idea. It's easy to implement and my guess is it should improve consistency. I actually saw something similar done in computer vision: someone took the labels generated by a CNN on a previously unlabeled dataset and used them to fine-tune the CNN. Surprisingly, the result was a slightly better model. I think what that process does is encourage consistency across a larger swath of data. I'm having trouble finding the paper right now, though, and I have no idea if the result replicated. If you'd like, I can try to find it - I think it was in the medical imaging domain, where data with ground-truth labels is scarce, so being able to train on autogenerated ("weak") labels is super useful. 
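If it helps, the general trick (usually called self-training or pseudo-labeling) can be sketched in a few lines -- the function and threshold here are my own illustration, not from the paper I'm describing:

```python
import numpy as np

def self_train(model, labeled, unlabeled, confidence=0.9):
    """Fit on labeled data, then pseudo-label the unlabeled pool with the
    model's own confident predictions and fit again -- the "weak label"
    trick. `model` is any classifier with fit / predict_proba methods."""
    X, y = labeled
    model.fit(X, y)
    proba = model.predict_proba(unlabeled)
    keep = proba.max(axis=1) >= confidence  # only confident pseudo-labels
    pseudo_y = proba.argmax(axis=1)[keep]
    X_aug = np.concatenate([X, unlabeled[keep]])
    y_aug = np.concatenate([y, pseudo_y])
    model.fit(X_aug, y_aug)  # second pass on real + pseudo labels
    return model
```

The confidence threshold is doing the important work: train on everything the model guesses and you just amplify its mistakes.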

Comment by delton137 on Is AI Alignment a pseudoscience? · 2022-01-24T17:07:24.167Z · LW · GW

It's a mixed bag. A lot of near-term work is scientific, in that theories are proposed and experiments are run to test them. But from what I can tell that work is also incredibly myopic and specific to the details of present-day algorithms, and whether any of it will generalize to systems further down the road is exceedingly unclear. 

The early writings of Bostrom and Yudkowsky I would classify as a mix of scientifically informed futurology and philosophy. As with science fiction, they are laying out what might happen. There is no science of psychohistory, and while there are better and worse ways of forecasting the future (see "Superforecasting"), forecasting how future technology will play out is especially hard because future technology depends on knowledge we by definition don't have right now. Still, the work has value even if it is not scientific, by alerting us to what might happen. It is scientifically informed because at the very least the futures they describe don't violate any laws of physics. That sort of futurology work I think is very valuable because it explores the landscape of possible futures so we can identify the futures we don't want and take steps to avoid them, even if the probability of any given scenario is not clear. 

A lot of the other work is pre-paradigmatic, as others have mentioned, but that doesn't make it pseudoscience. Falsifiability is the key to demarcation. The work that borders on pseudoscience revolves heavily around the construction of what I call "free floating" systems. These are theoretical systems that are not tied into existing scientific theory (examples: laws of physics, theory of evolution, theories of cognition, etc.) and also not grounded in enough detail that we can test right now whether the ideas/theories are useful or correct. They aren't easily falsifiable. These free-floating sets of ideas tend to be hard for outsiders to learn: they involve a lot of specialized jargon, and sorting wheat from chaff is hard because the authors don't subject their work to the rigors of peer review and publication in conferences/journals, which provide valuable signals to outsiders as to what is good or bad. (Instead we end up with huge lists of Alignment Forum posts and other blog posts and PDFs with no easy way of figuring out what is worth reading.) Some of this type of work blends into abstract mathematics. Safety frameworks like debate, iterated distillation and amplification, and a lot of the MIRI work on self-modifying agents seem pretty free-floating to me (some of these ideas may be testable in some absurdly simple toy environment today, but what those toy models tell us about more general scenarios is hard to say without a more general theory). A lot of the futurology stuff is also free floating (a hallmark of free-floating stuff is zany large concept maps like here). These free-floating things are not worthless, but they also aren't scientific.  

Finally, there's much that is philosophy. First, of course, there's debates about ethics. Secondly there's debates about how to define basic terms that are heavily used like intelligence, general vs narrow intelligence, information, explanation, knowledge, and understanding. 

Comment by delton137 on How I'm thinking about GPT-N · 2022-01-18T02:16:24.593Z · LW · GW

The paper you cited does not show this.

Yeah, you're right I was being sloppy. I just crossed it out. 

Comment by delton137 on How I'm thinking about GPT-N · 2022-01-18T01:42:31.233Z · LW · GW

oo ok, thanks, I'll take a look. The point about generative models being better is something I've been wanting to learn about, in particular. 

Comment by delton137 on How I'm thinking about GPT-N · 2022-01-18T00:08:22.067Z · LW · GW

SGD is a form of efficient approximate Bayesian updating.

Yeah I saw you were arguing that in one of your posts. I'll take a closer look. I honestly have not heard of this before. 

Regarding my statement - I agree, looking back at it, that it is horribly sloppy and sounds absurd. When I was writing I was just thinking about how all L1 and L2 regularization do is bias toward smaller weights - the models still take up the same amount of space on disk and require the same amount of compute to run in terms of FLOPs. But yes, you're right that they make the models easier to approximate. 

Comment by delton137 on How I'm thinking about GPT-N · 2022-01-17T23:27:06.967Z · LW · GW

By the way, if you look at Filan et al.'s paper "Clusterability in Neural Networks" there is a lot of variance in their results but generally speaking they find that L1 regularization leads to slightly more clusterability than L2 or dropout.

Comment by delton137 on How I'm thinking about GPT-N · 2022-01-17T23:19:32.085Z · LW · GW

The idea that using dropout makes models simpler is not intuitive to me, because according to Hinton dropout essentially does the same thing as ensembling. If what you end up with is something equivalent to an ensemble of smaller networks, then it's not clear to me that would be easier to prune.

One of the papers you linked to appears to study dropout in the context of Bayesian modeling, and they argue it encourages sparsity. I'm willing to buy that it does in fact reduce complexity/compressibility, but I'm also not sure any of this is 100% clear cut.
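For what it's worth, the ensemble view is easy to see for a single linear unit: averaging predictions over many random dropout masks converges to the prediction you get by scaling the weights by the keep probability, which is the standard test-time trick (the equivalence is exact only in the linear case). A toy numpy sketch, with values chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
p_keep = 0.5
w = np.ones(4)          # toy weight vector
x = np.ones(4)          # toy input

# The usual test-time approximation: scale weights by the keep probability.
scaled_pred = p_keep * (w @ x)

# The explicit ensemble: average over many random sub-networks (dropout masks).
masks = rng.random((10000, 4)) < p_keep
ensemble_pred = ((masks * w) @ x).mean()
```

With nonlinearities in between, the weight-scaled network is only an approximation of the mask-averaged ensemble, which is part of why I think the compressibility question isn't clear cut.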

Comment by delton137 on How I'm thinking about GPT-N · 2022-01-17T23:10:28.072Z · LW · GW

(responding to Jacob specifically here) A lot of things that were thought of as "obvious" were later found out to be false in the context of deep learning - for instance the bias-variance trade-off.

I think what you're saying makes sense at a high/rough level, but I'm also worried you are not being rigorous enough. It is true and well known that L2 regularization can be derived from Bayesian neural nets with a Gaussian prior on the weights. However, neural nets in deep learning are trained via SGD, not with Bayesian updating - and it doesn't seem modern CNNs actually approximate their Bayesian cousins that well; otherwise, I would think, they would be better calibrated.

If we were going to really look at this rigorously we'd have to define what we mean by compressibility, too. One way might be via some type of lossy compression using model pruning or some form of distillation. Have there been studies showing that models trained with dropout can be pruned down more or distilled more easily?

Comment by delton137 on How I'm thinking about GPT-N · 2022-01-17T22:51:18.915Z · LW · GW

Hey, OK, fixed. Sorry there is no link to the comment -- I had a link in an earlier draft but then it got lost. It was a comment somewhere on LessWrong and now I can't find it -_-.

That's interesting it motivated you to join Anthropic - you are definitely not alone in that. My understanding is Anthropic was founded by a bunch of people who were all worried about the possible implications of the scaling laws.

Comment by delton137 on How I'm thinking about GPT-N · 2022-01-17T20:10:10.081Z · LW · GW

To my knowledge the most used regularization method in deep learning, dropout, doesn't make models simpler in the sense of being more compressible.

A simple L1 regularization would make models more compressible in so far as it suppresses weights towards zero so they can just be thrown out completely without affecting model performance much. I'm not sure about L2 regularization making things more compressible - does it lead to flatter minima for instance? (GPT-3 uses L2 regularization, which they call "weight decay").
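As a concrete illustration of the L1 point, here's a minimal numpy sketch of L1-regularized least squares (the lasso) solved by proximal gradient descent; the data and hyperparameters are arbitrary toy choices, but they show how the soft-thresholding step snaps uninformative weights to exactly zero, so "pruning" them costs nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]          # only 3 informative features
y = X @ w_true + 0.01 * rng.normal(size=n)

# Proximal gradient descent: a plain gradient step on the squared error,
# then the soft-thresholding operator induced by the L1 penalty.
lam, lr = 0.1, 0.01
w = np.zeros(d)
for _ in range(2000):
    grad = X.T @ (X @ w - y) / n
    w = w - lr * grad
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

# Pruning here is free: the uninformative weights are already (exactly) zero.
nonzero = np.abs(w) > 1e-8
```

Note this zeroing-out behavior is specific to L1; L2 only shrinks weights smoothly toward zero, which is why I'm less sure it helps with compressibility.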

But yes, you are right that Occam factors are intrinsic to the process of Bayesian model comparison. However, that's in the context of fully probabilistic models, not deterministic models (i.e. Turing machine programs), which is what Solomonoff induction compares. In Solomonoff induction, Occam's razor has to be tacked on top.

I didn't state my issues with Solomonoff induction very well, that is something I hope to summarize in a future post.

Overall I think it's not clear that Solomonoff induction actually works very well once you restrict it to a finite prior. If the true program isn't in the prior, for instance, there is no guarantee of convergence - it may just oscillate around forever (the "grain of truth" problem).

There's other problems too (see a list here, the "Background" part of this post by Vanessa Kosoy, as well as Hutter's own open problems).

One of Kosoy's points, I think, is something like this: if an AIXI-like agent has two models that are very similar, but one has a weird extra "if-then" statement tacked on to help it explain something (like "at night the world stops existing and the laws of physics no longer apply", when in actuality the lights in the room just go off), then it may take an extremely long time for the agent to converge on the correct model, because the difference in complexity between the two models is very small.
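If I'm reading that point correctly, it falls out of the standard universal-mixture formulation (which may differ from Kosoy's exact setup): the prior weights each environment $\nu$ by its complexity, and Bayes preserves the prior ratio between two models that fit the data equally well, so a complexity gap of a few bits gives the simpler model only a tiny head start that the data has to overcome:

```latex
\xi(x) \;=\; \sum_{\nu \in \mathcal{M}} 2^{-K(\nu)}\,\nu(x),
\qquad
\frac{\Pr(\nu_1 \mid x)}{\Pr(\nu_2 \mid x)}
  \;=\; 2^{-\left(K(\nu_1)-K(\nu_2)\right)} \cdot \frac{\nu_1(x)}{\nu_2(x)}.
```

As long as $\nu_1(x) \approx \nu_2(x)$ on everything observed so far (both models predict the lights going off), the posterior ratio stays pinned near the prior ratio, which is close to 1 when the complexity difference is small.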

Comment by delton137 on Using GPT-N to Solve Interpretability of Neural Networks: A Research Agenda · 2022-01-17T14:12:53.429Z · LW · GW

I think this is a nice line of work. I wonder if you could add a simple/small constraint on weights that avoids the issue of multimodal neurons -- it seems doable. 

Comment by delton137 on Training My Friend to Cook · 2022-01-09T15:30:14.413Z · LW · GW

I just wanted to say I don't think you did anything ethically wrong here. There was a great podcast with Diana Fleischman I listened to a while ago where she talked about how we manipulate other people all the time, especially in romantic relationships. I'm uncomfortable saying that any manipulation whatsoever is ethically wrong, because I think that demands too much cognitive overhead for human relationships (and also makes it hard to raise kids) - I think you have to work out a more nuanced view. For instance: a high-level rule on what forms of manipulation are allowed, one that balances protecting individuals' agency and autonomy while allowing for small forms of manipulation, and then judging the small manipulations the rule allows individually on their consequences. 

Comment by delton137 on The Machine that Broke My Heart · 2022-01-08T17:46:53.588Z · LW · GW

You sound very confident your device would have worked really well. I'm curious, how much testing did you do? 

I have a Garmin Vivosmart 3 and it tries to detect when I'm either running, biking, or going up stairs. It works amazingly well considering the tiny amount of hardware and battery power it has, but it also fails sometimes, like randomly thinking I've been running for a while when I've been doing some other high heart rate thing. Maddeningly, I can't figure out how to turn off some of the alerts, like when I've met my "stair goal" for the day. 

Comment by delton137 on Personal Response to Omicron · 2021-12-31T16:05:53.533Z · LW · GW

I think he's conditioning heavily on being fully vaxxed and boosted when making the comparison to the flu, which makes sense to me. I also suspect Long Covid risk is much lower if you're vaxxed & boosted, based on the theory that Long Covid is caused by an inflammatory cascade that won't shut off (there's a lot of debate about what biomarkers to use, but many Long Covid patients have elevated markers of inflammation months later). If your symptoms are mild, you won't have that inflammatory cascade. Here's Zvi on one of the latest Long Covid papers: "To the extent that Long Covid is a non-placebo Actual Thing, this seems to strongly suggest that it will indeed scale with the severity of infection, so vaccinations and booster shots will help a lot..." 

Comment by delton137 on Should we rely on the speed prior for safety? · 2021-12-30T21:04:55.021Z · LW · GW

"I think this is important as the speed prior was considered to be, and still is by many, a very good candidate for a way of not producing deceptive models." I'm curious who has professed a belief in this.  
 

Comment by delton137 on Visible Thoughts Project and Bounty Announcement · 2021-11-30T15:15:04.157Z · LW · GW

I don't have much direct experience with transformers (I was part of some research with BERT once where we found it was really hard to use without adding hard-coded rules on top, but I have no experience with the modern GPT stuff). However, what you are saying makes a lot of sense to me based on my experience with CNNs and the attempts I've seen to explain/justify CNN behaviour with side channels (for instance this medical image classification system that also generates text as a side output). 

See also my comment on Facebook

Comment by delton137 on Visible Thoughts Project and Bounty Announcement · 2021-11-30T15:06:17.461Z · LW · GW

I think what you're saying makes a lot of sense. When assembling a good training data set, it's all about diversity. 

Comment by delton137 on Visible Thoughts Project and Bounty Announcement · 2021-11-30T15:03:20.427Z · LW · GW

Sorry, I missed that somehow. Thanks.