Posts

My AI Predictions 2023 - 2026 2023-10-16T00:50:52.968Z
Minimum Viable Alignment 2022-05-07T13:18:49.850Z
Plausible A.I. Takeoff Scenario Short Story 2020-01-01T04:35:41.273Z

Comments

Comment by HunterJay on My AI Predictions 2023 - 2026 · 2024-01-20T06:44:42.199Z · LW · GW

10x per year for compute seems high to me. Naïvely I would expect the price/performance of compute to double every 1-2 years, as it has for decades, with the overall compute available for training big models being a function of that plus increasing investment in the space, which could look more like one-time jumps. (I.e. a 10x jump in compute in 2024 may happen because of increased investment, but a 100x increase by 2025 seems unlikely.) But I am somewhat uncertain of this.
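To make that concrete, here's a rough back-of-the-envelope sketch (the growth rates and the size of the investment jump are illustrative assumptions, not data):

```python
# Rough back-of-the-envelope comparison (illustrative numbers only).
# Hardware price/performance doubling every ~1.5 years vs. a sustained 10x/year trend.

years = 2  # 2023 -> 2025

hardware_gain = 2 ** (years / 1.5)           # ~2.5x from price/performance alone
investment_jump = 10                         # an assumed one-time ~10x jump in spending
combined = hardware_gain * investment_jump   # ~25x total training compute

ten_x_per_year = 10 ** years                 # 100x if compute really grows 10x/year

print(f"hardware alone: {hardware_gain:.1f}x")
print(f"hardware + one-time investment jump: {combined:.0f}x")
print(f"sustained 10x/year trend: {ten_x_per_year:.0f}x")
```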

For parameters, I definitely think the largest models will keep getting bigger, and I expect compute to be the big driver of that -- but I would also expect improvements like mixture-of-experts models to continue, which effectively allow more parameters with less compute (because not all of the parameters are used at all times). Other techniques, like RLHF, also improve the subjective performance of models without increasing their size (i.e. getting them to do useful things rather than only predict which next word is most likely).
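As a toy illustration of the mixture-of-experts point, here's a minimal top-1 MoE layer in PyTorch (greatly simplified; real MoE layers add load balancing, capacity limits, etc., and the sizes here are just placeholders):

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Minimal top-1 mixture-of-experts layer: many parameters exist,
    but only one expert's parameters are used per token."""
    def __init__(self, d_model=64, n_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                            # x: (tokens, d_model)
        expert_idx = self.router(x).argmax(dim=-1)   # pick one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                out[mask] = expert(x[mask])          # compute only for routed tokens
        return out

x = torch.randn(10, 64)
print(TinyMoE()(x).shape)   # torch.Size([10, 64])
```

The point is just that total parameter count grows with the number of experts, while per-token compute stays roughly that of a single expert.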

I guess my prediction here would simply be that things like this continue, so that with X compute in 2025 you could get a better model than you could with the same compute in 2023. But you could also have 5x to 50x more compute in 2025, so you get the sum of those improvements!

It's obviously far cheaper to play with smaller models, so I expect lots of improvements will initially appear in models that are small for their time.

Just my thoughts!

Comment by HunterJay on My AI Predictions 2023 - 2026 · 2023-10-18T00:36:44.426Z · LW · GW

I wrote this late at night, so to clarify and expand a little bit:

- "Work on more than one time scale" I think is actually an interesting idea to dwell on for a second. Like, when a person is trying to solve a problem, they will often pace back and forth, or talk, etc. They don't have to do everything in one pass, somehow the complex computation which lets them see and move around can work on a very fast time scale, while other problem solving is going on simultaneously, and only starts to effect motor outputs later on. That's interesting. The spinal cord doing processing independent of the brain thing I mentioned is evident in this older series of (rather horrible) experiments with cats: https://www.jstor.org/stable/24945006

- On the 'smaller models with lower latency': we already see models like Mistral-7B outperforming 30B-parameter models because of improvements in data, architecture, and training. I expect this trend to continue. If the largest models are capable of operating a robot out of the box, I think you could take their outputs and use them to train a smaller, more specialised model for the task (or otherwise distill the large model down to a more manageable size). There's a rough sketch of that kind of distillation below, after these bullets.

- On the 'LLMs could do the parts with higher latency': just yesterday I saw somebody do something like this with GPT-4V, where they periodically uploaded a photograph of what was in front of them and got GPT-4V to output instructions on how to find the supermarket (walk further forward, turn right, etc.). It kind of worked, and that's the sort of thing I was picturing here: leaving much more responsive systems to handle the low-latency work, like balance, gripping, etc. The second sketch below gives a rough idea of the loop.
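Roughly the kind of distillation I have in mind, as a minimal PyTorch sketch (the teacher/student networks, sizes, and random inputs are placeholders, not a real robot setup):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder stand-ins: a big "teacher" and a small, lower-latency "student".
teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 32))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0

for step in range(100):
    x = torch.randn(16, 128)                 # stand-in for task inputs
    with torch.no_grad():
        teacher_logits = teacher(x)          # large model's outputs become the targets
    student_logits = student(x)
    # Match the student's output distribution to the teacher's (soft targets).
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```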
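And here's roughly the GPT-4V-style loop described above, sketched with the OpenAI Python client (the model name, prompt, polling interval, and the frame-capture helper are all illustrative assumptions):

```python
import base64
import time
from openai import OpenAI

client = OpenAI()

def capture_frame_jpeg() -> bytes:
    # Stand-in for a camera capture; here we just read a saved frame from disk.
    with open("frame.jpg", "rb") as f:
        return f.read()

while True:
    image_b64 = base64.b64encode(capture_frame_jpeg()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Here is what is in front of me. Give one short "
                         "instruction to help me reach the supermarket."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)  # e.g. "Walk forward, then turn right."
    time.sleep(5)  # slow, high-latency planning loop; low-level control runs elsewhere
```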
 

Comment by HunterJay on My AI Predictions 2023 - 2026 · 2023-10-18T00:27:21.494Z · LW · GW

I'm somewhat skeptical that running out of text data will meaningfully slow progress. Today's models are so sample-inefficient compared with human brains that I suspect there are significant jumps possible there.

Also, as you say:
- Synthetic text data might well be possible (especially for domains where you can test the quality of the produced text externally, e.g. programming).
- Reinforcement-learning-style virtual environments can also generate data (and not necessarily only physics-based environments either -- it could be more like playing games or using a computer). There's a minimal sketch of this kind of data generation below, after this list.
- And multimodal inputs give us a lot more data too, and I think we've only really scratched the surface of multimodal transformers today.
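As a minimal illustration of the virtual-environment point, here's a sketch that collects trajectories from a Gymnasium environment with a random stand-in policy (the environment choice and policy are placeholders; a real pipeline would use a much stronger policy and richer environments):

```python
import gymnasium as gym

env = gym.make("CartPole-v1")
trajectories = []

for episode in range(10):
    obs, info = env.reset()
    episode_data = []
    done = False
    while not done:
        action = env.action_space.sample()          # stand-in for a real policy
        next_obs, reward, terminated, truncated, info = env.step(action)
        episode_data.append((obs, action, reward))  # training data generated "for free"
        obs = next_obs
        done = terminated or truncated
    trajectories.append(episode_data)

print(f"collected {sum(len(ep) for ep in trajectories)} transitions")
```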

Comment by HunterJay on My AI Predictions 2023 - 2026 · 2023-10-16T21:19:44.297Z · LW · GW

I am honestly very surprised it became a front page post too! It totally is just speculation.

I tried to be super clear that these were just babbled guesses, and I was mainly just telling people to try to do the same, rather than trusting my starting point here.

The other thing that surprised me is that there haven't been too many comments saying "this part is off", or "you missed trend X!". I was kind of hoping for that!

Comment by HunterJay on My AI Predictions 2023 - 2026 · 2023-10-16T14:13:19.081Z · LW · GW

Agreed that lower-depth models are possible; a few other possibilities:

  • Smaller models with lower latency could be used, possibly distilled down from larger ones.

  • Compute improvements might make it practical to run onboard (like with Tesla's self-driving hardware inside the chest of their android).

  • New architectures could work on more than one time scale -- kind of like humans do. E.g. when we walk, not all of the processing is done in the brain. Your spinal cord can handle a tonne of it autonomously. (Will find source tomorrow).

  • LLM-type models could do the parts that can accept higher latency, leaving lower-level processes to handle themselves. Imagine for a household cleaning robot that an LLM-based agent puts out high-level thoughts like "Scan the room for dirty clothes. ... Fold them. ... Put them in the third drawer", and existing low-level stuff actually carries out the instructions. That's an exaggerated example, but you get the idea; it doesn't have to replace the PID controller! (There's a toy sketch of this split below.)
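To make the split concrete, a toy sketch: the low-level loop is an ordinary PID controller, and the "LLM planner" is just a hard-coded placeholder (the gains, setpoints, and plan are all made up for illustration).

```python
class PID:
    """Ordinary low-level controller; nothing here needs an LLM."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def llm_plan(task: str) -> list[str]:
    """Placeholder for the slow, high-latency LLM planner."""
    return ["Scan the room for dirty clothes.",
            "Fold them.",
            "Put them in the third drawer."]


gripper = PID(kp=1.2, ki=0.1, kd=0.05)
for step in llm_plan("tidy the bedroom"):
    print("high-level step:", step)
    # ...low-level controllers execute each step at high frequency, e.g.:
    force = gripper.update(setpoint=1.0, measurement=0.8, dt=0.01)
```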

Comment by HunterJay on My AI Predictions 2023 - 2026 · 2023-10-16T06:05:08.076Z · LW · GW

I am extremely worried about safety, but I don't know as much about it as I do about what's on the edge of consumer/engineering trends, so I think my predictions here wouldn't be useful to share right now! The main way it relates to my guesses here is if regulation successfully slows down frontier development within a few years (which I would support).

I'm doing the ARENA course async online at the moment, and possibly moving into alignment research in the next year or two, so I'm hoping to be able to chat more intelligently about alignment soonish.

Comment by HunterJay on My AI Predictions 2023 - 2026 · 2023-10-16T06:00:45.369Z · LW · GW

I broadly agree. I think AI tools are already speeding up development today, and on reflection I don't think AI becoming more capable than humans at modeling the natural world would actually be a discontinuous point on the ramp up to superintelligence.

It would be a point where AI gets much harder to predict, though, which is probably why it was on my mind when I was trying to come up with predictions.

Comment by HunterJay on My AI Predictions 2023 - 2026 · 2023-10-16T05:56:38.739Z · LW · GW

Thanks, fixed. I did mean 3.5 to 4, not 3 to 4.

Comment by HunterJay on The Dictatorship Problem · 2023-06-11T05:44:51.671Z · LW · GW

Side note -- France isn't a great example for your point here ("France, for example, is a very old, well-established and liberal democracy") because the Fifth Republic was only established in 1958. It's also notable for giving the president much stronger executive powers compared with the Fourth Republic!

Comment by HunterJay on Open & Welcome Thread - May 2022 · 2022-05-25T14:07:32.722Z · LW · GW

In the spirit of doing low status things with high potential, I am working on a site to allow commissioning of fringe erotica and am looking to hire a second web developer.

The idea is to build a place where people with niche interests can post bounties for specific stories. In my time moonlighting as an erotic author, I've noticed a lack of good sites to do freelance erotic writing work. I think the reason for this is that most people think porn is icky, so despite there being a huge market for extremely niche content, the platforms currently available are pretty abysmal. This is our opportunity.

We're currently in beta and can pay a junior-level wage, with senior-level equity. If you're a web developer who wants to join a fully remote startup, please reach out. 

As with my other startups, I began this project with the goal of generating wealth to put towards alignment research.

Comment by HunterJay on Minimum Viable Alignment · 2022-05-09T07:54:31.255Z · LW · GW

Thanks Chris, but I think you linked to the wrong thing there; I can't see your post in the last 3 years of your history either!

Comment by HunterJay on Minimum Viable Alignment · 2022-05-08T13:55:30.793Z · LW · GW

Aye, I agree it is not a solution to avoiding power seeking, only that there may be a slightly easier target to hit if we can relax as many constraints on alignment as possible.

Comment by HunterJay on Minimum Viable Alignment · 2022-05-08T13:54:12.995Z · LW · GW

Will check them out, thank you.

Comment by HunterJay on Make a Movie Showing Alignment Failures · 2022-04-15T02:36:00.737Z · LW · GW

I like this story pitch! It seems pretty compelling to me, and a clever way to show the difficulty and stakes of alignment. Good luck!

Comment by HunterJay on Yoshua Bengio on AI progress, hype and risks · 2021-11-03T23:42:32.343Z · LW · GW

I am curious if this has changed over the past 6 years since you posted this comment. Do you get the feeling that high profile researchers have shifted even further towards Xrisk concern, or if they continue with the same views as in 2016? Thanks!

Comment by HunterJay on Review of "Why AI is Harder Than We Think" · 2021-05-01T02:43:51.252Z · LW · GW

I took the original sentence to mean something like "we use things external to the brain to compute things too", which is clearly true. Writing stuff down to work through a problem is clearly doing some computation outside of the brain, for example. The confusion comes from where you draw the line -- if I'm just wiggling my fingers without holding a pen, does that still count as computing stuff outside the brain? Do you count the spinal cord as part of the brain? What about the peripheral nervous system? What about information that's computed by the outside environment and presented to my eyes? It's kind of an arbitrary line, but reading their statement charitably, I think it can still be correct.

(No response from me on the rest of your points, just wanted to back the author up a bit on this one.)

Comment by HunterJay on What a 20-year-lead in military tech might look like · 2020-07-30T02:12:24.404Z · LW · GW

I really enjoyed this writeup! I'd probably even lean a little to the pessimistic (optimistic?) side, and bet that almost all of this technology would be possible with only a few years of development from today -- though I suppose it might be 20 if development doesn't start/ramp up in earnest.

Comment by HunterJay on Plausible A.I. Takeoff Scenario Short Story · 2020-01-02T01:22:00.130Z · LW · GW

Thanks!

Comment by HunterJay on Plausible A.I. Takeoff Scenario Short Story · 2020-01-01T12:07:21.613Z · LW · GW

That's a good point, I'll write up a brief explanation/disclaimer and put it in as a footnote.

Comment by HunterJay on Plausible A.I. Takeoff Scenario Short Story · 2020-01-01T11:29:42.481Z · LW · GW

Typo corrected, thanks for that.

I agree, it's more likely for the first AGI to begin on a supercomputer at a well-funded institution. If you like, you can imagine that this AGI is not the first, but simply the first not effectively boxed. Maybe its programmer simply implemented a leaked algorithm that was developed and previously run by a large project, but changed the goal and tweaked the safeties.

In any case, it's a story, not a prediction, and I'd defend it as plausible in that context. Any story has a thousand assumptions and events that, in sequence, reduce the probability to infinitesimal. I'm just trying to give a sense of what a takeoff could be like when there is a large hardware overhang and no safety -- both of which have only a small-ish chance of occurring. That in mind, do you have an alternative suggestion for the title?