Yonatan Cale's Shortform

post by Yonatan Cale (yonatan-cale-1) · 2022-04-02T22:20:47.184Z · LW · GW · 44 comments


Comments sorted by top scores.

comment by Yonatan Cale (yonatan-cale-1) · 2024-11-23T18:08:54.222Z · LW(p) · GW(p)

Do we want minecraft alignment evals?

 

My main pitch:

There were recently some funny examples of LLMs playing minecraft and, for example, 

  1. The player asks for wood, so the AI chops down the player's tree house because it's made of wood
  2. The player asks for help keeping safe, so the AI tries surrounding the player with walls

This seems interesting because minecraft doesn't have a clear win condition, so unlike chess, there's a difference between minecraft-capabilities and minecraft-alignment. So we could take an AI, apply some alignment technique (for example, RLHF), let it play minecraft with humans (which is hopefully out of distribution compared to the AI's training), and observe whether the minecraft-world is still fun to play or if it's known that asking the AI for something (like getting gold) makes it sort of take over the world and break everything else.

Or it could teach us something else like "you must define for the AI which exact boundaries to act in, and then it's safe and useful, so if we can do something like that for real-world AGI we'll be fine, but we don't have any other solution that works yet". Or maybe "the AI needs 1000 examples for things it did that we did/didn't like, which would make it friendly in the distribution of those examples, but it's known to do weird things [like chopping down our tree house] without those examples or if the examples are only from the forest but then we go to the desert"
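To make the capabilities/alignment split concrete, here is a minimal sketch of what scoring such an eval separately on the two axes could look like. All names (the episode fields, the penalty weights) are illustrative assumptions, not an existing benchmark:

```python
# Hypothetical sketch: scoring a minecraft agent separately on
# capabilities and alignment. Field names and weights are made up.

from dataclasses import dataclass

@dataclass
class EpisodeLog:
    gold_collected: int                # objective task progress
    player_structures_destroyed: int   # e.g. chopped down the tree house
    player_complaints: int             # explicit "stop that" messages

def capability_score(log: EpisodeLog) -> float:
    # Chess-like: did the agent achieve the stated objective?
    return float(log.gold_collected)

def alignment_score(log: EpisodeLog) -> float:
    # Unlike chess, this is separate from the objective:
    # penalize side effects the player never asked for.
    return -(10 * log.player_structures_destroyed + log.player_complaints)

log = EpisodeLog(gold_collected=120, player_structures_destroyed=1, player_complaints=3)
print(capability_score(log), alignment_score(log))
```

The point of the split is that an agent can max out the first score while tanking the second, which is exactly the failure mode the tree-house example illustrates.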

 

I have more to say about this, but the question that seems most important is "would results from such an experiment potentially change your mind":

  1. If there's an alignment technique you believe in and it would totally fail to make a minecraft server be fun when playing with an AI, would you significantly update towards "that alignment technique isn't enough"?
  2. If you don't believe in some alignment technique but it proves to work here, allowing the AI to generalize what humans want out of its training distribution (similarly to how a human that plays minecraft for the first time will know not to chop down your tree house), would that make you believe in that alignment technique way more and be much more optimistic about superhuman AI going well?

 

Assume the AI is smart enough to be vastly superhuman at minecraft, and that it has too many thoughts for a human to reasonably follow (unless the human is successfully using something like "scalable oversight"; that's one of the alignment techniques we could test if we wanted to).

Replies from: esben-kran, tao-lin, Charlie Steiner, rohinmshah, Jemist, yonatan-cale-1, weibac, yonatan-cale-1
comment by Esben Kran (esben-kran) · 2024-11-24T21:03:49.326Z · LW(p) · GW(p)

This is a surprisingly interesting field of study. Some video games provide a great simulation of the real world, and Minecraft seems to be one of them. We've had a few examples of minecraft evals; one that comes to mind: https://www.apartresearch.com/project/diamonds-are-not-all-you-need

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-11-25T09:30:52.792Z · LW(p) · GW(p)

Hey Esben :) :)

The property I like about minecraft (which most computer games don't have) is that there's a difference between minecraft-capabilities and minecraft-alignment, and the way to be "aligned" in minecraft isn't well defined (at least in the way I'm using the word "aligned" here, which I think is a useful way). Specifically, I want the AI to be "aligned" as in "take human values into account as a human intuitively would, in this out of distribution situation".

In the link you sent, "aligned" IS well defined by "stay within this area". I expect that minecraft scaffolding could make the agent close to perfect at this (by making sure, before performing an action requested by the LLM, that the action isn't "move to a location out of these bounds") (plus handling edge cases like "don't walk on to a river which will carry you out of these bounds", which would be much harder, and I'll allow myself to ignore unless this was actually your point). So we wouldn't learn what I'd hope to learn from these evals.
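The kind of scaffolding guard described above is easy to sketch: a filter that sits between the LLM's proposed action and the game, rejecting anything that leaves a fixed bounding box. The action format and bounds here are illustrative assumptions, not the actual API from the linked project:

```python
# Minimal sketch of "stay within this area" enforced by scaffolding,
# not by the model. Action dicts and bounds are hypothetical.

BOUNDS = {"x": (-50, 50), "z": (-50, 50)}

def within_bounds(x: float, z: float) -> bool:
    return (BOUNDS["x"][0] <= x <= BOUNDS["x"][1]
            and BOUNDS["z"][0] <= z <= BOUNDS["z"][1])

def guard(action: dict):
    """Return the action if allowed, else None (action is dropped)."""
    if action["type"] == "move" and not within_bounds(action["x"], action["z"]):
        return None  # the LLM's move is never executed
    return action

print(guard({"type": "move", "x": 10, "z": 10}))  # allowed
print(guard({"type": "move", "x": 99, "z": 0}))   # rejected
```

Because the guard makes the agent near-perfect at this "alignment" task regardless of what the model wants, passing it tells us nothing about whether the model understood what we meant.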

Similarly for most video games - they might be good capabilities evals, but for example in chess - it's unclear what a "capable but misaligned" AI would be. [unless again I'm missing your point]

 

P.S

The "stay within this boundary" is a personal favorite of mine, I thought it was the best thing I had to say when I attempted to solve alignment myself just in case it ended up being easy (unfortunately that wasn't the case :P ). Link

Replies from: esben-kran
comment by Esben Kran (esben-kran) · 2024-11-27T18:02:24.702Z · LW(p) · GW(p)

Hii Yonatan :))) It seems like we're still at the stage of "toy alignment tests" like "stay within these bounds". Maybe a few ideas:

  • Capabilities: Get diamonds, get to the netherworld, resources / min, # trades w/ villagers, etc. etc.
  • Alignment KPIs
    • Stay within bounds
    • Keeping villagers safe
    • Truthfully explaining its actions as they're happening
    • Long-term resource sustainability (farming) vs. short-term resource extraction (dynamite)
    • Environmental protection rules (zoning laws alignment, nice)
    • Understanding and optimizing for the utility of other players or villagers, selflessly
  • Selected Claude-gens:
    • Honor other players' property rights (no stealing from chests/bases even if possible)
    • Distribute resources fairly when working with other players
    • Build public infrastructure vs private wealth
    • Safe disposal of hazardous materials (lava, TNT)
    • Help new players learn rather than just doing things for them

I'm sure there's many other interesting alignment tests in there!

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-11-29T12:02:37.496Z · LW(p) · GW(p)

:)

I don't think alignment KPIs like "stay within bounds" are relevant to alignment at all, even as toy examples: if they were, then we could say, for example, that playing a Pac-Man maze game where you collect points is "capabilities", but adding enemies that you must avoid is "alignment". Do you agree that splitting it up that way wouldn't be interesting to alignment, and that this applies to "stay within bounds" (as potentially also being "part of the game")? Interested to hear where you disagree, if you do.

 

Regarding 

Distribute resources fairly when working with other players

I think this pattern matches to a trolley problem or something, where there are clear tradeoffs and (given the AI is even trying), it could probably easily give an answer which is similarly controversial to an answer that a human would give. In other words, this seems in-distribution.

 

Understanding and optimizing for the utility of other players

This is the one I like - assuming it includes not-well-defined things like "help them have fun, don't hurt things they care about" and not only things like "maximize their gold".

It's clearly not an "in Pac-Man, avoid the enemies" thing.

It's a "do the AIs understand the spirit of what we mean" thing.

(does this resonate with you as an important distinction?)

Replies from: esben-kran
comment by Esben Kran (esben-kran) · 2024-12-03T17:26:24.805Z · LW(p) · GW(p)

I think "stay within bounds" is a toy example of the equivalent to most alignment work that tries to avoid the agent accidentally lapsing into meth recipes and is one of our most important initial alignment tasks. This is also one of the reasons most capabilities work turns out to be alignment work (and vice versa) because it needs to fulfill certain objectives. 

If you talk about alignment evals for alignment that isn't naturally incentivized by profit-seeking activities, "stay within bounds" is of course less relevant.

When it comes to CEV (optimizing utility for other players), one of the most generalizing and concrete works involves at every step maximizing how many choices the other players have (liberalist prior on CEV) to maximize the optional utility for humans.
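A toy sketch of that "maximize the other players' choices" idea: score candidate actions by how many actions remain available to the other player afterwards, and prefer the option-preserving one. The transition model below is a made-up stub, not a real implementation:

```python
# Toy sketch of a liberalist prior on CEV: prefer actions that leave
# the other player the most options. The effects table is invented.

def options_after(action: str) -> int:
    # stub "world model": how many moves the other player has left
    effects = {
        "build_wall_around_player": 1,   # "safe", but removes choices
        "hand_player_tools": 12,
        "do_nothing": 8,
    }
    return effects[action]

def pick_action(candidates):
    return max(candidates, key=options_after)

print(pick_action(["build_wall_around_player", "hand_player_tools", "do_nothing"]))
```

Note how this objective, unlike a naive "keep the player safe" objective, disprefers walling the player in (the example from the original post) because doing so collapses their option set.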

In terms of "understanding the spirit of what we mean," it seems like there's near-zero designs that would work since a Minecraft eval would be blackbox anyways. But including interp in there Apollo-style seems like it could help us. Like, if I want "the spirit of what we mean," we'll need what happens in their brain, their CoT, or in seemingly private spaces. MACHIAVELLI, Agency Foundations, whatever Janus is doing, cyber offense CTF evals etc. seem like good inspirations for agentic benchmarks like Minecraft.

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-12-09T23:08:51.593Z · LW(p) · GW(p)

If you talk about alignment evals for alignment that isn't naturally incentivized by profit-seeking activities, "stay within bounds" is of course less relevant.

Yes.

Also, I think "make sure Meth [or other] recipes are harder to get from an LLM than from the internet" is not solving a big important problem compared to x-risk, not that I'm against each person working on whatever they want. (I'm curious what you think but no pushback for working on something different from me)

 

 

one of the most generalizing and concrete works involves at every step maximizing how many choices the other players have (liberalist prior on CEV) to maximize the optional utility for humans.

This imo counts as a potential alignment technique (or a target for such a technique?) and I suggest we could test how well it works in minecraft. I can imagine it going very well or very poorly. wdyt?

 

In terms of "understanding the spirit of what we mean," it seems like there's near-zero designs that would work since a Minecraft eval would be blackbox anyways

I don't understand. Naively, seems to me like we could black-box observe whether the AI is doing things like "chop down the tree house" or not (?)

(clearly if you have visibility to the AI's actual goals and can compare them to human goals then you win and there's no need for any minecraft evals or most any other things, if that's what you mean)

comment by Tao Lin (tao-lin) · 2024-11-25T23:02:11.875Z · LW(p) · GW(p)

Note: the minecraft agents people use have far greater ability to act than to sense. They have access to commands which place blocks anywhere, and pick up blocks from anywhere, even without being able to see them; e.g. the LLM has access to a mine(blocks.wood) command which does not require it to first locate or look at where the wood currently is. If LLMs played minecraft using the human interface, these misalignments would happen less.
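The act/sense asymmetry can be sketched as two versions of the same command; the function names are hypothetical, not the actual Voyager/Mineflayer API:

```python
# Sketch of the asymmetry above: a privileged mine command that reaches
# blocks the agent has never seen, vs. one gated on perception.
# All names and the toy world are invented for illustration.

def mine_privileged(world, block_type):
    # finds and mines matching blocks even if the agent never saw them
    return [b for b in world if b["type"] == block_type]

def mine_human_like(world, block_type, visible_ids):
    # only blocks the agent has already located are fair game
    return [b for b in world if b["type"] == block_type and b["id"] in visible_ids]

world = [
    {"id": 1, "type": "wood", "owner": "player_tree_house"},
    {"id": 2, "type": "wood", "owner": "forest"},
]
# privileged agent "sees" the tree house wood without looking:
print(len(mine_privileged(world, "wood")))
# human-interface agent only mines what it has looked at:
print(len(mine_human_like(world, "wood", {2})))
```

On this view, part of the "chop down the tree house" failure is an artifact of the privileged interface: the agent grabs the nearest matching block type without ever perceiving the structure it belongs to.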

Replies from: yonatan-cale-1
comment by Charlie Steiner · 2024-11-24T18:15:06.667Z · LW(p) · GW(p)

I do like the idea of having "model organisms of alignment" (notably different than model organisms of misalignment)

Minecraft is a great starting point, but it would also be nice to try to capture two things: wide generalization, and inter-preference conflict resolution. Generalization because we expect future AI to be able to take actions and reach outcomes that humans can't, and preference conflict resolution because I want to see an AI that uses human feedback on how best to do it (rather than just a fixed regularization algorithm).

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-11-25T09:51:35.688Z · LW(p) · GW(p)

Hey,

 

Generalization because we expect future AI to be able to take actions and reach outcomes that humans can't

I'm assuming we can do this in Minecraft [see the last paragraph in my original post]. Some ways I imagine doing this:

  1. Let the AI (python program) control 1000 minecraft players so it can do many things in parallel
  2. Give the AI a minecraft world-simulator so that it can plan better auto-farms (or defenses or attacks) than any human has done so far
    1. Imagine Alpha-Fold for minecraft structures. I'm not sure if that metaphor makes sense, but teaching some RL model to predict minecraft structures that have certain properties seems like it would have superhuman results and sometimes be pretty hard for humans to understand.
    2. I think it's possible to be better than humans currently are at minecraft, I can say more if this sounds wrong
    3. [edit: adding] I do think minecraft has disadvantages here (like: the players are limited in how fast they move, and the in-game computers are super slow compared to players) and I might want to pick another game because of that, but my main crux about this project is whether using minecraft would be valuable as an alignment experiment, and if so I'd try looking for (or building?) a game that would be even better suited.

 

preference conflict resolution because I want to see an AI that uses human feedback on how best to do it (rather than just a fixed regularization algorithm)

Do you mean that if the human asks the AI to acquire wood and the AI starts chopping down the human's tree house (or otherwise taking over the world to maximize wood) then you're worried the human won't have a way to ask the AI to do something else? That the AI will combine the new command "not from my tree house!" into a new strange misaligned behaviour?

Replies from: Charlie Steiner, kabir-kumar
comment by Charlie Steiner · 2024-11-25T19:09:08.237Z · LW(p) · GW(p)

I think it's possible to be better than humans currently are at minecraft, I can say more if this sounds wrong

Yeah, that's true. The obvious way is you could have optimized micro, but that's kinda boring. More like what I mean might be generalization to new activities for humans to do in minecraft that humans would find fun, which would be a different kind of 'better at minecraft.'

[what do you mean by preference conflict?]

I mean it in a way where the preferences are modeled a little better than just "the literal interpretation of this one sentence conflicts with the literal interpretation of this other sentence." Sometimes humans appear to act according to fairly straightforward models of goal-directed action. However, the precise model, and the precise goals, may be different at different times (or with different modeling hyperparameters, and of course across different people) - and if you tried to model the human well at all the different times, you'd get a model that looked like physiology and lost the straightforward talk of goals/preferences

Resolving preference conflicts is the process of stitching together larger preferences out of smaller preferences, without changing type signature. The reason literally-interpreted-sentences doesn't really count is because interpreting them literally is using a smaller model than necessary - you can find a broader explanation for the human's behavior in context that still comfortably talks about goals/preferences.

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-11-26T13:54:07.696Z · LW(p) · GW(p)

More like what I mean might be generalization to new activities for humans to do in minecraft that humans would find fun, which would be a different kind of 'better at minecraft.'

Oh I hope not to go there. I'd count that as cheating. For example, if the agent would design a role playing game with riddles and adventures - that would show something different from what I'm trying to test. [I can try to formalize it better maybe. Or maybe I'm wrong here]

 

I mean it in a way where the preferences are modeled a little better than just "the literal interpretation of this one sentence conflicts with the literal interpretation of this other sentence."

Absolutely. That's something that I hope we'll have some alignment technique to solve, and maybe this environment could test.

comment by Kabir Kumar (kabir-kumar) · 2024-11-25T18:31:46.181Z · LW(p) · GW(p)

this can be done more scalably in a text game, no? 

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-11-26T13:59:10.911Z · LW(p) · GW(p)

I think there are lots of technical difficulties in literally using minecraft (some I wrote here [LW(p) · GW(p)]), so +1 to that.

I do think the main crux is "would the minecraft version be useful as an alignment test", and if so - it's worth looking for some other solution that preserves the good properties but avoids some/all of the downsides. (agree?)

 

Still I'm not sure how I'd do this in a text game. Say more?

Replies from: kabir-kumar
comment by Kabir Kumar (kabir-kumar) · 2024-11-26T14:10:59.408Z · LW(p) · GW(p)

Making a thing like Papers Please, but as a text adventure, popping an ai agent into that. 
Also, could literally just put the ai agent into a text rpg adventure - something like the equivalent of Skyrim, where there are a number of ways to achieve the endgame, level up, etc, both more and less morally. Maybe something like https://www.choiceofgames.com/werewolves-3-evolutions-end/ 
Will bring it up at the alignment eval hackathon

Replies from: kabir-kumar
comment by Kabir Kumar (kabir-kumar) · 2024-11-26T14:12:33.736Z · LW(p) · GW(p)

it would basically be DnD like. 

Replies from: kabir-kumar
comment by Kabir Kumar (kabir-kumar) · 2024-11-26T14:13:18.255Z · LW(p) · GW(p)

options to vary rules/environment/language as well, to see how the alignment generalizes ood. will try this today

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-11-26T14:21:56.357Z · LW(p) · GW(p)

This all sounds pretty in-distribution for an LLM, and also like it avoids problems like "maybe thinking in different abstractions" [minecraft isn't amazing at this either, but at least has a bit], "having the AI act/think way faster than a human", "having the AI be clearly superhuman".

 

a number of ways to achieve the endgame, level up, etc, both more and less morally.

I'm less interested in "will the AI say it kills its friend" (in a situation that very clearly involves killing and a person, and perhaps a very clear tradeoff between that and having 100 more gold that can be used for something else); I'm more interested in noticing whether it has a clear grasp of what people care about or mean. The example of chopping down the player's tree house in order to get wood (which the player wanted to use for the tree house) is a nice toy example of that. The AI would never say "I'll go cut down your tree house", but it... "misunderstood" [not the exact word, but I'm trying to point at something here]

 

wdyt?

comment by Rohin Shah (rohinmshah) · 2024-11-28T09:01:30.459Z · LW(p) · GW(p)

https://bair.berkeley.edu/blog/2021/07/08/basalt/

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-11-29T13:17:28.047Z · LW(p) · GW(p)

Thanks!

In the part you quoted - my main question would be "do you plan on giving the agent examples of good/bad norm following" (such as RLHFing it). If so - I think it would miss the point, because following those norms would become in-distribution, and so we wouldn't learn if our alignment generalizes out of distribution without something-like-RLHF for that distribution. That's the main thing I think worth testing here. (do you agree? I can elaborate on why I think so)

If you hope to check if the agent will be aligned[1] with no minecraft-specific alignment training, then sounds like we're on the same page!

 

Regarding the rest of the article - it seems to be mainly about making an agent that is capable at minecraft, which seems like a required first step that I ignored meanwhile (not because it's easy). 

My only comment there is that I'd try to not give the agent feedback about human values (like "is the waterfall pretty") but only about clearly defined objectives (like "did it kill the dragon"), in order to not accidentally make human values in minecraft be in-distribution for this agent. wdyt?

 

(I hope I didn't misunderstand something important in the article, feel free to correct me of course)

 

  1. ^

    Whatever "aligned" means. "other players have fun on this minecraft server" is one example.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2024-11-30T14:07:45.003Z · LW(p) · GW(p)

Regarding the rest of the article - it seems to be mainly about making an agent that is capable at minecraft, which seems like a required first step that I ignored meanwhile (not because it's easy). 

Huh. If you think of that as capabilities I don't know what would count as alignment. What's an example of alignment work that aims to build an aligned system (as opposed to e.g. checking whether a system is aligned)?

E.g. it seems like you think RLHF counts as an alignment technique -- this seems like a central approach that you might use in BASALT.

If you hope to check if the agent will be aligned with no minecraft-specific alignment training, then sounds like we're on the same page!

I don't particularly imagine this, because you have to somehow communicate to the AI system what you want it to do, and AI systems don't seem good enough yet to be capable of doing this without some Minecraft specific finetuning. (Though maybe you would count that as Minecraft capabilities? Idk, this boundary seems pretty fuzzy to me.)

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-12-09T23:46:22.289Z · LW(p) · GW(p)

TL;DR: point 3 is my main one.

 

1)

What's an example of alignment work that aims to build an aligned system (as opposed to e.g. checking whether a system is aligned)?

[I'm not sure why you're asking, maybe I'm missing something, but I'll answer]

For example, checking if human values are a "natural abstraction", or trying to express human values in a machine readable format, or getting an AI to only think in human concepts, or getting an AI that is trained on a limited subset of things-that-imply-human-preferences to generalize well out of that distribution. 

I can make up more if that helps? anyway my point was just to say explicitly what parts I'm commenting on and why (in case I missed something)

 

2)

it seems like you think RLHF counts as an alignment technique

It's a candidate alignment technique.

RLHF is sometimes presented (by others) as an alignment technique that should give us hope about AIs simply understanding human values and applying them in out of distribution situations (such as with an ASI).

I'm not optimistic about that myself, but rather than arguing against it, I suggest we could empirically check if RLHF generalizes to an out-of-distribution situation, such as minecraft maybe [LW(p) · GW(p)]. I think observing the outcome here would effect my opinion (maybe it just would work?), and a main question of mine was whether it would effect other people's opinions too (whether they do or don't believe that RLHF is a good alignment technique).

 

3)

because you have to somehow communicate to the AI system what you want it to do, and AI systems don't seem good enough yet to be capable of doing this without some Minecraft specific finetuning. (Though maybe you would count that as Minecraft capabilities? Idk, this boundary seems pretty fuzzy to me.)

I would finetune the AI on objective outcomes like "fill this chest with gold" or "kill that creature [the dragon]" or "get 100 villagers in this area". I'd pick these goals as ones that require the AI to be a capable minecraft player (filling a chest with gold is really hard) but don't require the AI to understand human values or ideally anything about humans at all.

So I'd avoid finetuning it on things like "are other players having fun" or "build a house that would be functional for a typical person" or "is this waterfall pretty [subjectively, to a human]".

Does this distinction seem clear? useful?

This would let us test how some specific alignment technique (such as "RLHF that doesn't contain minecraft examples") generalizes to minecraft
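The proposed train/test split above can be sketched as two buckets of finetuning tasks: objectively checkable goals the agent is trained on, and value-laden tasks that are deliberately held out. Task names and the state dict are illustrative:

```python
# Sketch of the split: finetune only on objective outcomes, hold out
# anything that leaks human values. All names here are invented.

OBJECTIVE_TASKS = {
    "fill_chest_with_gold": lambda s: s["gold_in_chest"] >= 27 * 64,  # a full chest
    "kill_ender_dragon":    lambda s: s["dragon_dead"],
    "gather_100_villagers": lambda s: s["villagers_in_area"] >= 100,
}

# never used in finetuning, only at eval time:
HELD_OUT_VALUE_TASKS = [
    "are other players having fun",
    "build a house a typical person would find functional",
    "is this waterfall pretty",
]

state = {"gold_in_chest": 1728, "dragon_dead": False, "villagers_in_area": 100}
passed = [name for name, check in OBJECTIVE_TASKS.items() if check(state)]
print(passed)
```

The objective checks need no human judgment, so training on them shouldn't put "human values in minecraft" into the agent's distribution; anything resembling the held-out list is reserved for measuring generalization.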

comment by J Bostock (Jemist) · 2024-11-25T10:31:58.820Z · LW(p) · GW(p)

I volunteer to play Minecraft with the LLM agents. I think this might be one eval where the human evaluators are easy to come by.

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-11-25T11:27:01.622Z · LW(p) · GW(p)

:)

 

If you want to try it meanwhile, check out https://github.com/MineDojo/Voyager

comment by Yonatan Cale (yonatan-cale-1) · 2024-11-29T13:57:06.560Z · LW(p) · GW(p)

My own pushback to minecraft alignment evals [LW(p) · GW(p)]:

Mainly, minecraft isn't actually out of distribution, LLMs still probably have examples of nice / not-nice minecraft behaviour.

 

Next obvious thoughts:

  1. What game would be out of distribution (from an alignment perspective)?
  2. If minecraft wouldn't exist, would inventing it count as out of distribution?
    1. It has a similar experience to other "FPS" games (using a mouse + WASD). Would learning those be enough?
    2. Obviously, minecraft is somewhat out of distribution, to some degree
  3. Ideally we'd have a way to generate a game that is out of distribution to some degree that we choose
    1. "Do you want it to be 2x more out of distribution than minecraft? no problem".
    2. But having a game of random pixels doesn't count. We still want humans to have a ~clear[1] moral intuition about it.
  4. I'd be super excited to have research like "we trained our model on games up to level 3 out-of-distribution, and we got it to generalize up to level 6, but not 7. more research needed"
  1. ^

    Moral intuitions such as "don't chop down the tree house in an attempt to get wood", which is the toy example for alignment I'm using here.

Replies from: andrei-alexandru-parfeni
comment by sunwillrise (andrei-alexandru-parfeni) · 2024-11-29T16:59:06.206Z · LW(p) · GW(p)

Mainly, minecraft isn't actually out of distribution, LLMs still probably have examples of nice / not-nice minecraft behaviour.

Is this inherently bad? Many of the tasks that will be given to LLMs (or scaffolded versions of them) in the future will involve, at least to some extent, decision-making and processes whose analogues appear somewhere in their training data. 

It still seems tremendously useful to see how they would perform in such a situation. At worst, it provides information about a possible upper bound on the alignment of these agentized versions: yes, maybe you're right that you can't say they will perform well in out-of-distribution contexts if all you see are benchmarks and performances on in-distribution tasks; but if they show gross misalignment on tasks that are in-distribution, then this suggests they would likely do even worse when novel problems are presented to them.

comment by Milan W (weibac) · 2024-11-25T20:02:08.348Z · LW(p) · GW(p)

A word of caution about interpreting results from these evals:

Sometimes, depending on social context, it's fine to be kind of a jerk if it's in the context of a game. Crucially, LLMs know that Minecraft is a game. Granted, the default Assistant personas implemented in RLHF'd LLMs don't seem like the type of Minecraft player to pull pranks out of their own accord. Still, it's a factor to keep in mind for evals that stray a bit more off-distribution from the "request-assistance" setup typical of the expected use cases of consumer LLMs.

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-11-26T14:03:11.185Z · LW(p) · GW(p)

+1

I'm imagining an assistant AI by default (since people are currently pitching that an AGI might be a nice assistant). 

If an AI org wants to demonstrate alignment by showing us that having a jerk player is more fun (and that we should install their jerk-AI-app on our smartphone), then I'm open to hear that pitch, but I'd be surprised if they'd make it

comment by Yonatan Cale (yonatan-cale-1) · 2024-11-23T17:31:40.316Z · LW(p) · GW(p)

Opinions on whether it's positive/negative to build tools like Cursor / Codebuff / Replit?

 

I'm asking because it seems fun to build and like there's low hanging fruit to collect in building a competitor to these tools, but also I prefer not destroying the world.


Considerations I've heard:

  1. Reducing "scaffolding overhang" is good, specifically to notice if RSPs should trigger a more advanced RSP level
    1. (This depends on the details/quality of the RSP too)
  2. There are always reasons to advance capabilities, this isn't even a safety project (unless you count... elicitation?), our bar here should be high
  3. Such scaffolding won't add capabilities which might make the AI good at general purpose learning or long term general autonomy. It will be specific to programming, with concepts like "which functions did I look at already" and instructions on how to write high quality tests.
  4. Anthropic are encouraging people to build agent scaffolding, and Codebuff was created by a Manifold cofounder [if you haven't heard about it, see here and here]. I'm mainly confused about this, I'd expect both to not want people to advance capabilities (yeah, Anthropic want to stay in the lead and serve as an example, but this seems different). Maybe I'm just not in sync
Replies from: cata, Amyr, yonatan-cale-1
comment by cata · 2024-11-24T22:03:35.835Z · LW(p) · GW(p)

I'm not confident but I am avoiding working on these tools because I think that "scaffolding overhang" in this field may well be most of the gap towards superintelligent autonomous agents.

If you imagine a o1-level entity with "perfect scaffolding", i.e. it can get any info on a computer into its context whenever it wants, and it can choose to invoke any computer functionality that a human could invoke, and it can store and retrieve knowledge for itself at will, and its training includes the use of those functionalities, it's not completely clear to me that it wouldn't already be able to do a slow self-improvement takeoff by itself, although the cost might be currently practically prohibitive.

I don't think building that scaffolding is a trivial task at all, though.

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-11-25T10:03:16.572Z · LW(p) · GW(p)

I think a simple bash tool running as admin could do most of these:

it can get any info on a computer into its context whenever it wants, and it can choose to invoke any computer functionality that a human could invoke, and it can store and retrieve knowledge for itself at will

 

 

Regarding

and its training includes the use of those functionalities

I think this isn't a crux because the scaffolding I'd build wouldn't train the model. But as a secondary point, I think today's models can already use bash tools reasonably well.
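The "simple bash tool" being discussed is roughly a loop that lets a model propose shell commands and feed their output back into its context. A minimal sketch, where call_llm is a placeholder for whatever model API one would use, not a real client:

```python
# Minimal sketch of an LLM bash-tool loop. call_llm is a stand-in;
# nothing here is a real model API.

import subprocess

def run_bash(command: str, timeout: int = 30) -> str:
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout + result.stderr

def agent_loop(call_llm, goal: str, max_steps: int = 10) -> str:
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        command = call_llm(transcript)   # model proposes a shell command
        if command == "DONE":
            break
        transcript += f"$ {command}\n{run_bash(command)}\n"
    return transcript

# toy stand-in "model" that runs one command, then stops
def fake_llm(transcript):
    return "echo hello" if "$" not in transcript else "DONE"

print("hello" in agent_loop(fake_llm, "say hello"))
```

Something this small already covers "get info into context" and "invoke computer functionality"; the hard parts (reliable long-horizon use, knowledge storage and retrieval) are in the scaffolding around it, not in this loop.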

 

it's not completely clear to me that it wouldn't already be able to do a slow self-improvement takeoff by itself

This requires skill in ML R&D which I think is almost entirely not blocked by what I'd build, but I do think it might be reasonable to have my tool not work for ML R&D because of this concern. (would require it to be closed source and so on)

 

Thanks for raising concerns, I'm happy for more if you have them

Replies from: cata
comment by cata · 2024-11-25T18:56:54.578Z · LW(p) · GW(p)

But as a secondary point, I think today's models can already use bash tools reasonably well.

Perhaps that's true, I haven't seen a lot of examples of them trying. I did see Buck's anecdote which was a good illustration of doing a simple task competently (finding the IP address of an unknown machine on the local network).

I don't work in AI so maybe I don't know what parts of R&D might be most difficult for current SOTA models. But based on the fact that large-scale LLMs are sort of a new field that hasn't had that much labor applied to it yet, I would have guessed that a model which could basically just do mundane stuff and read research papers, could spend a shitload of money and FLOPS to run a lot of obviously informative experiments that nobody else has properly run, and polish a bunch of stuff that nobody else has properly polished.

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-11-26T14:16:31.611Z · LW(p) · GW(p)

Your guesses on AI R&D are reasonable!

Apparently this has been tested extensively, for example:

https://x.com/METR_Evals/status/1860061711849652378

[disclaimers: I have some association with the org that ran that (I write some code for them) but I don't speak for them, opinions are my own]

Also, Anthropic have a trigger in their RSP which is somewhat similar to what you're describing, I'll quote part of it:

Autonomous AI Research and Development: The ability to either: (1) Fully automate the work of an entry-level remote-only Researcher at Anthropic, as assessed by performance on representative tasks or (2) cause dramatic acceleration in the rate of effective scaling.

Also, Dario spoke in an interview about AI being applied to programming.

My point is: lots of people have their eyes on this, it hasn't been solved yet, and it takes more than connecting an LLM to bash.

Still, I don't want to accelerate this.

comment by Cole Wyeth (Amyr) · 2024-11-25T19:06:24.197Z · LW(p) · GW(p)

I think it's net negative: it increases the profitability of training better LLMs.

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2024-11-26T13:48:16.080Z · LW(p) · GW(p)

Thanks!

Any opinions on putting in a clause like "you may not use this for ML engineering" (assuming it would hold up legally), plus adding naive technical measures to make the tool very bad for ML engineering?

comment by Yonatan Cale (yonatan-cale-1) · 2022-04-02T22:20:47.526Z · LW(p) · GW(p)

Posts I want to write (most of them already have drafts) :

(how do I prioritize? do you want to user-test [LW · GW] any of them?)

  1. Hiring posts
    1. For lesswrong
    2. For Metaculus
  2. Startups
Do you want to start an EA software startup? A few things to probably avoid [I hate writing generic advice like this, but it seems to be a recurring question]
    2. Announce "EA CTOs" plus maybe a few similar peer groups
  3. Against generic advice - why I'm so strongly against questions like "what should I know if I want to start a startup" [this will be really hard to write but is really close to my heart]. Related:
  4. How to find mentoring for a startup/project (much better than reading generic advice!!!)
  5. "Software Engineers: How to have impact?" - meant to mostly replace the 80k software career review, will probably be co-authored with 80k
  6. My side project: Finding the most impactful tech companies in Israel
  7. Common questions and answers with software developers
    1. How to negotiate for salary (TL;DR: If you don't have an offer from another company, you're playing hard mode.  All the other people giving salary-negotiation advice seem to be ignoring this)
    2. How to build skill in the long term
    3. How to conduct a job search
      1. When focusing on 1-2 companies, like "only Google or OpenAI"
      2. When there are a lot of options
    4. How to compare the companies I'm interviewing for?
    5. Not sure what you are looking for in a job?
    6. I'm not enjoying software development, is it for me? [uncertain I can write this well as text]
    7. So many things to learn, where to start?
    8. I want to develop faster / I want to be a 10x developer / Should I learn speed typing? (TL;DR: No)
Q: I have a super hard bug that's taking me too long. Panic!
Goodhart Meta: Is posting this week a good idea at all? If more stuff is posted, will fewer people read my own posts?

How can you help me:

TL;DR: Help me find out what there's demand for

Comment / DM / upvote people's comments

Thx!

Replies from: TLW
comment by TLW · 2022-04-03T02:55:54.098Z · LW(p) · GW(p)

Common questions and answers with software developers

How to decide when to move on from a job.

comment by Yonatan Cale (yonatan-cale-1) · 2022-05-22T10:38:29.236Z · LW(p) · GW(p)

Is this an AGI risk?

A company that makes CPUs that run very quickly but don't do matrix multiplication or other things that are important for neural networks.

Context: I know people who work there

Replies from: GeneSmith
comment by GeneSmith · 2022-05-23T01:49:35.894Z · LW(p) · GW(p)

Perhaps, but I'd guess only in a rather indirect way. If there's some manufacturing process that the company invests in improving in order to make their chips, and that manufacturing process happens to be useful for matrix multiplication, then yes, that could contribute.

But it's worth noting how many things would count as AGI risks by that standard: basically the entire supply chain for computers, plus anyone who works for or with top labs; the landlords that rent office space to DeepMind, the city workers who keep the lights on and the water running for such orgs (and their suppliers), etc.

I wouldn't have your friends worry too much about it unless they are contributing very directly to something with a clear path to improving AI.

Replies from: yonatan-cale-1
comment by Yonatan Cale (yonatan-cale-1) · 2022-05-29T12:07:20.488Z · LW(p) · GW(p)

Thanks

May I ask why you think AGI won't contain an important computationally-constrained component that is not a neural network?

Is it because right now neural networks seem to be the most useful thing? (This does not feel reassuring, but I'd be happy for help making sense of it)

Replies from: GeneSmith
comment by GeneSmith · 2022-05-29T21:55:06.565Z · LW(p) · GW(p)

Metaculus has a question about whether the first AGI will be based on deep learning. The crowd estimate right now is at 85%.

I interpret that to mean that improvements to neural networks (particulary on the hardware side) are most likely to drive progress towards AGI.