Posts

Reflections on "Making the Atomic Bomb" 2023-08-17T02:48:19.933Z
The shape of AGI: Cartoons and back of envelope 2023-07-17T20:57:30.371Z
Metaphors for AI, and why I don’t like them 2023-06-28T22:47:54.427Z
Why I am not a longtermist (May 2022) 2023-06-06T20:36:17.563Z
The (local) unit of intelligence is FLOPs 2023-06-05T18:23:06.458Z
GPT as an “Intelligence Forklift.” 2023-05-19T21:15:03.385Z
AI will change the world, but won’t take it over by playing “3-dimensional chess”. 2022-11-22T18:57:29.604Z

Comments

Comment by boazbarak on The shape of AGI: Cartoons and back of envelope · 2023-12-16T04:46:54.295Z · LW · GW

I was thinking of this as a histogram: the probability that the model solves the task at that level of quality.

Comment by boazbarak on Reflections on "Making the Atomic Bomb" · 2023-08-28T17:53:29.187Z · LW · GW

I indeed believe that regulation should focus on deployment rather than on training.

Comment by boazbarak on Self-driving car bets · 2023-08-19T09:41:59.661Z · LW · GW

See also my post https://www.lesswrong.com/posts/gHB4fNsRY8kAMA9d7/reflections-on-making-the-atomic-bomb

The Manhattan Project was all about taking something that’s known to work in theory and solving all the Z_n’s.

Comment by boazbarak on Self-driving car bets · 2023-07-30T01:11:58.765Z · LW · GW

There is a general phenomenon in tech, expressed many times, of people over-estimating the short-term consequences of a technology and under-estimating the longer-term ones (e.g., "Amara's law").

I think that often it is possible to see that current technology is on track to achieve X, where X is widely perceived as the main obstacle for the real-world application Y. But once you solve X, you discover that there is a myriad of other "smaller" problems Z_1, Z_2, Z_3, ... that you need to resolve before you can actually deploy it for Y.

And of course, there is always a huge gap between demonstrating that you solved X on some clean academic benchmark and needing to do so "in the wild". This is particularly an issue in self-driving, where errors can literally be deadly, but it arises in many other applications as well.

I do think that one lesson we can draw from self-driving is that there is a huge gap between full autonomy and "assistance" with human supervision. So I would expect to see AI deployed as (increasingly sophisticated) "assistants" way before AI systems are actually able to function as "drop-in" replacements for current human jobs. This is part of the point I was making here.

Comment by boazbarak on The shape of AGI: Cartoons and back of envelope · 2023-07-21T23:29:41.110Z · LW · GW

Some things like that have already happened - bigger models are better at utilizing tools such as in-context learning and chain-of-thought reasoning. But again, whenever people plot any graph of such reasoning capabilities as a function of model compute or size (e.g., the BIG-bench paper), the X axis is always logarithmic. For specific tasks, the dependence on log compute is often sigmoid-like (flat for a long time, then rising more sharply as a function of log compute), but as mentioned above, when you average over many tasks you get this type of linear dependence.
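To make the averaging point concrete, here is a minimal numerical sketch (the task thresholds and sharpness values are made-up illustrations, not fits to any benchmark): each task's success rate is a sigmoid in log-compute, but averaging over many tasks whose thresholds are spread out yields an aggregate curve that is roughly linear in log-compute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tasks: each has a sigmoid success curve in log10(compute),
# with its own threshold (midpoint) and sharpness.
n_tasks = 200
thresholds = rng.uniform(18, 26, size=n_tasks)   # log10(FLOPs) at which a task "clicks"
sharpness = rng.uniform(1.0, 3.0, size=n_tasks)  # how steep each per-task sigmoid is

def mean_success(log_c):
    # average success probability over all tasks at a given log10(compute)
    return (1.0 / (1.0 + np.exp(-sharpness * (log_c - thresholds)))).mean()

# Individual tasks are flat-then-sharp; the average is close to linear in log-compute.
for c in np.linspace(18, 26, 9):
    print(f"log10(FLOPs) = {c:4.1f}   mean success over tasks = {mean_success(c):.2f}")
```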

Comment by boazbarak on The shape of AGI: Cartoons and back of envelope · 2023-07-20T14:23:39.885Z · LW · GW

Ok drew it on the back  now :)

Comment by boazbarak on The shape of AGI: Cartoons and back of envelope · 2023-07-19T01:36:53.415Z · LW · GW

One can make all sorts of guesses, but based on the evidence so far, AIs have a different skill profile than humans. This means that if we think of any job which requires a large set of skills, then for a long period of time, even if AIs beat the human average at some of them, they will perform worse than humans at others.

Comment by boazbarak on The shape of AGI: Cartoons and back of envelope · 2023-07-18T20:35:13.803Z · LW · GW

I always thought the front was the other side, but looking at Google images you are right.... don't have time now to redraw this but you'll just have to take it on faith that I could have drawn it on the other side 😀

Comment by boazbarak on The shape of AGI: Cartoons and back of envelope · 2023-07-18T20:32:48.821Z · LW · GW

> On the other hand, if one starts creating LLM-based "artificial AI researchers", one would probably create diverse teams of collaborating "artificial AI researchers" in the spirit of multi-agent LLM-based architectures... So, one would try to reproduce the whole teams of engineers and researchers, with diverse participants.

I think this can be an approach to create a diversity of styles, but not necessarily of capabilities. A bit of prompt engineering telling the model to pretend to be some expert X can help on some benchmarks, but the returns diminish very quickly. So you can have a model pretending to be this type of person or that, but it will still suck at Tic-Tac-Toe. (For example, GPT-4 doesn't recognize a winning move even when I tell it to play like Terence Tao.)

 

Regarding the existence of compact ML programs, I agree that it is not known. I would say, however, that the main benefit of architectures like transformers hasn't been so much to save on the total number of FLOPs as to organize these FLOPs so they are best suited for modern GPUs - that is, to ensure that the majority of the FLOPs are spent multiplying dense matrices.
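A rough back-of-envelope of that last claim, under simplified assumptions (hypothetical dimensions, one multiply-add counted as 2 FLOPs, many details of real implementations ignored): per token, the dense matrix multiplies in a transformer layer dwarf the elementwise operations.

```python
# Illustrative per-token FLOP count for one transformer layer
# (assumed: d_model = 4096, 4x MLP expansion, sequence length 2048).
d, seq = 4096, 2048

matmul_flops = (
    2 * 4 * d * d        # Q, K, V and output projections
    + 2 * 8 * d * d      # MLP up- and down-projections (4x expansion)
    + 2 * 2 * seq * d    # attention scores and value mixing for one token
)
elementwise_flops = (
    20 * d               # layer norms, residual adds, nonlinearity (rough per-element constant)
    + 6 * seq            # softmax over one attention row
)

total = matmul_flops + elementwise_flops
print(f"share of FLOPs in dense matrix multiplies ≈ {matmul_flops / total:.4f}")
```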

Comment by boazbarak on The shape of AGI: Cartoons and back of envelope · 2023-07-17T22:49:24.519Z · LW · GW

I agree that self-improvement is an assumption that probably deserves its own blog post. If you believe exponential self-improvement will kick in at some point, then you can consider this discussion as pertaining to the period before that happens.

My own sense is that:

  1. While we might not be super close to them, there are probably fundamental limits to how much intelligence you can pack per FLOP. I don't believe there is a small C program that is human-level intelligent. In fact, since both AI and evolution seem to have arrived at roughly similar magnitudes, maybe we are not that far off from those limits? If there are such limits, then no matter how smart the "AI AI-researchers" are, they still won't be able to get more intelligence per FLOP than these limits allow.
     
  2. I do think that AI AI-researchers will be incomparable to human AI-researchers, in a similar manner to other professions. The simplistic view of AI research (or any form of research) as one-dimensional, where people can be sorted on an ELO-like scale, is dead wrong based on my 25 years of experience. Yes, some aspects of AI research might be easier to automate, and we will certainly use AI to automate them and make AI researchers more productive. But, like the vast majority of human professions (with all due respect to elevator operators :) ), I don't think human AI researchers will be obsolete any time soon.

 

p.s. I also noticed this "2 comments" - not sure what's going on. Maybe my footnotes count as comments?

Comment by boazbarak on A Playbook for AI Risk Reduction (focused on misaligned AI) · 2023-07-09T15:01:22.753Z · LW · GW

I agree that there is much to do to improve AI reliability, and there are a lot of good reasons (in particular to make AI more useful for us) to do so.   So I agree reliability will improve. In fact, I very much hope this happens! I believe faster progress on reliability would go a long way toward enabling positive applications of AI.

I also agree that a likely path to do so is by adjusting the effort based on estimates of reliability and the stakes involved. At the moment, systems such as ChatGPT spend the same computational effort whether someone asks them to tell a joke or asks them for medical advice. I suspect this will change, and variable inference-time computation will become more standard. (Things like "chain of thought" already spend more inference compute to get better performance, but they don't really have a "knob" we can turn to control the computation/reliability tradeoff.)
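As a sketch of what such a knob could look like (the `answer_with_knob` wrapper and the stakes levels are hypothetical, not a description of any deployed system): sample the model several times and take a majority vote, with the number of samples chosen according to the stakes of the query.

```python
import random
from collections import Counter

def answer_with_knob(sample_answer, prompt, stakes="low"):
    """Hypothetical reliability knob: spend more inference compute on higher-stakes
    queries by sampling more completions and majority-voting over them."""
    num_samples = {"low": 1, "medium": 5, "high": 25}[stakes]
    votes = Counter(sample_answer(prompt) for _ in range(num_samples))
    return votes.most_common(1)[0][0]

# Toy stand-in for a model that answers correctly 70% of the time.
def noisy_model(prompt):
    return "correct" if random.random() < 0.7 else "wrong"

random.seed(0)
for stakes in ["low", "medium", "high"]:
    trials = 2000
    acc = sum(answer_with_knob(noisy_model, "q", stakes) == "correct"
              for _ in range(trials)) / trials
    print(f"stakes={stakes:6s}  accuracy after voting ≈ {acc:.3f}")
```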

Regarding the deception issue, it might still be the case that such extra effort is observable, and it could also come at the cost of solving the original task. (So your performance per unit of compute is worse if you are not just trying to solve the task but also to deceive in a highly reliable way.)

In particular, even if we apply inference-time compute, unfortunately I don't think we know of a path to get an O(log(1/δ)) overhead in inference time to achieve a failure probability of δ. It seems that we are still stuck in the poly(1/δ) regime. So if you wanted to get a 99.9% probability of not getting caught, then you would incur a very noticeable effort.

Comment by boazbarak on A Playbook for AI Risk Reduction (focused on misaligned AI) · 2023-07-09T14:49:00.694Z · LW · GW

Not all capabilities / tasks correspond to trying to maximize a subjective human response. If you are talking about finding software vulnerabilities or designing some system, there may well be objective measures of success. In such a case, you can fine-tune a system to maximize these measures and so extract capabilities without the issue of deception/manipulation.

Regarding "escapes", the traditional fear was that because that AI is essentially code, it can spread and escape more easily. But I think that in some sense modern AI has a physical footprint that is more significant than humans. Think of trying to get superhuman scientific capabilities by doing something like simulating a collection of a1000 scientists using a 100T or so parameter model. Even if you already have the pre-trained weights, just running the model requires highly non-trivial computing infrastructure. (Which may be possible to track and detect.)  So. it might be easier for a human to escape a prison and live undetected, than for a superhuman AI to "escape".

Comment by boazbarak on Metaphors for AI, and why I don’t like them · 2023-06-30T23:01:05.566Z · LW · GW

We can of course define “intelligence” in a way that presumes agency and coherence. But I don’t want to quibble about definition.

Generally, when you have uncertainty, this corresponds to a potential “distribution shift” between your beliefs/knowledge and reality. When you have such a shift, you want to regularize, which means not optimizing to the maximum.

Comment by boazbarak on Metaphors for AI, and why I don’t like them · 2023-06-29T17:15:04.008Z · LW · GW

This is not about the definition of intelligence. It’s more about usefulness. Like a gun without a safety, an optimizer without constraints or regularization is not very useful.

Maybe it will be possible to build it, just like today it’s possible to hook up our nukes to an automatic launching device. But it’s not necessary that people will do something so stupid.

Comment by boazbarak on Metaphors for AI, and why I don’t like them · 2023-06-29T16:08:42.383Z · LW · GW

The notion of a piece of code that maximizes a utility without any constraints doesn’t strike me as very “intelligent”.

If people really wanted to, they might be able to build such programs, but my guess is that they would not be very useful even before they became dangerous, as overfitting optimizers usually are.

Comment by boazbarak on Metaphors for AI, and why I don’t like them · 2023-06-29T07:47:08.501Z · LW · GW

> at least some humans (e.g. most transhumanists), are "fanatical maximizers": we want to fill the lightcone with flourishing sentience, without wasting a single solar system to burn in waste.

 

I agree that humans have a variety of objectives, which I think is actually more evidence for the hot mess theory?
 

> the goals of an AI don't have to be simple to not be best fulfilled by keeping humans around.

The point is not about having simple goals, but rather about optimizing goals to the extreme.

I think there is another point of disagreement. As I've written before, I believe the future is inherently chaotic. So even a super-intelligent entity would still be limited in predicting it. (Indeed, you seem to concede this, by acknowledging that even super-intelligent entities don't have exponential time computation and hence need to use "sophisticated heuristics" to do tree search.) 

What it means is that there is inherent uncertainty in the world, and whenever there is uncertainty, you want to "regularize" and not go all out exhausting a resource that, for all you know, you may need later on.

Just to be clear, I think a "hot mess super-intelligent AI" could still result in an existential risk for humans. But that would probably only be the case if humans were an actual threat to it and there was more of a conflict. (E.g., I don't see it as a good use of energy for us to hunt down every ant and kill it, even if ants are nutritious.)

Comment by boazbarak on A Playbook for AI Risk Reduction (focused on misaligned AI) · 2023-06-16T00:11:39.732Z · LW · GW

I actually agree! As I wrote in my post, "GPT is not an agent, [but] it can “play one on TV” if asked to do so in its prompt." So yes, you wouldn't need a lot of scaffolding to adapt a goal-less pretrained model (what I call an "intelligence forklift") into an agent that does very sophisticated things.

However, this separation into two components - the super-intelligent but goal-less "brain", and the simple "will" that turns it into an agent can have safety implications. For starters, as long as you didn't add any scaffolding, you are still OK. So during most of the time you spend training, you are not worrying about the system itself developing goals. (Though you could still worry about hackers.) Once you start adapting it, then you need to start worrying about this.

The other thing is that, as I wrote there, it does change some of the safety picture. The traditional view of a super-intelligent AI is of the "brains and agency" tightly coupled together, just as they are in a human. For example, if a human is super-good at finding vulnerabilities and breaking into systems, they also have the capability to help fix systems, but I can't just take their brain and fine-tune it on that task; I have to convince them to do it.

However, things change if we don't think of the agent's "brain" as belonging to them, but rather as some resource that they are using. (Just like if I use a forklift to lift something heavy.) In particular it means that capabilities and intentions might not be tightly coupled - there could be agents using capabilities to do very bad things, but the same capabilities could be used by other agents to do good things.  

Comment by boazbarak on A Playbook for AI Risk Reduction (focused on misaligned AI) · 2023-06-16T00:01:25.854Z · LW · GW

At the moment at least, progress on reliability is very slow compared to what we would want. To get a sense of what I mean, consider the case of randomized algorithms. If you have an algorithm A that for every input x computes some function f(x) with probability at least 2/3 (i.e., Pr[A(x) = f(x)] ≥ 2/3), then if we spend k times more computation, we can do majority voting and, using standard bounds, show that the probability of error drops exponentially with k (i.e., Pr[A_k(x) ≠ f(x)] ≤ 2^{-Ω(k)} or something like that, where A_k is the algorithm obtained by scaling up A to compute it k times and output the plurality value).

This is not something special to randomized algorithms. It also holds in the context of noisy communication and error-correcting codes, and many other settings. Often we can get to 1-δ success at a price of O(log(1/δ)) overhead, which is why we can get things like "five nines reliability" in several engineering fields.

In contrast, so far all our scaling laws show that when we scale our neural networks by spending a factor of k more computation, we only get a reduction in the error that looks like k^{-α}, so it's polynomial rather than exponential, and even the exponent α of the polynomial is not that great (and in particular smaller than one).
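To make the contrast concrete, here is a small worked comparison under assumed numbers (a 5% baseline error, a 0.1% target failure probability, and an assumed scaling exponent α = 0.5): the repetition route needs an overhead that grows only like log(1/δ), while the scaling-law route needs an overhead that grows polynomially in 1/δ.

```python
import math

# Assumed numbers for illustration: baseline error 5%, target failure probability 0.1%.
base_err, delta = 0.05, 0.001

# Repetition + majority voting: the error after k repetitions decays like exp(-c*k)
# (Hoeffding-style bound for a base algorithm that errs with probability 5%),
# so the needed overhead grows like log(1/delta).
c = 2 * (0.5 - base_err) ** 2
k_vote = math.ceil(math.log(1 / delta) / c)

# Neural-scaling-style improvement: error shrinks like k^(-alpha) with alpha < 1
# (alpha = 0.5 assumed here), so the needed overhead grows polynomially in 1/delta.
alpha = 0.5
k_scaling = math.ceil((base_err / delta) ** (1 / alpha))

print(f"overhead with majority voting:     ~{k_vote}x compute")
print(f"overhead under k^-alpha scaling:   ~{k_scaling}x compute")
```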

So while I agree that scaling up will yield progress on reliability as well, at least with our current methods, it seems that we will be doing things that are 10 or 100 times more impressive than what we do now before we get to the type of 99.9%-and-better reliability on the things we currently do. Getting to something that is both super-human in capability and has such a tiny probability of failure that it would not be detected seems much further off.

Comment by boazbarak on A Playbook for AI Risk Reduction (focused on misaligned AI) · 2023-06-10T23:31:29.329Z · LW · GW

I agree that there is a difference between strong AI that has goals and one that is not an agent. This is the point I made here https://www.lesswrong.com/posts/wDL6wiqg3c6WFisHq/gpt-as-an-intelligence-forklift

But this has less to do with the particular lab (e.g., DeepMind trained Chinchilla) and more with the underlying technology. If the path to stronger models goes through scaling up LLMs, then it does seem that they will be 99.9% non-agentic (measured in FLOPs https://www.lesswrong.com/posts/f8joCrfQemEc3aCk8/the-local-unit-of-intelligence-is-flops )

Comment by boazbarak on What will GPT-2030 look like? · 2023-06-10T23:24:59.194Z · LW · GW

Yes, in the asymptotic limit the defender could get to bug-free software. But until then, it's not clear who is helped the most by advances. In particular, sometimes attackers can be more agile in exploiting new vulnerabilities, while patching them can take long. (Case in point: it took ages to get the insecure hash function MD5 out of deployed security-sensitive code, even at companies such as Microsoft; I might be misremembering, but if I recall correctly Stuxnet relied on such a vulnerability.)

Comment by boazbarak on What will GPT-2030 look like? · 2023-06-10T23:22:16.874Z · LW · GW

Yes, the norms of responsible disclosure of security vulnerabilities, where potentially affected companies get advance notice before public disclosure, can and should be used for vulnerability-discovering AIs as well.

Comment by boazbarak on What will GPT-2030 look like? · 2023-06-10T16:22:52.219Z · LW · GW

Yes, AI advances help both the attacker and the defender. In some cases, like spam and real-time content moderation, they enable capabilities for the defender that it simply didn't have before. In others, they elevate both sides of the arms race, and it's not immediately clear what equilibrium we end up in.

In particular, re hacking / vulnerabilities, it's less clear who it helps more. It might also change with time: initially AI enables "script kiddies" who can hack systems without much skill, and later an AI search for vulnerabilities, followed by fixing them, becomes part of the standard pipeline. (Or, if we're lucky, the second phase happens before the first.)

Comment by boazbarak on A Playbook for AI Risk Reduction (focused on misaligned AI) · 2023-06-10T12:38:43.287Z · LW · GW

These are interesting! And deserve more discussion than just a comment. 

But one high-level point regarding "deception" is that, at least at the moment, AI systems have the feature of not being very reliable. GPT-4 can do amazing things but with some probability will stumble on things like multiplying not-too-big numbers (e.g. see this - the second pair I tried).
While elsewhere in computing technology we talk about "five nines reliability", in AI systems the scaling works out so that we need to spend huge efforts to move from 95% to 99% to 99.9% reliability, which is part of why self-driving cars are not deployed yet.
 

If we cannot even make AIs perfect at the task they were explicitly made to perform, there is no reason to imagine they would be even close to perfect at deception either.

Comment by boazbarak on A Playbook for AI Risk Reduction (focused on misaligned AI) · 2023-06-10T12:27:24.049Z · LW · GW

Re escaping, I think we need to be careful in defining "capabilities". Even current AI systems are certainly able to give you some commands that would leak their weights if you executed them on the server that contains them. Near-term ones might also become better at finding vulnerabilities. But that doesn't mean they can or will spontaneously escape during training.

As I wrote in my "GPT as an intelligence forklift" post, 99.9% of training is spent in running optimization of a simple loss function over tons of static data. There is no opportunity for the AI to act in this setting, nor does this stage even train for any kind of agency. 

There is often a second phase, which can involve building an agent on top of the "forklift". But this phase still doesn't involve much interaction with the outside world, and even if it did, just by information bounds, the number of bits exchanged in this interaction should be much less than what's needed to encode the model. (Generally, the number of parameters of a model is comparable to the number of inferences done during pretraining, and completely dominates the number of inferences done in fine-tuning / RLHF / etc., and certainly any steps that involve human interaction.)
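A back-of-envelope version of that information-bound argument, with all numbers being illustrative assumptions rather than figures for any actual system:

```python
# Illustrative comparison: bits needed to encode model weights vs. bits that could
# plausibly flow through the fine-tuning / RLHF interaction channel.
params = 100e9                  # assumed 100B-parameter model
bits_per_param = 16             # fp16 weights
weight_bits = params * bits_per_param

preference_labels = 1e6         # assumed number of human feedback comparisons
bits_per_label = 10             # generous: a choice plus a little side information
interaction_bits = preference_labels * bits_per_label

print(f"bits in the weights:        {weight_bits:.1e}")
print(f"bits from the interaction:  {interaction_bits:.1e}")
print(f"interaction / weights:      {interaction_bits / weight_bits:.1e}")
```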

Then there are the information-security aspects. You could (and at some point probably should) regulate cyber-security practices during the training phase. After all, if we do want to regulate deployment, then we need to ensure there are three separate phases - (1) training, (2) testing, (3) deployment - and we don't want "accidental deployment" where we jump from phase (1) to (3). Maybe at some point there will be something like Intel SGX for GPUs?

Whether AI helps the defender or the attacker more in the cyber-security setting is an open question. But it definitely helps the side that has access to stronger AIs.

In any case, one good thing about focusing regulation on cyber-security aspects is that, while not perfect, we have decades of experience in the field of software security and cyber-security. So regulations in this area are likely to be much more informed and effective.

Comment by boazbarak on The (local) unit of intelligence is FLOPs · 2023-06-09T19:50:36.756Z · LW · GW

Yes. Right now we would have to re-train all the LoRA weights of a model when an updated version comes out, but I imagine that at some point we will have "transpilers" for adapters that don't use natural language as their API as well.

Comment by boazbarak on A Playbook for AI Risk Reduction (focused on misaligned AI) · 2023-06-09T16:30:11.282Z · LW · GW

I definitely don't have advice for other countries, and there are a lot of very hard problems in my own homeland. I think there could have been an alternate path in which Russia had seen prosperity from opening up to the West, and then going to war or putting someone like Putin in power might have been less attractive. But indeed the "two countries with McDonald's won't fight each other" theory has been refuted. And as you allude to with China, while so far there hasn't been a war over Taiwan, it's not as if economic prosperity is an ironclad guarantee of non-aggression.

Anyway, to go back to AI. It is a complex topic, but first and foremost, I think that with AI, as elsewhere, "sunshine is the best disinfectant": having people research AI systems in the open, point out their failure modes, examine what is deployed, etc., is very important. The second thing is that I am not worried about AI "escaping" in the near future, so I think the focus should not be on restricting research, development, or training, but rather on regulating deployment. The exact form of regulation is beyond a blog post comment, and also not something I am an expert on.

The "sunshine" view might seem strange since as a corollary it could lead to AI knowledge "leaking". However, I do think that for the near future, most of the safety issues with AI would be from individual hackers using weak systems, but from massive systems that are built by either very large companies or nation states.  It is hard to hold either of those accountable if AI is hidden behind an opaque wall. 

Comment by boazbarak on A Playbook for AI Risk Reduction (focused on misaligned AI) · 2023-06-08T22:56:49.705Z · LW · GW

I meant “resources” in a more general sense. A piece of land that you believe is rightfully yours is a resource. My own sense (coming from a region that is itself in a long-simmering conflict) is that “hurt people hurt people”. The more you feel threatened, the less likely you are to trust the other side.

While of course nationalism and religion play a huge role in the conflict, my sense is that people tend to be more extreme the less access they have to resources, education, and security about the future.

Comment by boazbarak on Why I am not a longtermist (May 2022) · 2023-06-08T22:47:13.097Z · LW · GW

Indeed many “longtermists” spend most of their time worrying about risks that they believe (rightly or not) have a large chance of materializing in the next couple of decades.

Talking about tiny probabilities and trillions of people is not needed to justify this, and for many people it’s just a turn off and a red flag that something may be off with your moral intuition. If someone tries to sell me a used car and claims that it’s a good deal and will save me $1K then I listen to them. If someone claims that it would give me an infinite utility then I stop listening.

Comment by boazbarak on Why I am not a longtermist (May 2022) · 2023-06-08T22:41:58.553Z · LW · GW

I don’t presume to tell people what they should care about, and if you feel that thinking of such numbers and probabilities gives you a way to guide your decisions then that’s great.

I would say that, given how much humanity has changed in the past and the increasing rate of change, probably almost none of us can realistically predict the impact of our actions more than a couple of decades into the future. (That doesn’t mean we don’t try - the institution I work for is more than 350 years old and does try to manage its endowment with a view towards the indefinite future…)

Comment by boazbarak on Why I am not a longtermist (May 2022) · 2023-06-08T22:36:38.323Z · LW · GW

Thanks. I tried to get at that with the phrase “irreversible humanity-wide calamity”.

Comment by boazbarak on Why I am not a longtermist (May 2022) · 2023-06-08T22:35:05.184Z · LW · GW

There is a meta-question here of whether morality is based on personal intuition or on calculations. My own inclination is that utility calculations only make a difference "at the margin", while the high-level decisions are made by our moral intuition.

That is, we can do calculations to decide whether to fund Charity A or Charity B in similar areas, but I doubt that for most people major moral decisions actually do (or should) boil down to calculating utility functions.

But of course, to each their own, and if someone finds math useful for making such decisions, then who am I to tell them not to do it.

Comment by boazbarak on The (local) unit of intelligence is FLOPs · 2023-06-07T19:38:59.501Z · LW · GW

I have yet to see an interesting implication of the "no free lunch" theorem. But the world we are moving to seems to be one of general foundation models that can be combined with a variety of tailor-made adapters (e.g. LoRA weights or prompts) that help them tackle any particular application. The general model is the "operating system" and the adapters are the "apps".
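A minimal sketch of the "adapter on top of a frozen foundation model" idea in the spirit of LoRA (the shapes, rank, and scaling below are arbitrary choices for illustration; this is the generic low-rank-update pattern, not any particular library's implementation):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a small trainable low-rank adapter."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():        # the "operating system" stays frozen
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # frozen base output plus the low-rank update (the "app")
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(1024, 1024))
y = layer(torch.randn(2, 1024))                 # works as a drop-in linear layer
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable adapter parameters: {trainable} of {total} ({trainable / total:.1%})")
```

The appeal of this split is that only the tiny adapter needs to be trained and shipped per application, while the big frozen model is shared across all of them.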

Comment by boazbarak on A Playbook for AI Risk Reduction (focused on misaligned AI) · 2023-06-07T15:42:11.881Z · LW · GW

A partial counter-argument. It's hard for me to argue about future AI, but we can look at current "human misalignment" - war, conflict, crime, etc. It seems to me that conflicts in today's world do not arise because we haven't progressed enough in philosophy since the Greeks. Rather, conflicts arise when various individuals and populations (justifiably or not) perceive that they are in zero-sum games for limited resources. The solution for this is not "philosophical progress" so much as being able to move out of the zero-sum setting by finding "win-win" resolutions for conflicts, or growing the overall pie instead of arguing over how to split it.

(This is a partial counter-argument, because I think you are not just talking about conflict, but about other ways of making the wrong choices. For example, with global warming, humanity collectively makes the mistake of emphasizing short-term growth over long-term safety. However, I think this is related, and "growing the pie" would have alleviated this issue as well, enabling countries to give up on some of the more harmful routes to short-term growth.)

Comment by boazbarak on A Playbook for AI Risk Reduction (focused on misaligned AI) · 2023-06-07T15:22:08.647Z · LW · GW

Thank you for writing this! 

One argument for the "playbook" rather than the "plan" view is that there is a big difference between DISASTER (something very bad happening) and DOOM (irrecoverable extinction-level catastrophe).  Consider the case of nuclear weapons.  Arguably the disaster of Hiroshima and Nagasaki bombs led us to better arms control which helped so far prevent the catastrophe (even if not quite existential one) of an all-out nuclear war. In all but extremely fast take-off scenarios, we should see disasters as warning signs before doom.

 
The good thing is that avoiding disasters makes good business. In fact, I don't expect AI labs to require any "altruism" to focus their attention on alignment and safety. This survey by Timothy Lee on self-driving cars notes that after a single tragic incident in which an Uber self-driving car killed a pedestrian, "Uber’s self-driving division never really recovered from the crash, and Uber sold it off in 2020. The rest of the industry vowed not to repeat Uber’s mistake." Given that a single disaster can be extremely hard to recover from, smart leaders of AI labs should focus on safety, even if it means being a little slower to market.
 

While the initial push is to get AI to match human capabilities, as these tools become more than impressive demos and need to be deployed in the field, customers will care much more about reliability and safety than about capabilities. If I am a software company using an AI system as a programmer, it's more useful to me if it can reliably deliver bug-free 100-line subroutines than if it writes 10K-line programs that might contain subtle bugs. There is a reason why much of the programming infrastructure for real-world projects - pull requests, code reviews, unit tests - is aimed not at getting something that kind of works out the door as quickly as possible, but rather at making sure the codebase grows in a reliable and maintainable fashion.

This doesn't mean that the free market can take care of everything and that regulations are not needed to ensure that some companies don't make a quick profit by deploying unsafe products and pushing off externalities to their users and the broader environment. (Indeed, some would say that this was done in the self-driving domain...) But I do think there is a big commercial incentive for AI labs to invest in research on how to ensure that systems pushed out behave in a predictable manner, and don't start maximizing paperclips.


p.s. The nuclear setting also gives another lesson (TW: grim calculations follow). It is much more than a factor of two harder to extinguish 100% of the population than to kill the ~50% or so that live in large metropolitan areas. Generally, the difference between the effort needed to kill 50% of the population and the effort to kill a 1-p fraction should scale at least as 1/p.
 

Comment by boazbarak on Why I am not a longtermist (May 2022) · 2023-06-07T14:42:10.588Z · LW · GW

I really like this!

The hypothetical future people calculation is an argument for why people should care about the future, but as you say, the vast majority of currently living humans (a) already care about the future and (b) are not utilitarians, so this argument doesn't appeal to them anyway.

Comment by boazbarak on Why I am not a longtermist (May 2022) · 2023-06-07T02:06:43.395Z · LW · GW

Thanks! I should say that (as I wrote on Windows On Theory) one response I got to that blog was that "anyone who writes a piece called 'Why I am not a longtermist' is probably more of a longtermist than 90% of the population" :)

That said, if the 0.001% is a lie then I would say that it’s an unproductive one, and one that for many people would be an ending point rather than a starting one.

Comment by boazbarak on The (local) unit of intelligence is FLOPs · 2023-06-06T01:46:39.303Z · LW · GW

Thanks!

Comment by boazbarak on GPT as an “Intelligence Forklift.” · 2023-05-24T10:37:11.443Z · LW · GW

Yes, the point is that once you fix the architecture and genus (e.g., connections etc.), more neurons/synapses lead to more capabilities.

Comment by boazbarak on GPT as an “Intelligence Forklift.” · 2023-05-24T10:35:35.977Z · LW · GW

Agree that we still disagree, and (in my biased opinion) that claim is either more interesting or more true than you realize :) I'm not free for a call soon, but I hope eventually there will be an opportunity to discuss more.

Comment by boazbarak on GPT as an “Intelligence Forklift.” · 2023-05-21T22:09:11.260Z · LW · GW

Within a particular genus or architecture, more neurons would mean higher intelligence. Comparing across completely different neural network types is indeed problematic.

Comment by boazbarak on GPT as an “Intelligence Forklift.” · 2023-05-21T19:34:43.033Z · LW · GW

I discussed Gwern's article in another comment. My point (which also applies to Gwern's essay on GPT3 and scaling hypothesis) is the following:

  1. I don't dispute that you can build agent AIs, and that they can be useful.
  2. I don't claim that it is possible to get the same economic benefits by restricting to tool AIs. Indeed, in my previous post with Edelman, we explicitly said that we do consider AIs that are agentic in the sense that they can take action, including self-driving, writing code, executing trades, etc.
  3. I don't dispute that one way to build those is to take a next-token predictor such as pretrained GPT3, and then use fine-tuning, RHLF, prompt engineering or other methods to turn it into an agent AI. (Indeed, I explicitly say so in the current post.)

My claim is that (1) it is a useful abstraction to separate intelligence from agency, and (2) intelligence in AI is a monotone function of the computational resources (FLOPs, data, model size, etc.) invested in building the model.

Now if you want to take 3.6 trillion gradient steps in a model, then you simply cannot do it by having it take actions and wait for some reward. So I do claim that, if we buy the scaling hypothesis that intelligence scales with compute, the bulk of the intelligence of models such as GPT-n, PaLM-n, etc. comes from the non-agentic next-token predictor.

So, I believe it is useful and more accurate to think of (for example) a stock trading agent that is built on top of GPT-4 as consisting of an "intelligence forklift" which accounts for 99.9% of the computational resources, plus various layers of adaptations, including supervised fine-tuning, RL from human feedback, and prompt engineering, to obtain the agent.
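A back-of-envelope version of the 99.9% claim, using the common ~6 × parameters × tokens approximation for training FLOPs (the parameter and token counts below are assumptions for illustration, not figures for any actual model):

```python
def train_flops(params, tokens):
    # standard rough approximation: ~6 FLOPs per parameter per training token
    return 6 * params * tokens

params = 1e12                  # assumed 1T-parameter model
pretrain_tokens = 3.6e12       # next-token prediction on static data (assumed)
finetune_tokens = 1e9          # supervised fine-tuning + RLHF, generously counted (assumed)

pre = train_flops(params, pretrain_tokens)
ft = train_flops(params, finetune_tokens)

print(f"pretraining FLOPs:  {pre:.2e}")
print(f"adaptation FLOPs:   {ft:.2e}")
print(f"pretraining share:  {pre / (pre + ft):.5f}")   # ≈ 0.9997
```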

The above perspective does not mean that the problem of AI safety or alignment is solved. But I do think it is useful to think of intelligence as belonging to a system rather than an individual agent, and (as discussed briefly above) that considering it in this way changes somewhat the landscape of both problems and solutions.

Comment by boazbarak on GPT as an “Intelligence Forklift.” · 2023-05-20T21:25:55.477Z · LW · GW

I was asked about this on Twitter. Gwern’s essay deserves a fuller response than a comment but I’m not arguing for the position Gwern argues against.

I don’t argue that agent AIs are not useful or won’t be built. I am not arguing that humans must always be in the loop.

My argument is that tool vs. agent AI is not so much about competition as about specialization. Agent AIs have their uses, but if we consider the “deep learning equation” of turning FLOPs into intelligence, then it’s hard to beat training to make predictions on static data. So I do think that while RL can be used for AI agents, the intelligence “heavy lifting” (pun intended) will be done by non-agentic, tool-like, but very large static models.

Even “hybrid models” like GPT-3.5 can best be understood as consisting of an “intelligence forklift” - the pretrained next-token predictor, on which 99.9% of the FLOPs were spent - and an additional light “adapter” that turns this forklift into a useful chatbot, etc.

Comment by boazbarak on GPT as an “Intelligence Forklift.” · 2023-05-20T21:04:28.697Z · LW · GW

That’s pretty interesting about monkeys! I am not sure I 100% buy the myths theory, but it’s certainly the case that developing language to talk about events that are not immediate in space or time is essential for coordinating a large-scale society.

Comment by boazbarak on GPT as an “Intelligence Forklift.” · 2023-05-20T03:07:14.228Z · LW · GW

Thank you! You’re right. Another point is that intelligence and agency are independent, and a tool AI can be (much) more intelligent than an agentic one.

Comment by boazbarak on AI will change the world, but won’t take it over by playing “3-dimensional chess”. · 2022-11-28T16:37:43.004Z · LW · GW

I don't think it's fair to compare parameter counts between language models and models for other domains, such as games or vision. E.g., I believe AlphaZero is also only in the range of hundreds of millions of parameters? (A quick Google search didn't give me the answer.)

I think there is a real difference between adversarial and natural distribution shifts, and without adversarial training, even large networks struggle with adversarial shifts. So I don't think this is a problem that will go away with scale alone. At least, I don't see evidence for it in current data (the failure of defenses for small models is no evidence that size alone will succeed for larger ones).

One way to see this is to look at the figures in this plotting playground of "accuracy on the line". This is the figure for natural distribution shift - the green models are the ones trained with more data, and they do seem to be "above the curve" (significantly so for CLIP, which corresponds to the two green dots reaching ~53 and ~55 natural-distribution accuracy compared to ~60 and ~63 vanilla accuracy).

In contrast, if you look at adversarial perturbations, you can see that actual adversarial training (bright orange) or other robustness interventions (brown) is much more effective than more data (green), which in fact mostly underperforms.

 

(I know you focused on "more model" but I think to first approximation "more model" and "more data" should have similar effects.)

Comment by boazbarak on AI will change the world, but won’t take it over by playing “3-dimensional chess”. · 2022-11-25T21:46:08.285Z · LW · GW

Will read the links later - thanks! I confess I didn't read the papers (though I saw a talk partially based on the first one, which didn't go into enough detail for me to know the issues), but I have also heard from people I trust of similar issues with chess RL engines (they can be defeated with simple strategies if you are looking for adversarial ones). Generally, it seems fair to say that adversarial robustness is significantly more challenging than the non-adversarial case, and it does not simply go away on its own with scale (though some types of attacks are automatically mitigated by diversity of training data / scenarios).

Comment by boazbarak on AI will change the world, but won’t take it over by playing “3-dimensional chess”. · 2022-11-25T19:50:22.878Z · LW · GW

Thank you! I think what we see right now is that the longer the horizon, the more "tricks" we need to make end-to-end learning work, to the extent that it might not really be end-to-end anymore. So while supervised learning is very successful, and seems to be quite robust to the choice of architecture, loss function, etc., in RL we need to be much more careful, and often things won't work "out of the box" in a purely end-to-end fashion.

 

I think the question is how performance scales with horizon. If the returns are rapidly diminishing and the cost to train is rapidly increasing (as might well be the case because of diminishing gradient signal and much smaller availability of data), then it could be that the "sweet spot" of what is economical to train will remain at a reasonably short horizon (far shorter than the planning needed to take over the world) for a long time.

Comment by boazbarak on AI will change the world, but won’t take it over by playing “3-dimensional chess”. · 2022-11-25T19:40:19.771Z · LW · GW

Can you send links? In any case, I do believe it is understood that you have to be careful in a setting where you have two models A and B, where B is a "supervisor" of the output of A, and you are trying simultaneously to teach B to come up with a good metric to judge A by, and to teach A to come up with outputs that optimize B's metric. There can be equilibria where A and B jointly diverge from what we would consider "good outputs".

This comes up, for example, in trying to tackle "over-optimization" in InstructGPT (there was a great talk by John Schulman in our seminar series a couple of weeks ago), where model A is GPT-3 and model B tries to capture human scores for outputs. Initially, optimizing for model B induces optimizing for human scores as well, but if you let model A optimize too much, it optimizes for B while becoming negatively correlated with the human scores (i.e., it "over-optimizes").

Another way to see this issue is that even powerful agents like AlphaZero are susceptible to simple adversarial strategies that can beat them: see "Adversarial Policies Beat Professional-Level Go AIs" and "Are AlphaZero-like Agents Robust to Adversarial Perturbations?".

The bottom line is that I think we are very good at optimizing any explicit metric f, including when that metric is itself some learned model. But generally, if we learn some model g such that g ≈ f, this doesn't mean that if we take the input that maximizes g, it will be an approximate maximizer of f as well. Maximizing g tends to push toward the extreme parts of the input space, which are exactly those where g deviates from f.
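A toy illustration of that bottom line (the setup is made up): if the proxy g equals the true metric f plus independent estimation error, then selecting the candidate that maximizes g systematically picks a point where g over-estimates f.

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 candidate outputs with a true quality f, and a learned proxy g = f + error.
# Selecting the argmax of g systematically picks candidates whose error is large
# and positive, i.e. exactly where g deviates upward from f.
n = 10_000
f_true = rng.normal(0.0, 1.0, size=n)        # true quality of each candidate
error = rng.normal(0.0, 1.0, size=n)         # where the proxy deviates from f
g = f_true + error

best_by_proxy = int(np.argmax(g))
best_by_truth = int(np.argmax(f_true))

print(f"chosen by proxy:  g = {g[best_by_proxy]:.2f}, true f = {f_true[best_by_proxy]:.2f}")
print(f"best by true f:   f = {f_true[best_by_truth]:.2f}")
print(f"mean error: {error.mean():+.2f}   error at proxy's argmax: {error[best_by_proxy]:+.2f}")
```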

The above is not an argument against the ability to construct AGI, but rather an argument for establishing concrete, measurable goals that our different agents try to optimize, rather than trying to learn some long-term equilibrium. So, for example, in the software-writing and software-testing case, I don't think we simply want to deploy two agents A and B playing a zero-sum game where B's reward is the number of bugs found in A's code.

Comment by boazbarak on AI will change the world, but won’t take it over by playing “3-dimensional chess”. · 2022-11-25T17:27:07.721Z · LW · GW

Hi Vanessa,

Perhaps given my short-term preference, it's not surprising that I find it hard to track very deep comment threads, but let me just give a couple of short responses.

I don't think the argument on hacking relied on the ability to formally verify systems. Formally verified systems could potentially skew the balance of power toward the defender side, but even if they don't exist, I don't think the balance is completely skewed toward the attacker. You could imagine that, like today, there is a "cat and mouse" game, where both attackers and defenders try to find "zero-day" vulnerabilities and either exploit them (in one case) or fix them (in the other). I believe that in a world of powerful AI this game would continue, with both sides having access to AI tools, which would empower both but not necessarily shift the balance to one side or the other.

I think the question of whether a long-term planning agent could emerge from short-term training is a very interesting technical question! Of course, we need to understand how to define "long term" and "short term" here. One way to think about this is the following: we can define various short-term metrics, which are evaluable using short-term information and potentially correlated with long-term success. We would say that a strategy is purely long-term if it cannot be explained by making advances on any combination of these metrics.