sjadler's Shortform
post by sjadler · 2024-12-23T18:13:35.118Z · LW · GW · 16 comments
comment by sjadler · 2025-02-09T09:45:49.709Z · LW(p) · GW(p)
It’s interesting to me that the big AI CEOs have largely conceded that AGI/ASI could be extremely dangerous (but aren’t taking sufficient action given this view IMO), as opposed to them just denying that the risk is plausible. My intuition is that the latter is more strategic if they were just trying to have license to do what they want. (For instance, my impression is that energy companies delayed climate action pretty significantly by not yielding at first on whether climate change is even a real concern.)
I guess maybe the AI folks are walking a strategic middle ground? Where they concede there could be some huge risk, but then also sometimes say things like ‘risk assessment should be evidence-based,’ with the implication that current concerns aren’t rigorous. And maybe that’s more strategic than either world above?
But really it seems to me like they’re probably earnest in their views about the risks (or at least once were earnest). And so, political action that treats their concern as disingenuous is probably wrongheaded; it seems better to model them as ‘really concerned but facing very constrained useful actions’.
Replies from: Thane Ruthenis, Viliam, CapResearcher
↑ comment by Thane Ruthenis · 2025-02-09T15:28:42.807Z · LW(p) · GW(p)
You're not accounting for enemy action. They couldn't have been sure, at the onset, how successful the AI Notkilleveryoneism faction would be at raising alarm, and in general, how blatant the risks would become to outsiders as capabilities progressed. And they have been intimately familiar with the relevant discussions, after all.
So they might've overcorrected, and considered that the "strategic middle ground" would be to admit the risk is plausible (but not as certain as the "doomers" say), rather than to deny it (which they might've expected to become a delusional-looking position in the future, so not a PR-friendly stance to take).
Or, at least, I think this could've been a relevant factor there.
Replies from: elityre
↑ comment by Eli Tyre (elityre) · 2025-02-09T17:27:35.134Z · LW(p) · GW(p)
My model is that Sam Altman regarded the EA world as a memetic threat, early on, and took actions to defuse that threat by paying lip service / taking openphil money / hiring prominent AI safety people for AI safety teams.
Like, possibly the EAs could have created a widespread vibe that building AGI is a cartoon evil thing to do, sort of the way many people think of working for a tobacco company or an oil company.
Then, after ChatGPT, OpenAI was a much bigger fish than the EAs or the rationalists, and he began taking moves to extricate himself from them.
Replies from: T3t
↑ comment by RobertM (T3t) · 2025-02-10T07:32:20.843Z · LW(p) · GW(p)
Sam Altman posted Machine intelligence, part 1[1] on February 25th, 2015. This is admittedly after the FLI conference in Puerto Rico, which is reportedly where Elon Musk was inspired to start OpenAI (though I can't find a reference substantiating his interaction with Demis as the specific trigger), but there is other reporting suggesting that OpenAI was only properly conceived later in the year, and Sam Altman wasn't at the FLI conference himself. (Also, it'd surprise me a bit if it took nearly a year, i.e. from Jan 2nd[2] to Dec 11th[3], for OpenAI to go from "conceived of" to "existing".)
[1] That of the famous "Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity." quote.
[2] The FLI conference.
[3] OpenAI's public founding.
↑ comment by Eli Tyre (elityre) · 2025-02-10T17:57:29.798Z · LW(p) · GW(p)
Is this taken to be a counterpoint to my story above? I'm not sure exactly how it's related.
Replies from: T3t
↑ comment by RobertM (T3t) · 2025-02-11T04:44:44.329Z · LW(p) · GW(p)
Yes:
> My model is that Sam Altman regarded the EA world as a memetic threat, early on, and took actions to defuse that threat by paying lip service / taking openphil money / hiring prominent AI safety people for AI safety teams.
In the context of the thread, I took this to suggest that Sam Altman never had any genuine concern about x-risk from AI, or, at a minimum, that any such concern was dominated by the social maneuvering you're describing. That seems implausible to me given that he publicly expressed concern about x-risk from AI 10 months before OpenAI was publicly founded, and possibly several months before it was even conceived.
Replies from: elityre
↑ comment by Eli Tyre (elityre) · 2025-02-12T03:49:57.599Z · LW(p) · GW(p)
I'm not claiming that he never had any genuine concern. I guess that he probably did have genuine concern (though not necessarily that that was his main motivation for founding OpenAI).
↑ comment by Viliam · 2025-02-09T12:25:06.073Z · LW(p) · GW(p)
Just guessing, but maybe admitting the danger is strategically useful, because it may result in regulations that will hurt potential competitors more. The regulations often impose fixed costs (such as paying a specialized team that produces paperwork on environmental impacts), which are okay when you are already making millions.
I imagine someone might figure out a way to make the AI much cheaper, maybe by sacrificing generality. For example, this probably doesn't make sense, but would it be possible to train an LLM only on Python code (as opposed to the entire internet) and produce an AI that is only a Python autocomplete? If it could be 1000x cheaper, you could build a startup without having to build a new power plant. Imagine that you add some special sauce to the algorithm (for example, the AI always internally writes unit tests, which visibly increases the correctness of the generated code; or it is some combination of the ancient "expert system" approach with the new LLM approach, where for example the LLM trains the expert system and then the expert system provides feedback for the LLM), so you would be able to sell your narrow AI even when more general AIs are available. And once you start selling it, you have an income, which means you can expand the functionality.
It is better to have a consensus that such things are too dangerous to leave in the hands of startups that can't already lobby the government.
Hey, I am happy that the CEOs admit that the dangers exist. But if they are only doing it to secure their profits, it will probably warp their interpretations of what exactly the risks are, and what is a good way to reduce them.
Replies from: sjadler
↑ comment by sjadler · 2025-02-09T12:38:39.129Z · LW(p) · GW(p)
> Just guessing, but maybe admitting the danger is strategically useful, because it may result in regulations that will hurt potential competitors more. The regulations often impose fixed costs (such as paying a specialized team that produces paperwork on environmental impacts), which are okay when you are already making millions.
My sense of things is that OpenAI at least appears to be lobbying against regulation more so than they are lobbying for it?
↑ comment by CapResearcher · 2025-02-09T12:12:49.669Z · LW(p) · GW(p)
To me, this seems consistent with just maximizing shareholder value.
Salaries and compute are the largest expenses at big AI firms, and "being the good guys" lets you get the best people at significant discounts. To my understanding, one of the greatest early successes of OpenAI was hiring great talent for cheap because they were "the non-profit good guys who cared about safety". Later, great people like John Schulman left OpenAI for Anthropic, citing a "desire to deepen my focus on AI alignment".
As for people thinking you're a potential x-risk, the downsides seem mostly solved by "if we didn't do it somebody less responsible would". AI safety policy interventions could also give great moats against competition, especially for the leading firm(s). Furthermore, much of the "AI alignment research" they invest in prevents PR disasters ("terrorist used ChatGPT to invent dangerous bio-weapon"), and most of the "interpretability" they invest in seems pretty close to R&D they would invest in anyway to improve capabilities.
This might sound overly pessimistic. However, it can be viewed positively: there is significant overlap between the interests of big AI firms and the AI safety community.
Replies from: sjadler
↑ comment by sjadler · 2025-02-09T12:48:45.776Z · LW(p) · GW(p)
> To me, this seems consistent with just maximizing shareholder value. … "being the good guys" lets you get the best people at significant discounts.
This is pretty different from my model of what happened with OpenAI or Anthropic - especially the latter, where the founding team left huge equity value on the table by departing (OpenAI’s equity had already appreciated something like 10x between the first MSFT funding round and EOY 2020, when they departed).
And even for Sam and OpenAI, this would seem like a kind of wild strategy for pursuing wealth for someone who already had the network and opportunities he had pre-OpenAI?
Replies from: CapResearcher
↑ comment by CapResearcher · 2025-02-09T13:30:24.209Z · LW(p) · GW(p)
With the change to for-profit and Sam receiving equity, it seems like the strategy will pay off. However, this might be hindsight bias, or I might otherwise have a too simplified view.
comment by sjadler · 2024-12-23T18:13:35.319Z · LW(p) · GW(p)
I believe we should view AGI as a ratio of capability to resources, rather than simply asking how AI's abilities compare to humans'. This view is becoming more common, but is not yet common enough.
When people discuss AI's abilities relative to humans without considering the associated costs or time, this is like comparing fractions by looking only at the numerators.
In other words, AGI has a numerator (capability): what the AI system can achieve. This asks questions like: For this thing that a human can do, can AI do it too? How well can AI do it? (For example, on a set of programming challenges, how many can the AI solve? How many can a human solve?)
But also importantly, AGI should account for the denominator: how many resources are required to achieve its capabilities. Commonly, this resource will be a $-cost or an amount of time. This asks questions like "What is the $-cost of getting this performance?" or "How long does this task take to complete?".
I claim that an AI system might fail to qualify as AGI if it lacks human-level capabilities, but it could also fail by being wildly inefficient compared to a human. Both the numerator and denominator matter when evaluating these ratios.
A quick example of why focusing solely on the capability (numerator) is insufficient:
- Imagine an AI software engineer that can do most tasks human engineers do, but at 100–1000× the cost of a human.
- I expect that lots of the predictions about "AGI" would not come due until (at least) that cost comes down substantially so that AI >= human on a capability-per-dollar basis.
- For instance, it would not make sense to directly substitute AI for human labor at this ratio - but perhaps it does make sense to buy additional AI labor, if there were extremely valuable tasks for which human labor is the bottleneck today. (A rough numerical sketch of this comparison follows below.)
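To make the ratio framing concrete, here is a minimal sketch in Python, using made-up numbers in line with the example above - the success rates, costs, and the ~100x multiplier are all hypothetical, not measurements of any real system:

```python
# Toy comparison of "capability per dollar" on a single task, with invented numbers.
# Illustrates the ratio framing only; not a real evaluation of any system.

def capability_per_dollar(success_rate: float, cost_usd: float) -> float:
    """Capability (here: task success rate) divided by resources (here: dollar cost)."""
    return success_rate / cost_usd

human = capability_per_dollar(success_rate=0.90, cost_usd=500.0)    # hypothetical human engineer
ai = capability_per_dollar(success_rate=0.90, cost_usd=50_000.0)    # same capability, ~100x the cost

print(f"human: {human:.6f} capability per dollar")
print(f"ai:    {ai:.6f} capability per dollar")

# Comparing numerators alone ("both solve the task 90% of the time") suggests parity;
# comparing the ratios shows the AI is ~100x less cost-effective on this task.
print(f"AI is {human / ai:.0f}x less cost-effective here")
```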
The AGI-as-ratio concept has long been implicit in some labs' definitions of AGI - for instance, OpenAI describes AGI as "a highly autonomous system that outperforms humans at most economically valuable work". Outperforming humans economically does seem to imply being more cost-effective, not just having the same capabilities. Yet even within OpenAI, the denominator aspect wasn’t always front of mind, which is why I wrote up a memo on this during my time there.
Until the recent discussions of o1 Pro / o3 drawing upon lots of inference compute, I rarely saw these ideas discussed, even in otherwise sophisticated analyses. One notable exception to this is my former teammate Richard Ngo's t-AGI framework [AF · GW], which deserves a read. METR has also done a great job of accounting for this in their research comparing AI R&D performance, given a certain amount of time. I am glad to see more and more groups thinking in terms of these factors - but in casual analysis, it is very easy to just slip into comparisons of capability levels. This is worth pushing back on, imo: The time at which "AI capabilities = human capabilities" is different than the time when I expect AGI will have been achieved in the relevant senses.
There are also some important caveats to my claim here, that 'comparing just by capability is missing something important':
- People reasonably expect AI to become cheaper over time, so if AI matches human capabilities but not cost, that might still signal 'AGI soon.' Perhaps this is what people mean when they say 'o3 is AGI'.
- Computers are much faster than humans for many tasks, and so one might believe that if an AI can achieve a thing it will quite obviously be faster than a human. This is less obvious now, however, because AI systems are leaning more on repeated sampling/selection procedures.
- Comparative advantage is a thing, and so AI might have very large impacts even if it remains less absolutely capable than humans for many different tasks, if the cost/speed are good enough.
- There are some factors that don't fit super cleanly into this framework: things like AI's 'always-on availability', which aren't about capability per se, but probably belong in the numerator anyway? e.g., "How good is an AI therapist?" benefits from 'you can message it around the clock', which increases the utility of any given task-performance. (In this sense, maybe the ratio is best understood as utility-per-resource, rather than capability-per-resource.)
↑ comment by Viliam · 2024-12-25T23:27:20.921Z · LW(p) · GW(p)
Human output is not linear in the resources spent. Hiring 10 people costs you 10x as much as hiring 1, but it is not guaranteed that the output of their teamwork will be 10x greater. Sometimes each member of the team wants to do things differently, they have problems navigating each other's code, etc.
So it could happen that "1 unit of AI" is more expensive and less capable than 1 human, and yet "10 units of AI" are more capable than 10 humans, and paying for "1000 units of AI" would be a fantastic deal, because as an average company you are unlikely to hire 1000 good programmers. Also, maybe the deal is that you pay for the AI only when you use it, but you cannot repeatedly hire and fire 1000 programmers.
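A toy numerical sketch of that intuition, with entirely invented numbers (the sublinear exponent and the per-unit AI output are assumptions chosen for illustration, not estimates):

```python
# Toy model: human team output grows sublinearly with headcount (coordination overhead),
# while "units of AI" are assumed to add output roughly linearly. All numbers are invented.

def human_team_output(n_people: int) -> float:
    # Sublinear scaling (n^0.8) as a stand-in for coordination costs.
    return n_people ** 0.8

def ai_output(n_units: int, per_unit_output: float = 0.7) -> float:
    # Each AI unit assumed less capable than one human (0.7 of a human), scaling ~linearly.
    return n_units * per_unit_output

for n in (1, 10, 1000):
    print(f"n={n:5d}  humans={human_team_output(n):7.1f}  ai={ai_output(n):7.1f}")
# n=1:    humans 1.0  > ai 0.7   (one AI unit loses to one human)
# n=10:   humans ~6.3 < ai 7.0   (ten AI units already come out ahead)
# n=1000: humans ~251 < ai 700   (and hiring 1000 good programmers may not even be possible)
```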
Replies from: sjadler
↑ comment by sjadler · 2024-12-26T00:56:08.365Z · LW(p) · GW(p)
I agree, these are interesting points, upvoted. I’d claim that AI output also isn’t linear in the resources - but nonetheless you’re right that the curve of marginal returns from each AI unit could be different from each human unit in an important way. Likewise, the easier on-demand labor of AI is certainly a useful benefit.
I don’t think these contradict the thrust of my point though? That in each case, one shouldn’t just be thinking about usefulness/capability, but should also be considering the resources necessary for achieving this.
Replies from: Viliam
↑ comment by Viliam · 2024-12-27T00:45:33.454Z · LW(p) · GW(p)
I agree that the resources matter. But I expect the resources-to-output curve to be so different from humans' that even AIs that consume a lot of resources will turn out to be useful for some critical things, probably the kind where we need many humans to cooperate.
But this is all just guessing on my end.
Also, I am not an expert, but it seems to me that in general, training an AI is expensive, but using it is not. So if it already has the capability, using it is likely to be relatively cheap.