Posts

We are headed into an extreme compute overhang 2024-04-26T21:38:21.694Z

Comments

Comment by devrandom on Is "superhuman" AI forecasting BS? Some experiments on the "539" bot from the Centre for AI Safety · 2024-09-20T09:07:23.974Z · LW · GW

There seem to be substantial problems with low-probability events, coherent predictions over time, short-term events, probabilities adding up to more than 100%, etc.


A probabilistic oracle being inconsistent is completely beside the point.  If I have a probabilistic oracle that is highly accurate but sometimes inconsistent, I can just post-process its predictions to force them into a consistent format.  For example, I can normalize the probabilities so they sum to 100%.
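A minimal sketch of that post-processing step (the outcome names and numbers are illustrative, not output from the bot):

```python
# Normalize an inconsistent forecast over mutually exclusive outcomes
# so the probabilities sum to 1. Values below are made up for illustration.
def normalize(probs):
    total = sum(probs.values())
    return {outcome: p / total for outcome, p in probs.items()}

raw = {"candidate A wins": 0.60, "candidate B wins": 0.55}  # inconsistent: sums to 1.15
print(normalize(raw))  # {'candidate A wins': 0.521..., 'candidate B wins': 0.478...}
```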

The economic value is in the overall accuracy. Being consistent is a cosmetic consideration.

Comment by devrandom on We are headed into an extreme compute overhang · 2024-06-26T09:57:57.621Z · LW · GW

New Transformer-specific chips from Etched are in the works.  This might make inference even cheaper relative to training compute.

Comment by devrandom on We are headed into an extreme compute overhang · 2024-06-18T09:10:47.418Z · LW · GW

Post from Epoch AI about trading off training compute against inference compute.

Comment by devrandom on We are headed into an extreme compute overhang · 2024-06-06T19:26:17.809Z · LW · GW

These are good points.

But don't the additional GPU requirements apply equally to training and inference?  If that's the case, then the number of inference instances that can be run on training hardware (post-training) will still be on the order of 1e6.

Comment by devrandom on We are headed into an extreme compute overhang · 2024-05-08T19:58:35.478Z · LW · GW

https://www.lesswrong.com/posts/aH9R8amREaDSwFc97/rapid-capability-gain-around-supergenius-level-seems also seems relevant to this discussion.

Comment by devrandom on We are headed into an extreme compute overhang · 2024-05-01T12:01:05.221Z · LW · GW

The main advantage is that you can immediately distribute fine-tunes to all of the copies.  This is much higher bandwidth than our own low-bandwidth/high-effort knowledge dissemination methods.

The monolithic aspect may potentially be a disadvantage, but there are a couple of mitigations:

  • AGIs are, by definition, generalists
  • you can segment the population into specialists (see also this comment about MoE)

Comment by devrandom on We are headed into an extreme compute overhang · 2024-05-01T11:54:35.146Z · LW · GW

I think this only holds if fine tunes are composable [...] you probably can't take a million independently-fine-tuned models and merge them [...]


The purpose of a fine-tune is to "internalize" some knowledge - either because it is important to have implicit knowledge of it, or because you want to develop a skill.

Although you may have a million instances executing tasks, the knowledge you want to internalize is likely much sparser.  For example, if an instance is tasked with exploring a portion of a search space and doesn't find a solution in that portion, it can just summarize its findings in a few words.  There might not even be a reason to internalize this summary - it might instead be merged with other summaries for a more global view of the search landscape.

So I don't see the need for millions of fine-tunes.  It seems more likely that you'd have periodic fine-tunes to internalize recent progress - maybe once an hour.

The main point is that a single periodic fine-tune can be copied to all instances.  This ability to copy the fine-tune is the key advantage of the instances being identical clones.
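A minimal sketch of the cycle described above, with every component stubbed out (the class and function names are hypothetical, not from any real system):

```python
# Every period: instances work and emit terse summaries, one fine-tune
# internalizes the merged summaries, and the resulting weights are copied
# to every identical clone. All logic here is a stub for illustration.
class Instance:
    def __init__(self):
        self.weights = "base"

    def work_and_summarize(self):
        # Stub for a task like exploring one partition of a search space.
        return "no solution found in my partition"

    def load_weights(self, weights):
        self.weights = weights  # copying is cheap because clones are identical

def fine_tune(merged_summaries):
    # Stub standing in for a real fine-tuning run over the merged summaries.
    return f"base+internalized({len(merged_summaries)} chars)"

def run_cycle(instances, periods=1):
    for _ in range(periods):  # e.g. once per hour in practice
        summaries = [inst.work_and_summarize() for inst in instances]
        new_weights = fine_tune("\n".join(summaries))
        for inst in instances:
            inst.load_weights(new_weights)  # broadcast the single fine-tune

run_cycle([Instance() for _ in range(1000)])
```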

Comment by devrandom on We are headed into an extreme compute overhang · 2024-04-27T10:09:06.419Z · LW · GW

On the other hand, the world already contains over 8 billion human intelligences. So I think you are assuming that a few million AGIs, possibly running at several times human speed (and able to work 24/7, exchange information electronically, etc.), will be able to significantly "outcompete" (in some fashion) 8 billion humans? This seems worth further exploration / justification.


Good point, but a few thoughts:

  • the operational definition of AGI referred to in the article is significantly stronger than the average human
  • the humans are poorly organized
  • the 8 billion humans are supporting a civilization, while the AGIs can focus on AI research and self-improvement

Comment by devrandom on We are headed into an extreme compute overhang · 2024-04-27T10:00:34.328Z · LW · GW

Thank you, I missed it while looking for prior art.

Comment by devrandom on Evolution Solved Alignment (what sharp left turn?) · 2023-11-16T15:18:43.812Z · LW · GW

If we haven't seen such an extinction in the archaeological record, it can mean one of several things:

  1. misalignment is rare, or
  2. misalignment is not rare once a species becomes intelligent, but intelligence is rare, or
  3. intelligence usually results in transcendence, so there's only one transition before biology becomes irrelevant in the lightcone (and we are it)

We don't know which.  I think it's a combination of 2 and 3.

Comment by devrandom on Introducing AlignmentSearch: An AI Alignment-Informed Conversational Agent · 2023-08-11T10:27:18.698Z · LW · GW

The app is not currently working - it complains about the token.

Comment by devrandom on LOVE in a simbox is all you need · 2023-06-18T13:03:20.206Z · LW · GW

and thus AGI arrives - quite predictably[17] - around the end of Moore's Law


Given that the brain consumes only 20 W because of biological competitiveness constraints, and that 200 kW costs only around $20/hour in data centers, we can afford to be four OOMs less energy-efficient than the brain while maintaining parity of capabilities.  This would put AGI's potential arrival at least a couple of decades before the end of Moore's Law.
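A back-of-the-envelope check of those numbers (the ~$0.10/kWh electricity price is my assumption; the 20 W brain figure is from above):

```python
# Verify: four OOMs above the brain's 20 W is 200 kW, which at an assumed
# ~$0.10/kWh data-center electricity rate costs about $20 per hour.
brain_watts = 20
power_budget_watts = brain_watts * 10**4     # four OOMs of slack -> 200,000 W
price_per_kwh = 0.10                         # assumed $/kWh (not from the comment)
cost_per_hour = power_budget_watts / 1000 * price_per_kwh
print(power_budget_watts, cost_per_hour)     # 200000 20.0
```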