Comments

Comment by Hyperion on Why Should I Assume CCP AGI is Worse Than USG AGI? · 2025-04-20T00:01:08.047Z · LW · GW

There's also the possibility that a CCP AGI can only happen through being trained to some extent on Western data (i.e., the English-language internet), because otherwise they can't scale data enough. This implies it would probably be a "Marxism with Chinese characteristics [with American characteristics]" AI, which seems to raise the difficulty of the "alignment to CCP values" technical challenge a lot.

Comment by Hyperion on Thoughts on AI 2027 · 2025-04-15T14:58:15.602Z · LW · GW

If so much effort is being focused on AI research capability, I'd actually expect the modal Agent-3 to be better than the typical OpenBrain employee at research, but completely incapable of replacing almost all employees in other fields. "Capabilities are spiky" is a clear fact about current frontier AI, but your scenario seems to underestimate it.

Comment by Hyperion on Good Research Takes are Not Sufficient for Good Strategic Takes · 2025-04-02T04:40:07.062Z · LW · GW

I suppose I mean influence over politics, policy, or governance (this is very high level, since those are all distinct and separable), rather than necessarily being a politician oneself. I do think there are some common skills, but actually being a politician weighs so many other factors more heavily that strategic skill is not selected on very strongly at all. Being a politician's advisor, on the other hand...

Yes, it's a special case, but importantly one that is not evaluated by Brier score or Manifold bucks.

Comment by Hyperion on Good Research Takes are Not Sufficient for Good Strategic Takes · 2025-03-31T16:38:44.165Z · LW · GW

I guess that's the main element I didn't mention: many people on this forum would suggest judging via predictive skill/forecasting success. I think this is an OK heuristic, but of course the long time horizons involved in many strategic questions make it hard to judge (and Tetlock has documented the problems with forecasting over the long time horizons where these questions matter most).

Mostly, the people I think of as having strong strategic skill are closely linked to some political influence (which implicitly requires this skill to effect change), such as attaining a senior government position, shaping the Biden EO/export controls, or contributing to UK government AI efforts. Alternatively, they are linked to some major idea in governance or technical safety, often by spotting something missing years before it became relevant.

Often, by interacting regularly with good thinkers, you can get a sense that they have stronger mental models than others for trends and the levers controlling those trends, but concrete judgement is sometimes extremely difficult until a key event has passed and we can judge in hindsight (especially for very high-level trends, such as Mearsheimer's disputed take on the causes of the Ukraine invasion, Fukuyama's infamous "end of history" prediction, or even Pinker's "Better Angels of Our Nature" predictions about continually declining global conflict).

Comment by Hyperion on Good Research Takes are Not Sufficient for Good Strategic Takes · 2025-03-31T03:21:08.273Z · LW · GW

Nice post! As someone who spends a lot of time in AI policy thinking about strategy and talking to people who I think are amongst the best strategic thinkers on AI, I appreciated this piece and think you generally describe the skills pretty well.

You say that "research" skill by default does not lead to strategic skill, which is very true, but this varies drastically depending on the type of research! Mechanistic interpretability, in fact, strikes me as an example of a field that is so in-the-weeds empirical, with such good feedback loops, that it is much harder for researchers in it to learn better strategic thinking. Other research fields with slower feedback loops are different; for example, research on the societal impacts of AI. More broadly, I think many fields of social science train strategic skill well, and some of the best political science thinkers clearly have significant strategic skill: Fukuyama, James C. Scott, Dominic Cummings, etc.

I made an attempt to brainstorm ways to evaluate strategic skill based on the abilities of the best thinkers I know, and came up with a list of characteristics I think it is correlated with:

  • awareness of both the current state of, and the fundamental principles behind, politics and power. A recent example I was discussing with friends: if you read the transcript of the leaked Yemen Signal group chat and could immediately write a 2-page doc with likely guesses at the power dynamics between the participants, as implied by their communication and roles, you probably have some of this awareness.
  • scenario-based thinking: the ability to cleanly split the future into distinct scenarios, reason about each independently while effectively simulating the different actors involved (which makes me wonder how much one could accelerate this with LLMs, c.f. simulator theory; see the sketch after this list), and then compare the scenarios on a small number of key drivers.
  • zooming in and zooming out: an ability not to get lost in the details and to put their focus where it is useful, whilst still staying aware of the specifics or having the skill to dig in if necessary.
  • generalist knowledge: their skills are usually T-shaped rather than I-shaped. They may have read widely across law/econ/political theory/philosophy/sociology/international relations/evo psych, or may have a background in different worlds (finance, the startup ecosystem, defense contracting/intelligence community, etc.). This generalism helps in many ways, but one of the biggest is simply that, because much of the thinking in any close intellectual community is highly correlated (everyone reads the same things), you need some source of fresh signal to come up with significant new insights.
  • an aptitude for translation, meaning translation of information and beliefs between frames/perspectives/ways of thinking, something that heavily overlaps with the ability to shape and tell believable narratives. This is the core skill you need to explain, for example, why compute is a useful lever for governing AI to politicians who don't know what a GPU is, but it is also very useful for considering and discarding lenses & framings on strategic problems until you find the one that most cleanly carves up the situation.
  • an ability, similar to that of startup founders, to tolerate everyone thinking they are wrong (correlated with disagreeableness). Often, an exceptional strategic insight backchains to actions which look to everyone else like they have a weak theory of change, which means that good thinkers are often in the position of having to explain themselves.
  • context: having both the ability (through reading widely) and the position (i.e., access to the thinking of significant figures in the field) to be high context on the big strategic problems. Context is that which is scarce (Cowen). I say context instead of seniority because they are not quite the same, although strongly correlated: I occasionally meet junior people who are clearly unusually high context for their experience level and have good strategic takes, which is one of the ways I spot promising talent.
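On the LLM-acceleration aside above: a minimal sketch of what scenario-based thinking with an LLM as simulator might look like. The scenarios, drivers, and the `query_llm` stub are all hypothetical placeholders of mine, not a real tool or API:

```python
# Hypothetical sketch: split the future into distinct scenarios, use an LLM
# to "simulate" each key actor independently, then compare scenarios on a
# small number of key drivers.

def query_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM API call here.
    return f"[model's take on: {prompt[:60]}...]"

SCENARIOS = {
    "slow takeoff, heavy regulation": ["US government", "frontier lab", "EU regulator"],
    "fast takeoff, racing dynamics": ["US government", "frontier lab", "rival state"],
}

KEY_DRIVERS = ["compute access", "political will", "public opinion"]

def simulate_scenario(name: str, actors: list[str]) -> dict[str, str]:
    """Reason about one scenario independently, role-playing one actor at a time."""
    views = {}
    for actor in actors:
        prompt = (
            f"Scenario: {name}.\n"
            f"Role-play the incentives and likely moves of: {actor}.\n"
            f"Focus on these drivers: {', '.join(KEY_DRIVERS)}."
        )
        views[actor] = query_llm(prompt)
    return views

# Each scenario is analysed separately before any cross-scenario comparison.
analyses = {name: simulate_scenario(name, actors) for name, actors in SCENARIOS.items()}
for name, views in analyses.items():
    print(name, "->", list(views))
```

The structure mirrors the bullet: each actor is simulated without seeing the others' outputs, and scenarios are only compared at the end, on the key drivers.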


Finally, I notice that many of those I think have the best strategic thought use lenses and framings inspired by systems thinking, social evolution/selection processes, memetics, biology, and other similar ways of viewing society and human behavior.

Comment by Hyperion on Report on Frontier Model Training · 2023-08-30T20:49:33.828Z · LW · GW

This is very impressive work, well done! Improving the compute/training literacy of the community is very valuable IMO; I have often thought that not knowing much about this leads to poorer conclusions.

Comment by Hyperion on Inflection.ai is a major AGI lab · 2023-08-11T10:12:13.945Z · LW · GW

Note that the MLPerf benchmark for GPT-3 is not run on the full C4 dataset; it uses only 0.4% of it.

See: https://twitter.com/abhi_venigalla/status/1673813863186452480?s=20
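For scale, a quick back-of-envelope; the ~156B-token figure for C4's English train split is my assumption, not something from the linked thread:

```python
# Rough size of the MLPerf GPT-3 benchmark's training slice, assuming
# C4 (en) is ~156B tokens; both figures are approximate.
c4_tokens = 156e9
fraction = 0.004  # 0.4% of the dataset
print(f"~{c4_tokens * fraction / 1e9:.1f}B tokens")  # ~0.6B tokens
```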

Comment by Hyperion on A challenge for AGI organizations, and a challenge for readers · 2022-12-07T03:45:24.602Z · LW · GW

This is an intuition based only on speaking with researchers working on LLMs, but I think OAI believes a model can simultaneously be good enough at next-token prediction to assist with research while remaining very, very far from being a powerful enough optimizer to realise that it is being optimized for a goal, or that deception is an optimal strategy, since those latter two capabilities require much more optimization power. On this view, the default state of cutting-edge LLMs for the next few years is GPT-3 levels of deception (essentially none) combined with graduate-student levels of research-assistant ability.

Comment by Hyperion on Why I think strong general AI is coming soon · 2022-10-02T22:04:08.007Z · LW · GW

I don't think it's odd at all: even a terrible chess bot can outplay almost all humans, because most humans haven't studied chess. MATH is a dataset of problems from high school competitions, which are well known to require only a limited body of math knowledge and to be solvable by applying simple algorithms.

I know chain-of-thought prompting well: it's not a way to lift a fundamental constraint; it is just a more efficient way of targeting the weights that represent what you want in the model.
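For concreteness, the prompting difference at issue, as strings only; no particular model or API is assumed:

```python
# Direct prompt vs. chain-of-thought prompt for the same question.
question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

direct_prompt = f"Q: {question}\nA:"

# The appended instruction elicits intermediate reasoning steps, steering
# the model toward behavior already latent in its weights rather than
# adding any new capability.
cot_prompt = f"Q: {question}\nA: Let's think step by step."
```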

It really isn't hard. No new paradigms are required. The proofs of concept are already implemented and work. It's more a question of when one of the big companies decides it's worth probing at scale.

You don't provide any proof of this, just speculation, much of it based on massive oversimplifications (if I have time I'll write up a full rebuttal). For example, RWKV is more a nice idea (better on some benchmarks, worse on others) than some kind of new architecture that unlocks greater overall capabilities.

Comment by Hyperion on Why I think strong general AI is coming soon · 2022-10-02T17:53:00.693Z · LW · GW

I mean, to me all this indicates is that our conception of "difficult reasoning problems" is wrong and incorrectly linked to our conception of "intelligence". It shouldn't be surprising that an LM can solve problems posed in text which are notoriously based around applying a short step-by-step algorithm, when it has many examples in its training set.

To me, this says that "just slightly improving our AI architectures to be less dumb" is incredibly hard, because models that we would previously have expected to be able to solve trivial arithmetic problems, given that they can do other "harder" problems, turn out to be unable to do so.

Comment by Hyperion on chinchilla's wild implications · 2022-08-01T19:28:49.578Z · LW · GW

Mostly Discord servers, in my experience: EleutherAI is a big, well-known one, but there are others with high concentrations of top ML researchers.

Comment by Hyperion on Everything I Need To Know About Takeoff Speeds I Learned From Air Conditioner Ratings On Amazon · 2022-07-24T19:15:03.360Z · LW · GW

I happened to be reading this post today, as Science has just published a story on a fabrication scandal regarding an influential paper on amyloid-β: https://www.science.org/content/article/potential-fabrication-research-images-threatens-key-theory-alzheimers-disease

I was wondering whether this scandal changes the picture you described at all.

Comment by Hyperion on “Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments · 2022-04-27T19:16:12.841Z · LW · GW