Comments

Comment by Tom Davidson on An illustrative model of backfire risks from pausing AI research · 2023-12-02T19:19:30.630Z · LW · GW

I think your model will underestimate the benefits of ramping up spending quickly today. 

You model the size of the $ overhang as constant. But in fact it's doubling every couple of years as global spending on producing AI chips grows. (The overhang relates to the fraction of chips used in the largest training run, not the fraction of GWP spent on the largest training run.) That means that ramping up spending quickly (on training runs, or on software or hardware research) gives the $ overhang less time to grow.
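A minimal toy sketch of the point (Python, with made-up numbers rather than your model's actual parameters): treat the overhang as the ratio of maximum feasible training-run spend to the actual largest run's spend, with feasible spend doubling every two years as chip output grows. The faster actual spend ramps, the less time chip production has to enlarge the gap.

```python
# Illustrative toy model (all numbers hypothetical): the "$ overhang" is the
# ratio of maximum feasible training-run spend to the actual largest run's
# spend. Feasible spend tracks global AI-chip output, assumed here to double
# every two years.

def overhang(years: float, ramp_rate: float,
             feasible0: float = 100.0, actual0: float = 1.0,
             doubling_time: float = 2.0) -> float:
    """Remaining overhang after `years`, if actual spend grows `ramp_rate`x per year."""
    feasible = feasible0 * 2 ** (years / doubling_time)  # chip supply keeps growing
    actual = actual0 * ramp_rate ** years                # our training-run spend
    return feasible / actual

# Ramping fast (3x/year) vs. slowly (1.5x/year) over four years:
print(overhang(4, ramp_rate=3.0))   # ~4.9x overhang remains
print(overhang(4, ramp_rate=1.5))   # ~79x overhang remains
```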

Comment by Tom Davidson on But why would the AI kill us? · 2023-05-13T05:14:28.282Z · LW · GW

Why are you at 50% that AI kills >99% of people, given the points you make in the other direction?

Comment by Tom Davidson on Richard Ngo's Shortform · 2023-01-07T00:12:27.729Z · LW · GW

So, something far causally upstream of the human evaluator's opinion? E.g., an AI counselor optimizing for getting to know you.

Comment by Tom Davidson on Richard Ngo's Shortform · 2023-01-05T17:33:18.437Z · LW · GW

I think the "soup of heuristics" stories (where the AI is optimizing something far causally upstream of reward instead of something that is downstream or close enough to be robustly correlated) don't lead to takeover in the same way.

Why does it not lead to takeover in the same way?

Comment by Tom Davidson on On the Diplomacy AI · 2022-11-29T15:05:01.878Z · LW · GW

AI understands that the game ends after 1908 and modifies accordingly.

Does it? In the game you link, it seems like the bot doesn't act accordingly in the final move phase. Turkey misses a chance to grab Rumania, Germany misses a chance to grab London, and I think France misses something as well.

Comment by Tom Davidson on Towards a Formalisation of Returns on Cognitive Reinvestment (Part 1) · 2022-06-13T03:51:01.118Z · LW · GW

Glad you added these empirical research directions! If I were you, I'd prioritize them over the theoretical framework.

Comment by Tom Davidson on What can the principal-agent literature tell us about AI risk? · 2020-02-13T01:39:43.011Z · LW · GW

So either one must claim that AI-related unawareness is of a very different type or scale from ordinary human cases in our world today, or one must implicitly claim that unawareness modeling would in fact be a contribution to the agency literature.

I agree that the Bostrom/Yudkowsky scenario implies AI-related unawareness is of a very different scale from ordinary human cases. From an outside-view perspective, this is a strike against the scenario. However, this deviation from past trends does follow fairly naturally (though not necessarily) from the hypothesis of a sudden and massive intelligence gap.

Comment by Tom Davidson on What can the principal-agent literature tell us about AI risk? · 2020-02-10T21:56:07.649Z · LW · GW

Re the difference between monopoly rents and agency rents: monopoly rents would be eliminated by competition between firms, whereas agency rents would be eliminated by competition between workers. So they're different in that sense.