SETI Predictions 2020-11-30T20:09:41.715Z
The next AI winter will be due to energy costs 2020-11-24T16:53:49.923Z
How GPT-N will escape from its AI-box 2020-08-12T19:34:48.410Z
Measuring hardware overhang 2020-08-05T19:59:00.003Z
Predictions for GPT-N 2020-07-29T01:16:10.678Z


Comment by hippke on Fun with +12 OOMs of Compute · 2021-03-06T10:48:41.149Z · LW · GW

At the Landauer kT limit, you need  kWh to perform your  FLOPs. That's 10,000x the yearly US electricity production. You'd need a quantum computer or a Dyson sphere to solve that problem.
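(The specific exponents in this comment did not survive extraction. The underlying bound is Landauer's principle: each irreversible bit operation dissipates at least kT ln 2. A minimal sketch of that arithmetic, assuming room temperature and, loosely, one bit erasure per FLOP — real floating-point operations erase many more bits:)

```python
import math

# Landauer's principle: erasing one bit at temperature T costs at least k*T*ln(2).
K_BOLTZMANN = 1.380649e-23  # J/K (CODATA exact value)
T = 300.0                   # K; room temperature is an assumption

def landauer_energy_joules(bit_erasures: float) -> float:
    """Minimum energy (J) to perform the given number of irreversible bit erasures."""
    return bit_erasures * K_BOLTZMANN * T * math.log(2)

def joules_to_kwh(joules: float) -> float:
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

# Illustrative FLOP count only (not the comment's lost figure):
flops = 1e35
print(f"Landauer floor for {flops:.0e} bit-erasures: "
      f"{joules_to_kwh(landauer_energy_joules(flops)):.2e} kWh")
```

At 300 K the floor is about 2.87e-21 J per bit erasure; scaling that to any given FLOP count (times the bits erased per FLOP) gives the kWh figure the comment compares against US electricity production.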

Comment by hippke on SETI Predictions · 2020-12-01T18:43:20.490Z · LW · GW

Like an AGI winter: a time of reduced funding.

Comment by hippke on Measuring hardware overhang · 2020-08-06T17:33:52.925Z · LW · GW

Right. My experiment used 1 GB for Stockfish, which would also work on a 486 machine (although at the time, it was almost unheard of...)

Comment by hippke on Measuring hardware overhang · 2020-08-06T17:31:44.175Z · LW · GW

(a) The most recent data points are from CCRL. They use an i7-4770k and the listed tournament conditions. With this setup, SF11 reaches an Elo rating of about 3500. That's what I used as the baseline to calibrate my own machine (an i7-7700k).

(b) I used the SF8 default which is 1 GB.

(c) Yes. However, the hardware details (RAM, memory bandwidth) are not all that important. You can use these SF9 benchmarks on various CPUs. For example, the AMD Ryzen 1800 is listed with 304,510 MIPS and gets 14,377,000 nodes/sec on Stockfish (i.e., 47.2 nodes per MIPS). The oldest CPU in the list, the Pentium 150, has 282 MIPS and reaches 5,626 nodes/sec (i.e., 19.9 nodes per MIPS). That's about a factor-of-two difference, due to memory and related advantages. As we gain that much every 18 months from Moore's law, it's a small (but relevant) detail, and it decreases the hardware overhang slightly. Thanks for bringing that up!
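A quick sanity check of the nodes-per-MIPS arithmetic, computed directly from the benchmark figures quoted above:

```python
# SF9 benchmark figures as quoted in the comment (MIPS and Stockfish nodes/sec).
benchmarks = {
    "AMD Ryzen 1800": {"mips": 304_510, "nodes_per_sec": 14_377_000},
    "Pentium 150":    {"mips": 282,     "nodes_per_sec": 5_626},
}

def nodes_per_mips(b: dict) -> float:
    return b["nodes_per_sec"] / b["mips"]

for cpu, b in benchmarks.items():
    print(f"{cpu}: {nodes_per_mips(b):.1f} nodes per MIPS")

# The efficiency gap between the newest and oldest CPU in the list:
ratio = nodes_per_mips(benchmarks["AMD Ryzen 1800"]) / nodes_per_mips(benchmarks["Pentium 150"])
print(f"efficiency ratio: ~{ratio:.1f}x")  # roughly a factor of two
```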

Giving Stockfish more memory also helps, but not a lot. Also, you can't give 128 GB of RAM to a 486 CPU. The 1 GB is probably already stretching it. Another small detail which reduces the overhang by likely less than one year.

There are a few more subtle details, like endgame tablebases. Back then, these were small, constrained by disk space. Today, we have 7-piece endgame tablebases available via the cloud (they weigh in at 140 TB). That seems to be worth about 50 Elo.

Comment by hippke on Predictions for GPT-N · 2020-07-29T13:25:03.251Z · LW · GW

Regarding (1): Of course a step is possible; you never know. But for arithmetic, it is not a step. It may appear that way from their poorly presented figure, but the data indicate otherwise.

Comment by hippke on Predictions for GPT-N · 2020-07-29T13:22:57.350Z · LW · GW

True. Do these tests scale out to super-human performance or are they capped at 100%?

Comment by hippke on Predictions for GPT-N · 2020-07-29T13:22:22.038Z · LW · GW

Except if you have an idea to monetize one of these sub-tasks? An investment on the order of 10M USD in compute is not very large if you can create a Pokemon Comedy TV channel out of it, or something like that.