Comments

Comment by Peter Jin (peterhj) on Cultural accumulation · 2020-12-06T14:22:22.796Z · LW · GW

Another manifestation of cultural-artifact co-accumulation is binary bootstrapping, e.g. as used for building compilers. In this case, the correspondence between culture and artifact is rather direct: a culturally impactful idea for an addition or change to a programming language must eventually make its way into the compiler source, which itself needs to be compiled into a new binary artifact via existing binary artifacts of older versions of the compiler (or of other programs). As the programming language accumulates new ideas, newer binary artifacts are required as well (even if all the older binaries are stored). And as new programmers learn newer languages with newer ideas, the older languages gradually fall into disuse. Working with uncommonly known old binary artifacts then becomes a niche field (e.g. maintenance of legacy systems) or a fun hobby (i.e. retrocomputing).
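
As a toy illustration of this dependency structure (a minimal Python sketch with hypothetical version names; the point is just that each version's source can only become a binary via an earlier version's binary):

```python
# Toy model of a compiler bootstrap chain (all names hypothetical).
# Building version N of the compiler requires a binary of version N-1;
# lose the intermediate binaries and the chain must be replayed from v0.

def compile_with(existing_binary: str, new_source: str) -> str:
    """Stand-in for invoking `existing_binary` on `new_source`."""
    return f"bin({new_source})"

binary = "bin(v0-src)"  # the original, hand-bootstrapped artifact
for version in range(1, 6):
    binary = compile_with(binary, f"v{version}-src")

print(binary)  # -> bin(v5-src), reachable only through the whole chain
```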

Comment by Peter Jin (peterhj) on Measuring hardware overhang · 2020-08-09T20:04:11.959Z · LW · GW

Wow, thanks for the very comprehensive response. (Also fun to see someone has compiled a modern chess engine on early-mid-90s hardware and shared their results.)

Comment by Peter Jin (peterhj) on Measuring hardware overhang · 2020-08-05T21:51:11.801Z · LW · GW

Thanks for writing this post. I have a handful of quick questions: (a) What was the reference MIPS (or the corresponding CPU) you used for the c. 2019-2020 data point? (b) What was the constant amount of RAM you used to run Stockfish? (c) Do I correctly understand that the Stockfish-to-MIPS comparison is based on an equation along the lines of:

$$\frac{\text{nodes/s (old hardware)}}{\text{nodes/s (modern hardware)}} \approx \frac{\text{MIPS (old)}}{\text{MIPS (modern)}}\,?$$

So, your post piqued my interest, and I investigated the Intel 80486 a bit more with this question in mind: how comparable are old vs. new CPUs according not just to MIPS but also to other metrics?

  • Instructions per second (MIPS): Using the Wikipedia MIPS source, the 80486 reaches 70 MIPS at 100 MHz, whereas a recent 8-core Skylake CPU (i9-9900K) reaches around 50,000 MIPS/core at a 4.7 GHz clock, and closer to 40,000 MIPS/core at a sustained clock of 3.6 GHz.
  • Memory bandwidth: From Wikipedia, the 80486 has a 33 MHz bus with a 32-bit data path, so roughly 130 MB/s, whereas the 9900K is closer to 40 GB/s (from Intel).
  • Memory latency: The 80486 takes 5 bus cycles at 33 MHz, i.e. 15 CPU cycles at 100 MHz, to access memory (16-byte cache line size, from comp.arch). From AnandTech, Skylake can take 100 cycles to access memory (64-byte cache line size, typical for x86_64).
  • Memory capacity: The 80486 is a 32-bit CPU and can address up to 4 GB of RAM. Again from Wikipedia, the 80486 accepts SIMM form factor RAM, with up to 2 GB capacity (though tens of MB would have been more commonly available). The 9900K, on the other hand, supports a maximum memory capacity of 128 GB. (Rough ratios implied by these figures are worked out in the short sketch after this list.)
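
A quick back-of-the-envelope comparison of these figures (all values approximate, rounded from the sources above):

```python
# Approximate per-metric ratios between the 80486 (100 MHz) and the
# i9-9900K, using the figures cited above.
mips_486, mips_9900k = 70, 40_000   # MIPS per core (sustained clock for the 9900K)
bw_486, bw_9900k = 0.13, 40.0       # memory bandwidth, GB/s
lat_486, lat_9900k = 15, 100        # memory latency, CPU cycles

print(f"MIPS:              ~{mips_9900k / mips_486:.0f}x faster")    # ~571x
print(f"Memory bandwidth:  ~{bw_9900k / bw_486:.0f}x faster")        # ~308x
print(f"Latency in cycles: ~{lat_9900k / lat_486:.1f}x worse")       # ~6.7x
```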

It would seem that improvements in MIPS have outpaced improvements in memory-related metrics; indeed, memory latency measured in CPU cycles has gotten worse. What I don't know is how sensitive Stockfish is to variations in memory performance. Still, when accounting for memory-related metrics in addition to MIPS, I would update the estimated SF8 Elo on older hardware upwards. In other words, it seems likely to me that the SF8 Elo curve is underestimated when both MIPS and memory-related effects are included.

(There is another direction one could follow: get Stockfish to compile on an 80486 or other old hardware that one has lying around, and report back with results.)

Comment by Peter Jin (peterhj) on What are the most important papers/post/resources to read to understand more of GPT-3? · 2020-08-03T01:26:49.643Z · LW · GW

nostalgebraist's blog is a must-read regarding GPT-x, including GPT-3. Perhaps start here ("the transformer... 'explained'?"), which helps to contextualize GPT-x within the history of machine learning.

(Though, I should note that nostalgebraist holds a contrarian "bearish" position on GPT-3 in particular; for the "bullish" case instead, read Gwern.)

Comment by Peter Jin (peterhj) on Erving Goffman’s ‘paper’ · 2020-07-18T03:39:20.517Z · LW · GW

There's a reference in the footnotes of Schelling p. 116 to a paper by Goffman, "On Face-Work" (Psychiatry 18:224, 1955). The same article was republished in Goffman's book Interaction Ritual (1967).

Comment by Peter Jin (peterhj) on What problem would you like to see Reinforcement Learning applied to? · 2020-07-08T15:14:53.610Z · LW · GW

You might be interested in some recently announced work on training agents with reinforcement learning to play "no-press" Diplomacy.

Comment by Peter Jin (peterhj) on Why anything that can be for-profit, should be · 2020-04-30T19:32:48.588Z · LW · GW

Thanks, that's a clarifying distinction.

Comment by Peter Jin (peterhj) on Why anything that can be for-profit, should be · 2020-04-30T18:40:25.201Z · LW · GW

Agree that specifics are important here. Some examples I find particularly interesting, where non-profit and for-profit models overlap:

  • A university is set up as a non-profit org, receiving charitable donations from alums and other institutions or individuals. The university's main non-profit activities are education and research. The university also wholly owns a for-profit org (basically, a hedge fund) which manages the university's endowment. (edit: actually, an endowment fund also counts as a regulatory non-profit if its sole purpose is to fund a non-profit's activities)
  • Mozilla Foundation and Mozilla Corporation. Mozilla Foundation is a non-profit org that wholly owns the for-profit Mozilla Corp. Mozilla Foundation's main non-profit activities seem to be internet advocacy and funding other related projects. My impression is that Mozilla Corp. derives most of its income from search engine placement in Firefox, and that Mozilla Foundation is in turn funded by Mozilla Corp.'s profits. I haven't looked at the details, though, so I may be off.

Note that in the above two examples, I've been using the terms "for-profit" and "non-profit" in a primarily regulatory sense, i.e. for-profit = corporation, LP, etc. vs. non-profit = 501(c)(3). In those examples, the terms also seem to map onto their "intentional" sense, but it's unclear what form a general rule might take to disentangle "for-profit" vs "non-profit" in their regulatory vs "intentional" senses.

Comment by Peter Jin (peterhj) on Why anything that can be for-profit, should be · 2020-04-30T04:14:14.481Z · LW · GW

One other important feedback "loop," or rather a feedback terminal, is an M&A event. The for-profit organization's owners receive a one-time injection of cash from a new parent organization, and then the for-profit organization either (a) continues operating as a separate subsidiary of the parent, or (b) ceases its separate existence and is liquidated into the parent. (Various outcomes in between can also occur.)

I'm curious whether there is an analog of this sort of M&A "loop" with non-profit organizations. If there is no such analog, then we have two broken feedback loops in non-profits: an indifferent product/sales loop and a nonexistent M&A "loop." How relatively important are the two broken loops in explaining the strengths and weaknesses of for-profit vs non-profit models?

Comment by Peter Jin (peterhj) on What are some fun ways to spend $100,000? · 2020-04-23T04:33:58.432Z · LW · GW

One might even find it doubly fun to give a random internet forum participant $100,000.