In Defence of Spock 2021-04-21T21:34:04.206Z
Zac Hatfield Dodds's Shortform 2021-03-09T02:39:33.481Z


Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on [AN #152]: How we’ve overestimated few-shot learning capabilities · 2021-06-17T10:30:39.014Z · LW · GW

Testing with respect to learned models sounds great, and I expect there's lots of interesting GAN-like work to be done in online adversarial test generation.

IMO there are usefully testable safety invariants too, but mostly at the implementation level rather than system behaviour - for example "every number in this layer should always be finite". It's not the case that this implies safety, but a violation implies that the system is not behaving as expected and therefore may be unsafe.
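A minimal sketch of such an invariant check (the layer name and nested-list activations are purely illustrative; a real version would hook into the framework's tensors):

```python
import math

def check_layer_finite(layer_name, activations):
    """Implementation-level safety invariant: every number in this layer's
    output should always be finite. A violation doesn't pinpoint the hazard,
    but it does prove the system is not behaving as expected."""
    flat = [x for row in activations for x in row]
    if not all(math.isfinite(x) for x in flat):
        raise AssertionError(f"non-finite values in layer {layer_name!r}")

# Passes silently on well-behaved activations:
check_layer_finite("dense_1", [[0.1, -2.5], [3.0, 0.0]])
# ...and would raise on e.g. [[float("nan"), 1.0]].
```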

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on [AN #152]: How we’ve overestimated few-shot learning capabilities · 2021-06-16T23:10:28.671Z · LW · GW

High Impact Careers in Formal Verification: Artificial Intelligence

My research focuses on advanced testing and fuzzing tools, which are so much easier to use that people actually use them - e.g. in PyTorch and, I understand, at DeepMind. If people seem interested I could write up a post on their relevance to AI safety in a few weeks.


Core idea: even without proofs, writing out safety properties or other system invariants in code is valuable both (a) for deconfusion, and (b) because we can have a computer search for counterexamples using a variety of heuristics and feedbacks. At the current margin this tends to improve team productivity and shift ML culture towards valuing specifications, which may be a good thing for AI x-risk.
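As a toy illustration of the core idea (all names here are made up; this is a much cruder search than tools like Hypothesis, which add shrinking and coverage-guided heuristics): write the invariant as code, then let the computer hunt for counterexamples.

```python
import math
import random

def naive_softmax(xs):
    # The system under test; the specification is "outputs are always finite".
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def outputs_are_finite(xs):
    try:
        return all(math.isfinite(y) for y in naive_softmax(xs))
    except (OverflowError, ZeroDivisionError):
        return False

def search_for_counterexample(prop, gen, tries=1000, seed=0):
    """Random search for an input violating the property."""
    rng = random.Random(seed)
    for _ in range(tries):
        case = gen(rng)
        if not prop(case):
            return case
    return None

# Large logits overflow exp(), so the search quickly finds a violating input:
counterexample = search_for_counterexample(
    outputs_are_finite,
    lambda rng: [rng.uniform(-1000, 1000) for _ in range(5)],
)
```

Even this crude version demonstrates the deconfusion benefit: writing `outputs_are_finite` forces you to decide what the invariant actually is before the computer starts probing it.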

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on ML is now automating parts of chip R&D. How big a deal is this? · 2021-06-11T08:48:13.931Z · LW · GW

No, I think we mostly agree - I'd expect TPUs to be within, say, 4x of practically optimal for the things they do. The remaining ~1 OOM that I think is possible for non-novel tasks has more to do with specialisation, e.g. model-specific hardware design, and that definitely has an asymptote.

The interesting case is if we can get TPU-equivalent hardware days after designing a new architecture, instead of years after, because (IMO) 1,000x speedups over CPUs are plausible.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on ML is now automating parts of chip R&D. How big a deal is this? · 2021-06-10T14:16:25.024Z · LW · GW

Yes, that's a fair summary - though in "not hard ... if you design custom hardware" the second clause is doing a lot of work.

As to the magnitude of improvement, really good linear algebra libraries are ~1.5x faster than 'just' good ones, GPUs are a 5x-10x improvement on CPUs for deep learning, and TPUs 15x-30x over Google's previous CPU/GPU combination (this 2018 post is a good resource). So we've already seen 100x-400x improvement on ML workloads by moving naive CPU code to good but not hyper-specialised ASICs.

Truly application-specific hardware is a very wide reference class, but I think it's reasonable to expect equivalent speedups for future applications. If we're starting with something well-suited to existing accelerators like GPUs or TPUs, there's less room for improvement; on the other hand TPUs are designed to support a variety of network architectures and fully customised non-reprogrammable silicon can be 100x faster or more... it's just terribly impractical due to the costs and latency of design and production with current technology.

For example, with custom hardware you can do bubblesort in O(n) time, by adding a compare-and-swap unit between the memory for each element. Or with a 2D grid of these, you can pipeline your operations and sort lists with O(n) latency and O(1) amortised time! Matching the logical structure of your chip to the dataflow of your program is beyond the scope of this article (which is "just" physical structure), but also almost absurdly powerful.
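The compare-and-swap scheme is easy to model in software: odd-even transposition sort is exactly "bubblesort in hardware", where on each clock cycle every alternate adjacent unit fires in parallel, and n cycles suffice (here the parallelism is only simulated, so the software version is still O(n²) work):

```python
def odd_even_transposition_sort(xs):
    """Software model of the hardware scheme: each 'clock cycle' the
    compare-and-swap units on alternating adjacent pairs all fire at
    once; after n cycles the list is sorted."""
    xs = list(xs)
    n = len(xs)
    for step in range(n):
        # Even cycles compare pairs (0,1), (2,3), ...; odd cycles (1,2), (3,4), ...
        for i in range(step % 2, n - 1, 2):
            if xs[i] > xs[i + 1]:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs
```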

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on ML is now automating parts of chip R&D. How big a deal is this? · 2021-06-10T11:53:09.974Z · LW · GW

Circuit design is the main bottleneck for use of field-programmable gate arrays. If fully-automated designs become good enough, we could see substantial gains from having optimising compilers output a gate layout rather than machine code for an xPU or specific accelerator. We already have some such compilers, and this looks like a meaningful step towards handling non-toy-scale problems with them.

The main change here wouldn't be so much training speed - we already have TPUs etc. to accelerate current workloads, and fabricating a new design as ASICs rather than FPGA layouts takes months-to-years at scale - but rather the latency with which we can try out custom hardware for novel ML paradigms such as transformers. What is to transformers as TPUs are to CNNs? Specifically for novel tasks, this could be a 10x-1000x speedup, and 2x-50x speedup for existing workloads... though I understand they're bottlenecked more on data movement between nodes than compute.

TLDR: a small step in a high-long-term-impact trend.

(Source: while I'm not a hardware specialist, I've worked with the PyMTL team at Cornell on verification and validation of their Python-to-Verilog-to-silicon hardware design tools, followed high-level developments in custom compute hardware for around a decade, and worked on peta-scale supercomputing for a few years.)

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Changing my life in 2021, halfway through · 2021-06-10T05:35:08.743Z · LW · GW

I know very little about good personal financial management other than that ideally revenue > expenses. If you found any source for learning about personal finance useful please post it.

For day-to-day personal finance, "disposable income > expenses" is sufficient - automate payments to long-term savings, rent, etc; and then spend the balance as you will. Some people get a lot of value out of detailed budgeting techniques or tools, but IMO that's mostly personal preference.

The best short introduction to personal finance for the long term is William J Bernstein's If You Can: How Millennials Can Get Rich Slowly (pdf). It's only sixteen pages long, with recommended follow-up reading and actions for your second pass through.

Before considering any departure from the conventional wisdom of low-fee diversified index funds, you should also read Inadequate Equilibria and some of Taleb (I usually suggest Fooled by Randomness and Antifragile).

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-10T05:34:46.970Z · LW · GW

Not only that, there are hardly any other existential risks to be avoided by Mars colonization, either.

Let's use Toby Ord's categorisation - and ignore natural risks, since the background rate is low. Assuming a self-sustaining civilisation on Mars which could eventually resettle Earth after a disaster:

  • nuclear war - avoids accidental/fast escalation; unlikely to help in deliberate war
  • extreme climate change or environmental damage - avoids this risk entirely
  • engineered pandemics - strong mitigation
  • unaligned artificial intelligence - lol nope.
  • dystopian scenarios - unlikely to help

So Mars colonisation handles about half of these risks, and maybe 1/4 of the total magnitude of risks. It's a very expensive mitigation, but IMO still clearly worth doing even solely on X-risk grounds.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Maximizing Yield on US Dollar Pegged Coins · 2021-06-08T05:21:43.336Z · LW · GW

You are picking up pennies in front of a steamroller.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-08T05:17:46.033Z · LW · GW

He clearly cares about AI going well and has been willing to invest resources in increasing these odds in the past via OpenAI and then Neuralink.

Both of these examples betray an extremely naive understanding of AI risk.

  • OpenAI was intended to address AI-xrisk by making the superintelligence open source. This is, IMO, not a credible way to avoid someone - probably someone in a hurry - getting a decisive strategic advantage.
  • Neuralink... I just don't see any scenario where humans have much to contribute to superintelligence, or where "merging" is even a coherent idea, etc. I'm also unenthusiastic on technical grounds.
  • SpaceX. Moving to another planet does not save you from misaligned superintelligence. (being told this is, I hear, what led Musk to his involvement in OpenAI)

So I'd attribute it to some combination of too many competing priorities, and simply misunderstanding the problem.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Restoration of energy homeostasis by SIRT6 extends healthy lifespan · 2021-06-05T23:03:47.942Z · LW · GW


(I know that's the title of the Nature paper, and kudos for stating "in mice" more prominently in the post body than the paper did, but IMO it's worth appending to the title.)

While most SIRT1 knockout mice die perinatally, 129svJ-background SIRT6 knockout mice exhibit severe developmental defects but survive to about 4 weeks of age. Similarly, in humans and primates, mutations resulting in SIRT6 inactivation result in prenatal or perinatal lethality accompanied by severe developmental brain defects.

This is maybe interesting as a suggestion of which pathways to investigate for aging-related loss of cellular energy homeostasis, but it's not even plausible that it could be therapeutic in humans.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Donating Bitcoin to Crisis Zones - Is there a platform collating and verifying public key address for individuals in conflict zones which allows donors to send Bitcoin directly to them? · 2021-05-27T14:12:58.052Z · LW · GW

the idea of donating directly to people in need is very attractive

GiveDirectly are world-class experts in efficiently transferring money to people in extreme poverty who need it most, including validation and ensuring that it arrives in a useful (i.e. spendable) form. They accept Bitcoin, Ethereum, and even Dogecoin.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-26T15:10:54.324Z · LW · GW

Unusually competent does not rule out stupid mistakes like permitting airline pilots to quarantine for only three days, as if they had a better respiratory system than other humans.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Zac Hatfield Dodds's Shortform · 2021-05-25T02:09:39.049Z · LW · GW

Downgrading my competence estimation: Taiwan and Singapore's current coronavirus surge should serve as a warning to Australia (ABC Australia). Excerpts:


Taiwan's status for having successfully contained the virus was challenged in April ... Rules had been relaxed prior to the outbreak, allowing pilots to quarantine for three days instead of the full 14.

At first, infections were reported from pilots, hotel workers and their family members. ... Taiwanese were staying at the same hotel as the quarantining pilots. From there, the virus is believed to have made its way into Taipei's Wanhua district, known for its "tea houses" ... Many who tested positive were unwilling to declare they had visited such adult entertainment venues, making contact tracing more difficult.


Even as Singapore was being celebrated, cases were quietly spreading through the island's one vulnerable location: Changi International Airport. It's believed that airport workers who came into contact with travellers from high-risk nations may have contracted the virus before visiting Changi's food court, which is open to the public.

Many of the cases linked to the airport cluster were later found to have a highly contagious Indian variant, known as B.1.617. ... "It's not like everything was relaxed in Singapore. It's not like behaviour has changed in the last six months. But I do think we've got a less-forgiving virus, which is more easily transmitted,"

Only 29 per cent of Singaporeans have received one dose. ... They're now considering lengthening the time between doses and vaccinating younger adults.

How a similar scenario would play out in Australia

What the recent outbreaks in Singapore and Taiwan show is that successful containment strategies can be thwarted by complacency and a failure to identify and act quickly to contain quarantine breaches.

Comment by zac-hatfield-dodds on [deleted post] 2021-05-21T04:04:17.210Z

I'm fond of Lightman's description of what it means to be a public intellectual. Paraphrased for brevity:

When a person trained in a particular discipline, and on the faculty of a university, decides to write and speak to a larger audience than their professional colleagues, he or she becomes a "public intellectual."

Level I: Speaking and writing for the public exclusively about your discipline. This kind of discourse is extremely important, and it involves good, clear, simplified explanations of the national debt, how cancer genes work, etc.

Level II: Speaking and writing about your discipline and how it relates to the social, cultural, and political world around it. A scientist in this category might include a lot of biographical material, glimpses into the society and anthropology of the culture of science. For example, James Watson's The Double Helix, ...

Level III: The intellectual has become elevated to a symbol, a person that stands for something far larger than the discipline from which he or she originated. A Level III intellectual is asked to write and speak about a large range of public issues, not necessarily directly connected to their original field of expertise at all. After he became famous in 1919, Einstein was asked to give public addresses on religion, education, ethics, philosophy, and world politics. Einstein had become a symbol of gentle rationality and human nobility.

It's also worth reading this reflection on the public/civic role of British philosophers from the sixties through the nineties; in particular it's an interesting contrast with the American pattern of public-intellectual scientists from the seventies or so onwards.

Comment by zac-hatfield-dodds on [deleted post] 2021-05-18T03:00:22.843Z

Quoting myself last week:

I don't want our analysis to lose sight of the fact that facing these tradeoffs is stupid and avoidable, and that almost every country could have done so much better. Avoiding outbreaks is so much cheaper and easier than dealing with them that the choice to do so should have been overdetermined. ...

It's much better to be careful before exponential growth than after it. The policy playbook we learn from COVID should be how and why to avoid such situations, not how to live with them for extended periods.

You don't need infeasible surge pricing for the-right-to-buy-groceries, and you don't need to fine-tune the number of people at live entertainment, if you competently follow a (any!) coherent cost-benefit model - because you'll keep cases near zero and there won't be a pandemic in the first place.

Australia mostly did this. New Zealand did this. Taiwan did this. There's no secret! It's not even difficult!

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Who has argued in detail that a current AI system is phenomenally conscious? · 2021-05-14T22:25:14.020Z · LW · GW

Andrej Karpathy's Forward Pass is the closest I know of.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on MikkW's Shortform · 2021-05-14T22:02:08.082Z · LW · GW

The problem is that employers can't take your word for it, because there are many people who claim the same but are lying or honestly mistaken.

Do you have, or can you create, a portfolio of things you've done? Open-source contributions are good for this because there's usually a review process and it's all publicly visible.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Biological Holism: A New Paradigm? · 2021-05-10T07:00:40.540Z · LW · GW

I think that we agree that it can be useful to model many processes as optimization.

My point is that it's dangerous to lose the distinction between "currently useful abstraction" and "it actually is optimization" - much like locally-optimal vs. globally-optimal, it's a subtle slip but can land you in deep confusion and on an unsound basis for intervention. Systems people seem particularly prone to this kind of error, maybe because of the tendency to focus on dynamics rather than details.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Biological Holism: A New Paradigm? · 2021-05-10T03:49:14.066Z · LW · GW

My first problem with holism, or higher order cybernetics generally, is that while it's an interesting and sometimes illuminating perspective, it doesn't give me useful tools. It's not even really a paradigm, in that it provides no standard methods (or objects) of enquiry, doesn't help much with interpretation, etc.

The "AI effect" (as soon as it works, no one calls it AI any more) has a similar application in cybernetics: the ideas are almost omnipresent in modern sciences and engineering, but we call them "control engineering" or "computer science" or "systems theory" etc. Don't get me wrong: this was basically my whole degree ("interdisciplinary studies"), I've spent the last few years at ANU's School of Cybernetics (eg), and I love it, but it's more of a worldview than a discipline. I frame problems with this lens, and then solve them with causal statistics or systems theory or HCI or ...

My second problem is that it leaves people prone to thinking that they have a grand theory of everything, but without the expertise to notice the ways in which they're wrong - or humility to seek them out. Worse, these details are often actually really important to get right. For example:

I want to reiterate: optimization is a lens through which you can view the behavior of nonlinear systems. There's no need to take it literally. Well, it's a matter of personal choice. If thinking about the world in these terms make you feel better, there's no harm in it I suppose.

I strongly disagree: viewing most systems as optimizers will mislead you, both epistemically and affectively. We have a whole tag for "Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers."! The difference really matters!

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Mark Miro's Shortform · 2021-05-09T02:22:15.581Z · LW · GW

What, specifically, is a '10x result'? How would the editor(s) recognise such results?

I suspect that the closest thing to what you're thinking of is test of time awards.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Occam’s Guillotine · 2021-05-08T01:26:22.309Z · LW · GW

And also, you totally can build a bridge without knowing the tensile strength of steel or the compressive strength of concrete!

  • Trial and error works, and with scale models (plus awareness of square/cube scaling etc) it's cheap too
  • Bridges can also be made out of stone, wood, rope, iron, living roots, ...

Engineering regularly works with materials and processes where we don't understand the relevant underlying science - for example, we spent most of the 20th century doing aerodynamics with wind tunnels and test flights rather than fluid dynamics.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Occam’s Guillotine · 2021-05-08T01:05:31.661Z · LW · GW

The 'Igon Value' effect

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Let's Go Back To Normal · 2021-05-06T14:51:15.312Z · LW · GW

I think it's a valuable post, and agree that as an individual in the USA in 2021 it's worth thinking carefully about these tradeoffs. In Australia though, it's trivial to avoid facing these tradeoffs, because of the different policies we followed through 2020. (I will never claim they were great policies, but they were good enough)

My broader point is that the policy playbook we learn from COVID should be how and why to avoid such situations, not how to live with them for extended periods. Just do the proper lockdown for four to six weeks at the start instead of the end, and it's over! We wouldn't even need vaccines, let alone masks!

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Let's Go Back To Normal · 2021-05-06T01:02:16.278Z · LW · GW

This approach to tradeoffs makes sense for the USA in 2021.

I just don't want our analysis to lose sight of the fact that facing these tradeoffs is stupid and avoidable, and that almost every country could have done so much better. Avoiding outbreaks is so much cheaper and easier than dealing with them that the choice to do so should have been overdetermined.

  • The background risk rate in Australia is roughly zero. We occasionally get "outbreaks" of single-digit cases, lock down one city for a few days to trace it, and then go back to normal.
  • It's not even worth wearing masks here.
  • Australia is taking a (frustratingly) slow and cautious approach to the vaccine rollout. This will probably cost zero lives (though with a scary right-tail), plausibly saving some from avoided adverse reactions. IMO we should be going way faster on tail-risk and economic grounds, but...

TLDR: it's much better to be careful before exponential growth than after it.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on What do the reported levels of protection offered by various vaccines mean? · 2021-05-05T09:58:26.834Z · LW · GW

See COVID-19 vaccine efficacy and effectiveness in The Lancet:

Vaccine efficacy is generally reported as a relative risk reduction—ie, the ratio of attack rates [i.e. any symptomatic infection] with and without a vaccine.

Ranking by reported efficacy gives relative risk reductions of 95% for the Pfizer–BioNTech, 94% for the Moderna–NIH, 90% for the Gamaleya, 67% for the J&J, and 67% for the AstraZeneca–Oxford vaccines. However, RRR should be seen against the background risk of being infected and becoming ill with COVID-19, which varies between populations and over time.

The good news is that the vaccines (particularly mRNA vaccines) are also very effective at preventing severe disease conditional on any symptoms, and reasonably effective at preventing death conditional on severe disease - so in an important sense, the 95% figure is an underestimate of the relevant risk. In particular, the AstraZeneca vaccine has much better than 67% relative risk reduction of severe COVID.
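The arithmetic behind those headline numbers is simple; a sketch with case counts roughly matching the published Pfizer trial results (treat the exact figures as illustrative):

```python
def relative_risk_reduction(attack_rate_vaccinated, attack_rate_placebo):
    # RRR = 1 - ratio of attack rates with and without the vaccine
    return 1 - attack_rate_vaccinated / attack_rate_placebo

# Roughly the Pfizer trial: 8 cases among ~18,200 vaccinated
# vs 162 cases among ~18,300 placebo recipients.
rrr = relative_risk_reduction(8 / 18198, 162 / 18325)  # ~ 0.95
```

Note that both attack rates depend on background prevalence during the trial, which is exactly why the Lancet piece cautions against comparing RRRs across trials run in different populations at different times.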

It would be nice to have better information on prevention of transmission and asymptomatic infection too, but my understanding is that they're good enough that the challenge in reaching herd immunity is almost entirely the fraction of people vaccinated (i.e. politics and public communications).

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on ACrackedPot's Shortform · 2021-05-05T00:18:39.018Z · LW · GW

You have (re)invented delay-line memory!

Acoustic memory in mercury tubes was indeed used by most first-generation electronic computers (1948-60ish); I love the aesthetic but admit they're terrible even compared to electromagnetic delay lines. An even better (British) aesthetic would be Turing's suggestion of using gin as the acoustic medium...

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on There’s no such thing as a tree (phylogenetically) · 2021-05-04T08:07:11.772Z · LW · GW

Height is also useful for reducing impact of fires, herbivores, some parasites, etc.; and gives you substantially better volume-of-airflow-over-leaves which can be helpful - a flat sheet of leaf-material would underperform substantially for respiration, even before considering the variable angle of sunlight for photosynthesis.

With some handwaving, we seem to agree that "the absence of trees becoming grass-like indicates that there's no nice/large path in evolution-trajectory-space which is continuously competitive" and I'm gesturing towards the known-to-be-difficult C3/C4 distinction as a potentially-relevant feature of that space.

Note that while our non-expert speculation might turn up interesting relevant considerations, the space is very complicated and high-dimensional, and I at least have very little data or subject matter expertise. I therefore expect my analysis to be wrong, though I do enjoy and learn from doing it.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on The Schelling Game (a.k.a. the Coordination Game) · 2021-05-03T23:31:03.600Z · LW · GW

Dixit, which has similar gameplay, does develop group-independent skills - though in-group references often dominate skill.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on There’s no such thing as a tree (phylogenetically) · 2021-05-03T07:14:21.180Z · LW · GW

Why don’t more plants evolve towards the “grass” strategy?

I suspect it's related to the distinction between C3 and C4 photosynthesis - both are common in grasses and C4 species tend to do better in hot climates, but trees seem to have trouble evolving C4 pathways even though C4 photosynthesis has evolved independently on 60+ separate occasions.

(also IMO monocots top out at "kinda tree-ish" - they do have a recognisable trunk, but more fibrous than woody)

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Viliam's Shortform · 2021-05-03T03:02:33.265Z · LW · GW

While I think much of the anger about Bitcoin is caused by status considerations, other reasons to be more upset about Bitcoin than land rents include:

  • Land also has use-value, Bitcoin doesn't
  • Bitcoin has huge negative externalities (environmental/energy, price of GPUs, enabling ransomware, etc.)
  • Bitcoin has a different set of tradeoffs to trad financial systems; the profusion of scams, grifts, ponzi schemes, money laundering, etc. is actually pretty bad; and if you don't value Bitcoin's advantages...
  • Full-Georgist 'land' taxes disincentivise searching for superior uses (IMO still better than most current taxes, worse than Pigou-style taxes on negative externalities)
Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on [Letter] Re: Advice for High School · 2021-04-30T13:21:06.015Z · LW · GW


For tech history - it's worth knowing how modern industrial civilisation arose! - I'd recommend

Why read old books to understand technology? Because they come from a different world-view and make very different assumptions about the direction that things are going - because they have only the context of their past, and can't fit it to the usual narratives about WWII and post-war economic and industrial history. "The books of the future would be just as good a corrective as the books of the past, but unfortunately we cannot get at them."

The Code Book: The Science of Secrecy From Ancient Egypt to Quantum Cryptography by Simon Singh

I haven't re-read it in years, but this is the book that got me interested in computer science (and later reading The Art of Unix Programming on a hike got me into software engineering).

I'd also recommend Quantum Computing Since Democritus by Scott Aaronson as the single best introduction to quantum computing from someone who actually knows how it works and what it can't do.

Seeing Like A State: How Certain Schemes to Improve the Human Condition Have Failed by James C. Scott

Disagree - it's a good book, but you're better off reading the linked review and then James C. Scott's Two Cheers for Anarchism instead.

The Black Swan by Nassim Nicholas Taleb

A colorful author, but there's plenty to learn from his books. If you can read more than one, I'd suggest Fooled by Randomness and then Antifragile instead (the preceding and following books; between them they cover almost all of The Black Swan).

On the mathematical end it's also worth skimming through his Statistical Consequences of Fat Tails. Pair with Gwern's statistical notes, and if you're going to do it properly Judea Pearl's Causality and E.T. Jaynes' Probability Theory: The Logic of Science.


Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on [Linkpost] Treacherous turns in the wild · 2021-04-29T02:21:46.204Z · LW · GW

Trying to unpack why I don't think of this as a treacherous turn:

  • It's a simple case of a nearest unblocked strategy
  • I'd expect a degree of planning and human-modelling which were absent in this case. A 'deception phase' based on unplanned behavioural differences in different environments doesn't quite fit for me.
  • Neither the evolved organisms nor the process of evolution are sufficiently agentlike that I find the "treacherous turn" to be a useful intuition pump.

I think it's mostly the intuition-pump argument; there are obviously risks that you evolve behaviour that you didn't want (mostly but not always via goal misspecification), but the treacherous turn to me implies a degree of planning and possibly acausal cooperation that would be very much more difficult to evolve.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on What topics are on Dath Ilan's civics exam? · 2021-04-27T14:04:18.815Z · LW · GW

A reasonable argument could be made that in our form of democracy, civics knowledge is of little use to the average citizen. This is because each of us has such an infinitesimal 'vote', and each person well educated in civics has their vote drowned out.

IMO the assumption that civics knowledge is only useful when voting is itself a concerning failure of civics education. Above-average civics knowledge might reveal high-value opportunities such as advocacy, focussed policy submissions, talking to friends about particular policies, raising public awareness of important problems, etc.

Increasing the average level of civics knowledge is also (again, IMO) very valuable. The obvious benefits include that this disproportionately benefits good policymaking; beyond that I'd also expect volunteering to become both more common and more effective, along with improved coordination generally. Civics is basically the study of "how does our society coordinate", after all!

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on [Linkpost] Treacherous turns in the wild · 2021-04-27T00:47:35.697Z · LW · GW

I would not call this a treacherous turn - the "treachery" was a regular and anticipated behaviour, and "evolve higher replication rates in the environment" is a pretty obvious outcome.

Suppressing-and-ignoring failed "treachery" in the sandbox just has the effect of adding selection pressure towards outcomes that the censor doesn't detect. Important lesson from safety engineering: you need to learn from near misses, or you'll eventually have a nasty accident. In a real turn, you don't get this kind of warning.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Malicious non-state actors and AI safety · 2021-04-26T03:34:24.301Z · LW · GW

Instead, I'm worried about the sort of person who become a mass-shooter or serial killer. ... I'm worried about people who value hurting others for its own sake.

Empirically, almost or actually no mass-shooters (or serial killers) have this kind of abstract and scope-insensitive motivation. Look at this writeup of a DoJ study: it's almost always a specific combination of a violent and traumatic background, a short-term crisis period, and ready access to firearms.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on For mRNA vaccines, is (short-term) efficacy really higher after the second dose? · 2021-04-26T01:44:18.000Z · LW · GW

Similarly, the effect of the second dose might be to maintain the high initial effectiveness for a longer period of time, by "reminding" your immune system not to relax too soon.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Daniel Kokotajlo's Shortform · 2021-04-25T12:16:00.185Z · LW · GW

The IEA is a running joke in climate policy circles; they're transparently in favour of fossil fuels and their "forecasts" are motivated by political (or perhaps commercial, hard to untangle with oil) interests rather than any attempt at predictive accuracy.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Malicious non-state actors and AI safety · 2021-04-25T08:04:23.046Z · LW · GW

I refer you to Gwern's Terrorism Is Not About Terror:

Statistical analysis of terrorist groups’ longevity, aims, methods and successes reveal that groups are self-contradictory and self-sabotaging, generally ineffective; common stereotypes like terrorists being poor or ultra-skilled are false. Superficially appealing counter-examples are discussed and rejected. Data on motivations and the dissolution of terrorist groups are brought into play and the surprising conclusion reached: terrorism is a form of socialization or status-seeking.

and Terrorism Is Not Effective:

Terrorism is not about causing terror or casualties, but about other things. Evidence of this is the fact that, despite often considerable resources spent, most terrorists are incompetent, impulsive, prepare poorly for attacks, are inconsistent in planning, tend towards exotic & difficult forms of attack such as bombings, and in practice ineffective: the modal number of casualties per terrorist attack is near-zero, and global terrorist annual casualty have been a rounding error for decades. This is despite the fact that there are many examples of extremely destructive easily-performed potential acts of terrorism.

so any prospective murderer who was "malicious [and] willing to incur large personal costs to cause large amounts of suffering" would already have far better options than a mass shooting. Since we don't see them, I reject the "effective altruism hypothesis" and wouldn't bother worrying about maliciously non- or anti-aligned AI.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Naturalism and AI alignment · 2021-04-25T03:16:48.072Z · LW · GW

Assuming we completely solved the problem of making AI do what its instructor tells it to do

This seems to either (a) assume the whole technical alignment problem out of existence, or (b) claim that paperclippers are just fine.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on The Fall of Rome, II: Energy Problems? · 2021-04-24T03:30:10.499Z · LW · GW

Wikipedia has a page on Roman deforestation, which even uses the phrase "peak wood" - so depletion was definitely a concern (and Italy has never recovered the pre-Roman forests).

All that said, I think you're still underestimating the costs of transport!

  • Overland transport - i.e. wagons, and perhaps draft animals - is prohibitively expensive for anything more than a single-day journey of up to ~10 miles. Fuel is bulky, heavy, and frankly not that valuable.
  • Moving bulk freight by water is much better - whether floating logs down a river Canadian-style, or loading boats for riverine or ocean transport. Even so, I don't know of any cases where fuel was transported or traded like this - and shipping enough grain into Rome was a constant and difficult problem.

Population density was much more even before the industrial revolution (smaller cities, more farmers, etc.), and it's reasonable to explain depletion solely in terms of local fuel for local use. If timber was valuable enough to transport, it was almost always as a material rather than fuel.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Two Designs · 2021-04-23T03:03:18.282Z · LW · GW

Important caveat for the pass-through approach: if any of your build_dataset() functions accept **kwargs, you have to be very careful about how they're handled to preserve the property that "calling a function with unused arguments is an error". It was a lot of work to clean this up in Matplotlib...
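A minimal sketch of the failure mode (the `build_dataset` name comes from the post being replied to; the parameters here are hypothetical):

```python
def build_dataset_lenient(path, **kwargs):
    # Pass-through style: unrecognised keyword arguments are silently
    # swallowed, so a typo like `shufle=True` produces no error at all -
    # the caller just gets the default behaviour without knowing it.
    shuffle = kwargs.pop("shuffle", False)
    return {"path": path, "shuffle": shuffle}


def build_dataset_strict(path, *, shuffle=False):
    # Strict style: every keyword is named in the signature, so calling
    # with an unused or misspelled argument raises TypeError immediately.
    return {"path": path, "shuffle": shuffle}


# The lenient version quietly does the wrong thing:
result = build_dataset_lenient("data.csv", shufle=True)
print(result)  # shuffle is still False - the typo was ignored

# The strict version fails loudly at the call site:
try:
    build_dataset_strict("data.csv", shufle=True)
except TypeError as exc:
    print(f"caught: {exc}")
```

If a lenient wrapper must exist, one option is to `pop()` each known key and then raise if anything is left in `kwargs`, which restores the "unused arguments are an error" property while keeping the flexible signature.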

The general lesson is that "magic" interfaces which try to 'do what I mean' are nice to work with at the top-level, but it's a lot easier to reason about composing primitives if they're all super-strict.

Another example: `hypothesis write numpy.matmul` produces code against a very strict (and composable) runtime API, but you probably don't want to look at how.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Covid 4/15: Are We Seriously Doing This Again · 2021-04-16T05:35:06.623Z · LW · GW

I don't think we're actually disagreeing much about outcomes (which I agree have been great!), or even that Australia has competently executed at least enough of the important things to get right. Of the five items you mention I'd include borders, quarantine, snap-lockdowns, and testing as part of the local elimination policy; we haven't done them perfectly but we have done them well enough.

I understand "using good epistemics to make decisions" to require that your decisions should be made based on a coherent understanding and cost-benefit analysis of the situation, even if both might change over time. "Merely" getting good outcomes doesn't count!

For example, we still encourage pointless handwashing and distancing while iffy on masks or ventilation - and because we got to zero transmission in other ways, that's OK. Similarly, it's true that Australia's slow vaccine rollout hasn't cost many lives so far and I hope that neither winter nor variants change that. The cost-in-expectation of an unlikely outbreak should still drive faster vaccination efforts IMO, especially when e.g. increasing local production is not zero-sum.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Are there opportunities for small investors unavailable to big ones? · 2021-04-16T02:41:52.965Z · LW · GW

Some factors which I think are both important and missing from your model:

  • Risk. You probably cannot convince me that, in a liquid market, your outperforming trading strategy does not round to "picking up pennies in front of a steamroller".
  • Availability of capital. If you have to lock up $10K for a year per 20-hours-of-research deal, you're probably more constrained by money than time.
  • Opportunity costs. If you have sufficient quant and business skills to make money trading, you can probably make more working somewhere and investing the proceeds in index funds.
  • Transaction costs, taxes, etc.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Are there opportunities for small investors unavailable to big ones? · 2021-04-16T02:18:16.701Z · LW · GW

On the other hand, there's some suggestive evidence that seed-stage returns follow a power-law distribution - implying that the best strategy is to filter out the obvious duds and then invest in literally everything else.
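A toy simulation of why a heavy enough tail implies "invest in everything": the aggregate return is dominated by a handful of outliers, so any filter tight enough to exclude them costs more than it saves. (The Pareto shape `alpha = 1.2` is an illustrative assumption, not the empirically estimated exponent.)

```python
import random

random.seed(0)

# Model each seed deal's return multiple as a Pareto draw. For alpha
# near 1 the distribution's tail is heavy enough that the sample total
# is driven by the few largest outcomes.
alpha = 1.2  # assumed shape parameter, for illustration only
n_deals = 10_000
returns = sorted(
    (random.paretovariate(alpha) for _ in range(n_deals)), reverse=True
)

# What fraction of all returns comes from the best 1% of deals?
top_share = sum(returns[:n_deals // 100]) / sum(returns)
print(f"top 1% of deals capture {top_share:.0%} of total returns")
```

With a tail this heavy, the top 1% of draws typically capture a large share of the total, which is the whole argument: missing one of those deals swamps the savings from skipping many mediocre ones.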

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Covid 4/15: Are We Seriously Doing This Again · 2021-04-16T02:10:34.839Z · LW · GW

Worldwide demand should be easily big enough to justify [subcontracting manufacturing]

If it was legal to sell vaccines for the market price, or anywhere near their actual value, of course. Thanks to monopsony purchasers (i.e. irrationally cheap governments), we instead see massive underproduction.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Covid 4/15: Are We Seriously Doing This Again · 2021-04-16T02:04:25.422Z · LW · GW

The hypothesis that Australia succeeded because it was using good epistemics to make decisions is not holding up well in the endgame.

From Australia, this hypothesis was only ever plausible if you looked at high-level outcomes rather than the actual decision-making.

We got basically one thing right: pursue local elimination. Without going into details, this only happened because the Victorian state government unilaterally held their hard lockdown all the way back to nothing-for-two-weeks, ending our winter second wave. Doing so created both a status quo and (having paid higher costs than faster action would have) a very strong constituency for elimination.

Victoria remains the only area with non-negligible masking. Nationwide, we continue to make expensive and obvious mistakes about handwashing, distancing, and quarantine, and appear to be bungling our vaccine rollout.

Zero active cases and zero local transmission covers a multitude of sins. I attribute the result as much to good luck as epistemic skill, and am very glad that COVID is not such a hard problem that we can't afford mistakes.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Raemon's Shortform · 2021-04-15T03:54:32.302Z · LW · GW

Important for what? Best for what?

In a given (sub)field, the highest-cited papers tend to be those which introduced or substantially improved on a key idea/result/concept; so they're important in that sense. If you're looking for the best introduction, though, that will often be a textbook, and there might be important caveats or limitations in a later and less-cited paper.

I've also had a problem where a few highly cited papers propose $approach, many papers apply or purport to extend it, and then eventually someone does a well-powered study checking whether $approach actually works. Either way that's an important paper, but such papers tend to be under-cited, either because the results are "obvious" (and usually a small effect) or because the field of $approach studies shrinks considerably.

It's an extremely goodhartable metric but perhaps the best we have for papers; for authors I tend to ask "does this person have good taste in problems (important+tractable), and are their methods appropriate to the task?".

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on What weird beliefs do you have? · 2021-04-15T03:42:43.071Z · LW · GW

Excellent question!

I'm not personally concerned about what Bostrom called 'risks of irrationality and error' or 'risks to valuable states and activities'. There are costs of rationality though, where knowing just a little can expose you to harms that you're not yet equipped to handle (classic examples: scope sensitivity, demandingness, death). This rounds to common sense - 'be sensitive about when/whether/how to discuss upsetting topics'.

Mostly though, I'm inclined to keep quiet about data, idea, and attention hazards where my teenage self might have wanted to share interesting ideas like the antibiotic-gradient trick, at least without some benefit beyond having a fun discussion. Threat models for election security, yes - there's a clear public interest in everyone understanding the tradeoffs involved in paper vs electronic ballots, or remote vs polling-place voting. Ideas for asymmetric warfare, not so much.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on What weird beliefs do you have? · 2021-04-14T11:42:14.543Z · LW · GW

At a more concrete level, I've spent the last ~14 months holding strong and unusual views on most pandemic-related matters, though I don't think any of them would raise eyebrows on LessWrong. A minority are probably now mainstream, the others - unfortunately - remain weird.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on What weird beliefs do you have? · 2021-04-14T11:39:46.673Z · LW · GW

Taking information hazards seriously.

This can range from the benign (is it a good idea to post very weird beliefs here?) to the more worrying (plausible attacks on $insert_important_system_here), and upwards.