In Defence of Spock 2021-04-21T21:34:04.206Z
Zac Hatfield Dodds's Shortform 2021-03-09T02:39:33.481Z


Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Covid 9/9: Passing the Peak · 2021-09-10T13:42:02.687Z · LW · GW

The other thing this misses about Australia is how catastrophically incompetent the federal government has been. If the Federal government had actually bought some bleeping vaccines when offered them by Pfizer in June 2020 (i.e. the deal that Israel took), we'd look awesome even with all their other stuffups.

If you want to get vaccinated, there's currently a two-month waiting list

Canberra, where I live, went more than a year between covid cases. Yes, international travel was difficult (more than necessary for public health); but having literally zero covid was pretty great. Most state governments were fine; it just only takes one to stuff up the NPIs and a single federal government to stuff up vaccines.

In April, I said

From Australia, the hypothesis [that Australia succeeded because it was using good epistemics] was only ever plausible if you looked at high-level outcomes rather than the actual decision-making. ... We got basically one thing right: pursue local elimination. This only happened because the Victorian state government unilaterally held their hard lockdown all the way back to nothing-for-two-weeks. ... we continue to make expensive and obvious mistakes about handwashing, distancing, quarantine, and appear to be bungling our vaccine rollout. Zero active cases and zero local transmission covers a multitude of sins.

And in July: "I am so tired of this. Please don't attribute Australia's success to consistently good epistemology; we just did enough right early to locally eliminate it at higher than necessary cost. We got lucky with the virus, we got some lucky policies, and I can only hope our luck hasn't yet run out."

So yeah; Australia is not systematically competent - we just got a combination of patchy competence and luck which worked really well for a while, because zero cases and controlled travel is a pretty stable equilibrium (my preferred one, even). Learn from our example that elimination is possible and practical... and perhaps also that vaccines would be really helpful.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on adamShimi's Shortform · 2021-09-10T08:36:31.600Z · LW · GW

You will also enjoy Stargate Physics 101.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on What could small scale disasters from AI look like? · 2021-09-01T04:41:24.282Z · LW · GW

Such scenarios are at best smoke, not fire alarms.

When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.

What I’m saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable. ...

There is never going to be a time before the end when you can look around nervously, and see that it is now clearly common knowledge that you can talk about AGI being imminent, and take action and exit the building in an orderly fashion, without fear of looking stupid or frightened.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on An Apprentice Experiment in Python Programming, Part 4 · 2021-08-31T13:07:49.701Z · LW · GW

Nice! I always enjoy reading these logs :-)

Python objects are scattered all over the place [on the heap] ... performance degradation is the price for Python's simple memory model. ... NumPy is optimized for making use of blocks of contiguous memory.

NumPy also has the enormous advantage of implementing all the numeric operators in C (or Fortran, or occasionally assembly). If you want hardware accelerators, interop is a promising work in progress.

You can substantially reduce memory fragmentation and GC pressure with only the standard-library array module and the memoryview builtin type, if your data suits that pattern. This is particularly useful for implementing zero-copy algorithms for IO processing: as soon as the buffer is in memory anywhere, you just take pointers to slices rather than creating new objects.
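
A minimal sketch of that pattern (my own illustration): a memoryview over an array shares the underlying buffer, so slicing it hands out pointers into the buffer rather than allocating new objects.

```python
from array import array

# One contiguous buffer of doubles, rather than many scattered PyFloat objects:
buf = array("d", [0.0] * 1_000)
view = memoryview(buf)

chunk = view[100:200]   # no copy: chunk aliases buf[100:200]
chunk[0] = 3.14         # writes through to the original buffer
```

Because `chunk` aliases `buf`, mutating it is visible through the original array; this is exactly the zero-copy behaviour you want for IO pipelines.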

JIT implementations of Python (PyPy, Pyjion, etc) are also usually pretty good at reducing the perf impact of Python's memory model, at least if your program is reasonably sensible about what and when it allocates.

With progn and :=, it's possible to combine multiple statements into one, so effectively create a lambda with multiple statements.

Sounds like you're partway to updating for Python 3!
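
For anyone curious, a minimal sketch of the `:=` trick (Python 3.8+, my example): a lambda can't contain statements, but an assignment expression inside a tuple lets it name and reuse an intermediate value, which covers much of what a multi-statement body would do.

```python
# Effectively "y = x + 1; return y * y", all in one expression:
square_of_next = lambda x: (y := x + 1, y * y)[-1]
```

The tuple is evaluated left to right, so `y` is bound before `y * y` is computed; `[-1]` discards the intermediate value.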

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Covid 8/26: Full Vaccine Approval · 2021-08-27T01:39:34.959Z · LW · GW

So, how do you think Australia did, all things considered?

I think it's worth considering the federal and state governments separately.

  • The federal government has been incredibly incompetent on fundamental issues like vaccination, and has refused to do anything about border quarantine.
  • State governments - the ones responsible for NPI policy like lockdowns - were far slower than optimal last year, but with the critical exception of NSW doing pretty well this year.
  • Even with delta, every state and territory has seen an outbreak and squashed it within three weeks by fast, early action. This is far cheaper than an ongoing epidemic.

The goal of zero transmission was and is clearly the best equilibrium to be in; we knew this early last year. Lockdowns suck but they work, and a fast+hard lockdown sucks a lot less than an epidemic. We went more than a year in Canberra between cases! The system works, and to be blunt if it was applied consistently COVID could have been eradicated by mid-2020... not that it could have been applied, but still.

Just keep $R_t < 1$!

Overall grade: C-, for an approach which was sufficient until betrayed by the combination of one particularly incompetent state and a federal government which turned down early access to Pfizer vaccines. D+ for policy, B for outcomes - we were adequate enough that good luck could make a difference.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Better Password Peppering · 2021-08-24T14:59:59.967Z · LW · GW

The core problem is that using a decent password hashing algorithm with non-secret per-service per-user salts already gives you really, really good security.

Adding secret keys (salts, whatever) to your security assumptions is the kind of thing that gives cryptographers an allergic reaction - and I think you're seriously underestimating the difficulty of implementing the system too, for very little gain. Better to spend your time protecting users from social engineering attacks, or insider compromise, or any of the myriad other attacks which are far more likely to defeat the sensible conventional approach.

Comment by zac-hatfield-dodds on [deleted post] 2021-08-23T08:19:16.844Z

What is the terminology that would stop me from being confused here?

I think your confusion might be from treating "human-level AI" as if it were a natural category, i.e. labelling a cluster in thingspace.

Instead, it's a speculative - and some would say misleading - term used to refer to a system with "human level" capabilities (better than the worst human? A particular human? The best human at each task? etc etc). So I think that the answers to all your more specific questions are actually hiding inside the very vague definition of 'HL-AI'.

Indeed if you say that "AI-Fred" is exactly like Fred then, by hypothesis, there's not much impact - but this tells you nothing about the real world; it's just a tautology about your definition of "human level". (it also strikes me as staggeringly unlikely; ML or other computer systems are already superhuman at many interesting tasks)

What is a better/more useful way of asking this question?

Be very specific and concrete about the behaviour and capabilities of the system(s) you want to reason about: what are their inputs, action-spaces, computational structures? Who trains or builds them, for what purpose? Etc.

I wouldn't expect any useful answers, but we can at least aspire to ask well-defined questions.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Is top-down veganism unethical? · 2021-08-23T00:52:25.380Z · LW · GW

Also related, Team Tyler's Van mentions an ongoing project to breed cows that don't have 『Qualia』.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on What are some beautiful, rationalist sounds? · 2021-08-22T04:16:27.819Z · LW · GW

Maths songs: Finite Simple Group of Order Two, Derive Me Maybe, the Mathematical Pirate's Song (and everything else from that campaign was great)

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Misguided? Callous? Just Plain Stupid? · 2021-08-20T08:15:22.647Z · LW · GW

For the sake of readability, I have referred to misguidedness, callousness, and stupidity as type one, two, and three traits respectively

For what it's worth I find descriptive terms much easier to read than type one, two, etc. In statistics I even dislike "false positive/negative", and prefer the more descriptive "false/missed alarm/label/...".

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on ryan_b's Shortform · 2021-08-20T04:43:35.297Z · LW · GW

This is how "artisanal" small research labs work, but larger research groups usually have more specialisation - especially in industry, which happens to have almost all the large-scale research groups. People might specialise in statistics and data analysis, experimental design, lab work, research software engineering, etc. Bell Labs did not operate on the PI model, for example; nor does DARPA or Google or ...

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on An Apprentice Experiment in Python Programming, Part 3 · 2021-08-17T01:45:58.019Z · LW · GW

The skills of 'working on an existing project' I mentioned above are not usually covered as part of a CS education, but they're complementary to most things you might want to do once you have one. I also agree entirely with gjm; you'll learn a lot any time you get hands-on practice with close feedback from a mentor.

For OSS libraries, those pytest issues would be a great start. Scientific computing varies substantially by domain - largely with the associated data structures, being some combination of large arrays, sequences, or graphs. Tools like Numpy, Scipy, Dask, Pandas, or Xarray are close to universal though, and their developers are also very friendly.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Why must plausibilities be represented using real numbers? · 2021-08-16T22:54:18.487Z · LW · GW

(See amplitude if you want to look at the quantum generalisation of probability from $[0, 1]$ to $\mathbb{C}$)
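
A tiny numeric illustration of the generalisation (my own, not from the linked article): amplitudes are complex numbers, and the Born rule recovers ordinary probabilities as squared magnitudes.

```python
import math

# Equal superposition of two outcomes, with a relative phase of i:
amplitudes = [1 / math.sqrt(2), 1j / math.sqrt(2)]

# Born rule: probability = |amplitude| ** 2
probabilities = [abs(a) ** 2 for a in amplitudes]
```

The phase is invisible to the probabilities of a single measurement, but matters when amplitudes interfere.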

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on An Apprentice Experiment in Python Programming, Part 3 · 2021-08-16T08:23:21.589Z · LW · GW

Leaning in to current confusions on e.g. decorators makes sense :-)

To ask a slightly different question - what kind of thing do you want to do with Python? It's a large and flexible language, and you'd be best served focussing on somewhat different topics depending on whether you want to use Python for {scientific computing, executable pseudocode, web dev, async stuff, OSS libraries, ML research, desktop apps, etc}.

I'll also make the usual LW suggestion of learning from a good textbook - Fluent Python is the usual intermediate-to-advanced suggestion. After that I learned mostly by following, and contributing to, various open source projects - the open logs and design documents are an amazing resource, as is feedback from library maintainers.

For open-source contributions, you should expect most of the learning curve for your first few patches to be about the contribution process, culture, and tools, and just navigating a large and unfamiliar codebase. These are very useful skills though! If you need someone to help get you unstuck, I'm on the Pytest core team and would be happy to help you (or another LWer) with #3426 or #8820 if you're interested.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on An Apprentice Experiment in Python Programming, Part 3 · 2021-08-16T05:58:12.275Z · LW · GW

I appreciate reading these logs, but I'm also a little confused about what you're aiming to learn!

I personally work with decorators, higher-order functions, and complicated introspection all the time... but that's because I'm building testing tools like Hypothesis (and to a lesser extent Pytest). If that's the kind of project you want to work on I'd be happy to point you at some nice issues, but it's definitely a niche.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Chasing Infinities · 2021-08-16T02:01:41.360Z · LW · GW

I object to the notion of a pure rate of social time preference, on the basis that it's epiphenomenal. People and organisations do in fact have time preferences, but IMO they are far better explained by a combination of factors such as:

  • Opportunity costs, especially compared to compounding returns on the earlier option
  • Probability of payoff decreasing over time for a wide variety of reasons
  • Preferences that are simply incoherent, i.e. violating VNM-rationality and subject to Dutch-booking

I have never seen an example of pure-time-preference - outside of a thought experiment - which couldn't more naturally be explained in terms of impure time preference. For example:

  • I would prefer a more severe disease (much) later, if I anticipate only a small chance of living that long
  • I would prefer less money now to more later, if I don't trust you to deliver later
  • I would prefer less money now to more later, if I can make more by investing the earlier payoff
  • I would choose ten lives today over one million lives in 500 years because I don't believe the latter claim is credible

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on A Qualitative and Intuitive Explanation of Expected Value · 2021-08-10T10:00:34.935Z · LW · GW

This is why the currency of Sophia switched from gold to benthamite - instead of dealing with tricky nonlinearities, you just transact in coins made of pure utility.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on What are some beautiful, rationalist sounds? · 2021-08-06T03:47:58.948Z · LW · GW

Sentries is dedicated to a hypothetical asteroid watch:

Humans are hotheads, who break the rules...
Humans are hotheads, but not quite fools—
Therefore we fly, keeping an eye
Turned to the depths of the borderless sky.

Some of us people, the rest machines —
Sensors, computers, and read-out screens —
Always aware, with infinite care,
That we're the first warning if anything's there

(per Toby Ord it's probably better not to build tech that can redirect large impactors, but looking for them is a good idea)

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on What are some beautiful, rationalist sounds? · 2021-08-06T03:13:53.953Z · LW · GW

Tim Blais aka A Capella Science has a long list of science songs, which are musically as well as scientifically lovely.

Emergence of complex life from fundamental physics? Listen to Molecular Shape of You then Nanobot then Evo-Devo. CMB? Hamiltonians? There's a song for each, and several for gravity.

My only 'complaint' is that I remember these tracks better than the originals, which complicates singalongs.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Toon Alfrink's sketchpad · 2021-08-06T02:48:35.708Z · LW · GW

It sounds like your argument would also favor wireheading, which I think the community mostly rejects.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on What made the UK COVID-19 case count drop? · 2021-08-02T13:30:18.006Z · LW · GW

Based on this article, I'd guess mostly reduced testing following "freedom day":

So, what could be behind the drop in new infections, and is it too early to be celebrating? The Office of National Statistics (ONS) weekly infection survey ... showed the estimated prevalence of infections in England had actually risen from 1 in 75 people to 1 in 65 in the week to July 24.

The ONS survey aims to estimate infection numbers in the community beyond people who have been tested, and gives an estimate of COVID-19's prevalence that is unaffected by fluctuations in people putting their hands up to be tested.

The good news in this is that with the ~88% vaccination rate reducing the death rate in vulnerable and elderly populations, there won't be anywhere near as many casualties as a 'let it rip' approach to herd immunity would have seen before. And at some point the UK will run out of unvaccinated uninfected people; hopefully before sharing another variant with the rest of us -_-.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Destroying Insecurity and Boosting Confidence Through Your Interests and Values · 2021-08-01T14:40:43.652Z · LW · GW

Reminds me of Being the (Pareto) Best in the World.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on An Apprentice Experiment in Python Programming, Part 2 · 2021-07-29T08:48:01.721Z · LW · GW

(using a lambda as a decorator requires Python 3.9 or later, for anyone wondering what's going on here)
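
A sketch of what that looks like (my example; PEP 614 relaxed the decorator grammar in 3.9, so a bare lambda parses there but is a SyntaxError on 3.8):

```python
# The lambda receives the decorated function and returns a wrapped version:
@lambda f: (lambda *args, **kwargs: f(*args, **kwargs).upper())
def greet(name):
    return f"hi {name}"
```

Calling `greet("zac")` now routes through the wrapper and returns "HI ZAC".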

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Delta Strain: Fact Dump and Some Policy Takeaways · 2021-07-29T02:43:00.768Z · LW · GW

As far as I know none of our leaks have been by releasing an infectious person after a negative test result.

It's possible for PCR tests to return negative for a very early (low viral load) infection though; that's why for high-risk travellers we do PCR tests on days -3, 1, 5, 11, and 14 of the quarantine period. For low-risk settings, i.e. contact tracing, you only need to isolate until you get a negative PCR test result.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Delta Strain: Fact Dump and Some Policy Takeaways · 2021-07-28T06:21:58.045Z · LW · GW

I think, until recently, basically no infectious cases made it through [Australia and New Zealand's] 2-week quarantines.

In Australia, hotel quarantine has caused one outbreak per 204 infected travellers. Purpose-built facilities are far better, but we only have one (Howard Springs, near Darwin) and the federal government has to date refused to build any more.

Our current Delta outbreaks are traceable to a limo driver who was not - nor required to be - vaccinated or even masked while transferring travellers from their flight to hotel quarantine.

The main source of our success has been massive cuts to the number of travellers we allow in, and that has its own obvious problems...

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Sherrinford's Shortform · 2021-07-27T23:55:19.998Z · LW · GW

Kids, location, finances, and health are all extraordinarily high-leverage to think about - at least if you act on your plans.

Personally I'd start with personal finance, mostly because it should be pretty quick and simple to sort out (not always easy to stick to, but simple). The personalfinance reddit has good flowcharts to follow, and I wrote a list of investing resources here if you want more detail than "buy index funds and get on with the rest of your life".

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on garbageactual's Shortform · 2021-07-27T23:45:26.609Z · LW · GW

Or sufficiently analysed magic.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Summary and Notes from 'The Signal and the Noise' · 2021-07-19T06:32:35.780Z · LW · GW

Forecasters deliberately overstate the probability of rain, following the apparent user preferences. Most people are poorly calibrated to the point of only explicitly noticing "rained without prediction", and the cost asymmetry points in the same direction.

Making things more complicated is that in many cities it can be raining in one suburb and dry in another, and accurately communicating such spatial heterogeneity is almost as difficult as forecasting it.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on [AN #156]: The scaling hypothesis: a plan for building AGI · 2021-07-17T03:30:01.587Z · LW · GW

You might switch from building 'career capital' and useful skills to working directly on prosaic alignment, if you now consider it plausible that "attention is all you need for AGI".

Before OpenAI's various models, prosaic alignment looked more like an important test run / field-building exercise so we'd be well placed to shape the next AI/ML paradigm around something like MIRI's Agent Foundations work. Now it looks like prosaic alignment might be the only kind we get, and the deadline might be very early indeed.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on MikkW's Shortform · 2021-07-14T06:35:44.782Z · LW · GW

Ah, that position makes a lot of sense. Here's why I'm still in boring market-cap-indices rather than high-growth tech companies:

  • I think public equity markets are weakly inexploitable - i.e. I expect tech stocks to outperform but not that the expected value is much larger than a diversified index
  • Incumbents often fail to capture the value of new trends, especially in tech. The sector can strongly outperform without current companies doing particularly well.
  • Boring considerations about investment size, transaction fees, value of my time to stay on top of active trading, etc.
  • Diversification. Mostly that as a CS PhD my future income is already pretty closely related to tech performance; with a dash of the standard arguments for passive indices.

And then I take my asymmetric bets elsewhere, e.g. starting HypoFuzz (business plan).

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on MikkW's Shortform · 2021-07-13T23:23:33.113Z · LW · GW

What specifically do you think has a 26% expected ARR, while also being low-risk or diversified enough to hold 90% of your investable wealth? That's a much more aggressive allocation to e.g. the entire crypto-and-adjacent ecosystem than I'm comfortable with.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on How much chess engine progress is about adapting to bigger computers? · 2021-07-09T05:37:50.203Z · LW · GW

A Time-Leap Challenge for SAT Solving seems related:

We compare the impact of hardware advancement and algorithm advancement for SAT-solving over the last two decades. In particular, we compare 20-year-old SAT-solvers on new computer hardware with modern SAT-solvers on 20-year-old hardware. Our findings show that the progress on the algorithmic side has at least as much impact as the progress on the hardware side.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Covid 7/8: Delta Takes Over · 2021-07-09T02:04:27.999Z · LW · GW

Our outcomes have been mostly good.

My point is precisely that you should not infer from this that our policies have been mostly good; instead they have been barely adequate to the task. Fortunately for Australia, if you maintain zero spread and crack down hard and early (on single-digit cases), the other details really don't matter so much. What we're seeing now is what happens when you don't crack down so early...

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Looking for Collaborators for an AGI Research Project · 2021-07-09T02:00:04.823Z · LW · GW

How do you intend to avoid creating very many conscious and suffering people?

(and have you read Crystal Nights?)

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Covid 7/8: Delta Takes Over · 2021-07-08T14:59:01.197Z · LW · GW

In Australia:

  • the Delta outbreak in Sydney continues to grow (slowly) despite lockdown; state government contemplates giving up after two weeks of half measures
  • other states and general outrage - especially from Victoria, which eliminated our second wave with a four-month lockdown - forced a retraction pretty quickly. (FWIW Victoria waited much longer than it should have in the second wave; subsequently we've been fine with 3-10 day lockdowns when cases are detected. It's just the NSW government peddling the usual rubbish about business vs health trade-offs)
  • Prime Minister declares that meeting vaccination targets wouldn't have helped avoid this outbreak. Personally I think having 50%+ instead of 10% of the population vaccinated would in fact reduce $R_t$, and that vaccinating the driver who passed it out of quarantine might also have been sufficient. Or even mandating that drivers transporting people to quarantine should wear a mask!

I am so tired of this. Please don't attribute Australia's success to consistently good epistemology; we just did enough right early to locally eliminate it at higher than necessary cost. We got lucky with the virus, we got some lucky policies, and I can only hope our luck hasn't yet run out.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on The Apprentice Thread · 2021-07-05T09:13:01.022Z · LW · GW

To get feedback on your existing code, try using an autoformatter, a linter, and a type-checker (I can recommend specifics for Python, but not Ruby). For more general feedback, I found that contributing bugfixes and later features to open-source projects that I already used taught me an enormous amount; if you make an effort to respect maintainers and their time they're almost always amazingly helpful and knowledgeable people.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on The Apprentice Thread · 2021-07-05T09:08:40.173Z · LW · GW

To understand investing, at a high level, read

  • If you Can: how millennials can get rich slowly (pdf, 16 pages). A complete and simple introduction to passive-index investing, i.e. the slow but reliable path.
  • Inadequate Equilibria (online or printed). Distinguishing "efficient" from "adequate" from "exploitable" is essential if you want to come out ahead when investing.
  • Some books by Nassim Taleb; I'd suggest Fooled by Randomness then Antifragile with the technical Statistical Consequences of Fat Tails as a companion. With some work they're educational as well as entertaining.
  • Dip into Matt Levine's Money Stuff newsletter, at least enough to get a sense of what happens in large-scale finance.
  • Stripe's Atlas guides are great resources for small or new internet businesses. See also Patrick McKenzie's microconf talks and website.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Covid 7/1: Don’t Panic · 2021-07-02T15:09:06.060Z · LW · GW

I think we can lay to rest the hypothesis that Australia did better than other countries because it was more sane and has wiser systems for making decisions. Australia did better for other reasons, including being an island, that led to a different equilibrium. Now that we are in the vaccination phase of the pandemic, Australia is utterly failing.

As I noted on the April 15 post,

From Australia, this hypothesis [Australia succeeded because it was using good epistemics to make decisions] was only ever plausible if you looked at high-level outcomes rather than the actual decision-making. We got basically one thing right: pursue local elimination. ... Nationwide, we continue to make expensive and obvious mistakes about handwashing, distancing, quarantine, and appear to be bungling our vaccine rollout.

Zero active cases and zero local transmission covers a multitude of sins. I attribute the result as much to good luck as epistemic skill, and am very glad that COVID is not such a hard problem that we can't afford mistakes.

Unfortunately it turns out that our national vaccine rollout has been comprehensively bungled, including turning down Pfizer when they approached us in July last year. Hopefully voters can correctly attribute responsibility for our latest round of lockdowns to the current federal government.

For all that though, and while the Delta strain spreads considerably faster, my impression is that we still largely have it under control - just at significantly greater cost via track/trace/lockdown rather than vaccinating everyone and moving on. I'm hoping I'll be able to travel next year, now...

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on What will be the aftermath of the US intelligence lab leak report? · 2021-06-26T23:32:36.693Z · LW · GW

To capture the burdensome detail of this scenario, can you unpack your probability estimates for each step - conditional on the previous - and any reasoning?

Using made-up numbers for an example:

  1. The report will be issued by the end of August (95%) [usually I wouldn't worry much about this, but it's crucial for trading decisions!]
  2. The report will present a definite conclusion (40%) [I feel this is a high estimate, but expresses my ignorance]
  3. The conclusion will be "lab leak" (80%) [leaning heavily on the conditional on earlier steps, hard to honestly and definitely rule out]
  4. The virology community will be seen to have misled the public (33%) [when in this whole pandemic has epistemic vice been publicly recognised or even understood? But blame games are easier]
  5. Fauci will (be forced to) resign (50%)
  6. A senate committee will question Fauci about gain-of-function research (75%) [maybe 30% even without all the conjunctions]
  7. ...and subpoena Twitter and Facebook re censorship of the lab leak hypothesis (20%) [another conjunction, and kinda off-topic from virology]
  8. Causing a crisis of faith in the medical establishment which will drive structural reform (1%) [specific reforms to GoF research seem likely, but I'd be (happily) shocked by reforms that address the underlying problems]

And independently,

  1. A new variant appears, in 2021 (80%)
  2. which is best addressed by a third or additional or different vaccine dose (30%) [seems hard to approve, then produce and distribute at scale in the relevant time frame, but vaccines are great]

This is just way way way too many conjunctions. I get $0.95 \times 0.4 \times 0.8 \times 0.33 \times 0.5 \times 0.75 \times 0.2 \approx 0.0075$, a little less than one percent, without the structural reform clause or new variant - but I'd love to see your numbers.
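
The arithmetic, using the made-up per-step numbers above (steps 1-7, stopping before the structural-reform clause):

```python
# Conditional probabilities for steps 1-7 from the list above:
steps = [0.95, 0.40, 0.80, 0.33, 0.50, 0.75, 0.20]

p = 1.0
for s in steps:
    p *= s
# p comes out a little under one percent
```

Each extra conjunct multiplies the total down, which is why long conjunctive scenarios end up so improbable.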

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on [AN #152]: How we’ve overestimated few-shot learning capabilities · 2021-06-17T10:30:39.014Z · LW · GW

Testing with respect to learned models sounds great, and I expect there's lots of interesting GAN-like work to be done in online adversarial test generation.

IMO there are usefully testable safety invariants too, but mostly at the implementation level rather than system behaviour - for example "every number in this layer should always be finite". It's not the case that this implies safety, but a violation implies that the system is not behaving as expected and therefore may be unsafe.
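
As a sketch (my own wording, not any specific library's API), such an invariant is a one-liner to check at runtime:

```python
import math

def check_finite(layer_output):
    # Implementation-level safety invariant: a NaN or inf doesn't prove the
    # system is unsafe, but it does prove it isn't behaving as expected.
    if not all(math.isfinite(x) for x in layer_output):
        raise AssertionError("non-finite value in layer output")
    return layer_output
```

In a real ML stack you'd apply the same check to whole tensors after each layer, and treat a failure as a signal to halt and investigate.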

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on [AN #152]: How we’ve overestimated few-shot learning capabilities · 2021-06-16T23:10:28.671Z · LW · GW

High Impact Careers in Formal Verification: Artificial Intelligence

My research focuses on advanced testing and fuzzing tools, which are so much easier to use that people actually use them - e.g. in PyTorch, and I understand in DeepMind. If people seem interested I could write up a post on relevance to AI safety in a few weeks.

Core idea: even without proofs, writing out safety properties or other system invariants in code is valuable both (a) for deconfusion, and (b) because we can have a computer search for counterexamples using a variety of heuristics and feedbacks. At the current margin this tends to improve team productivity and shift ML culture towards valuing specifications, which may be a good thing for AI x-risk.
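
A toy sketch of that core idea (tools like Hypothesis do this with far smarter heuristics, feedback, and shrinking): state the invariant as code, then let the computer hunt for counterexamples.

```python
import random

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

# Property under test: sorting is idempotent.
# Naive random search for a counterexample:
random.seed(0)
counterexample = None
for _ in range(1_000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    if sorted(sorted(xs)) != sorted(xs):
        counterexample = xs
        break
```

Even when no counterexample turns up, writing the property down forces you to say precisely what "correct" means - the deconfusion benefit mentioned above.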

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on ML is now automating parts of chip R&D. How big a deal is this? · 2021-06-11T08:48:13.931Z · LW · GW

No, I think we mostly agree - I'd expect TPUs to be within say 4x of practically optimal for the things they do. The remaining ~1 OOM that I think is possible for non-novel tasks has more to do with specialisation, e.g. model-specific hardware design, and that definitely has an asymptote.

The interesting case is if we can get TPU-equivalent hardware days after designing a new architecture, instead of years after, because (IMO) 1,000x speedups over CPUs are plausible.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on ML is now automating parts of chip R&D. How big a deal is this? · 2021-06-10T14:16:25.024Z · LW · GW

Yes, that's a fair summary - though in "not hard ... if you design custom hardware" the second clause is doing a lot of work.

As to the magnitude of improvement, really good linear algebra libraries are ~1.5x faster than 'just' good ones, GPUs are a 5x-10x improvement on CPUs for deep learning, and TPUs 15x-30x over Google's previous CPU/GPU combination (this 2018 post is a good resource). So we've already seen 100x-400x improvement on ML workloads by moving naive CPU code to good but not hyper-specialised ASICs.

Truly application-specific hardware is a very wide reference class, but I think it's reasonable to expect equivalent speedups for future applications. If we're starting with something well-suited to existing accelerators like GPUs or TPUs, there's less room for improvement; on the other hand TPUs are designed to support a variety of network architectures and fully customised non-reprogrammable silicon can be 100x faster or more... it's just terribly impractical due to the costs and latency of design and production with current technology.

For example, with custom hardware you can do bubblesort in O(n) time, by adding a compare-and-swap unit between the memory for each element. Or with a 2D grid of these, you can pipeline your operations and sort lists in O(1) amortised time and O(n) latency! Matching the logical structure of your chip to the dataflow of your program is beyond the scope of this article (which is "just" physical structure), but also almost absurdly powerful.
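That single-row scheme is odd-even transposition sort; a quick software model of it (my own illustrative code, not from the article) shows why a linear chain of compare-and-swap units sorts in O(n) clock ticks rather than the O(n²) steps bubblesort takes serially:

```python
def odd_even_transposition_sort(values):
    """Software model of a hardware sorting chain: one compare-and-swap
    unit between each adjacent pair of memory cells. On each clock tick,
    all odd-indexed (then all even-indexed) pairs fire simultaneously,
    so n ticks suffice to sort n elements - O(n) time in hardware,
    at the cost of O(n) comparator units."""
    xs = list(values)
    n = len(xs)
    for tick in range(n):
        start = tick % 2  # alternate between the two banks of comparators
        # Each swap below touches disjoint pairs, so in hardware they
        # all happen in parallel within a single tick.
        for i in range(start, n - 1, 2):
            if xs[i] > xs[i + 1]:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0]))  # [0, 1, 2, 4, 5, 8]
```

The inner loop is the whole trick: because the pairs are disjoint, the "time" cost in hardware is the number of ticks, not the number of comparisons.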

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on ML is now automating parts of chip R&D. How big a deal is this? · 2021-06-10T11:53:09.974Z · LW · GW

Circuit design is the main bottleneck for use of field-programmable gate arrays. If fully-automated designs become good enough, we could see substantial gains from having optimising compilers output a gate layout rather than machine code for an xPU or specific accelerator. We already have some such compilers, and this looks like a meaningful step towards handling non-toy-scale problems with them.

The main change here wouldn't be so much training speed - we already have TPUs etc. to accelerate current workloads, and fabricating a new design as ASICs rather than FPGA layouts takes months-to-years at scale - but rather the latency with which we can try out custom hardware for novel ML paradigms such as transformers. What is to transformers as TPUs are to CNNs? Specifically for novel tasks, this could be a 10x-1000x speedup, and 2x-50x speedup for existing workloads... though I understand they're bottlenecked more on data movement between nodes than compute.

TLDR: a small step in a high-long-term-impact trend.

(Source: while I'm not a hardware specialist, I've worked with the PyMTL team at Cornell on verification and validation of their Python-to-Verilog-to-silicon hardware design tools, followed high-level developments in custom compute hardware for around a decade, and worked on peta-scale supercomputing for a few years.)

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Changing my life in 2021, halfway through · 2021-06-10T05:35:08.743Z · LW · GW

I know very little about good personal financial management other than that ideally revenue > expenses. If you found any source for learning about personal finance useful please post it.

For day-to-day personal finance, "disposable income > expenses" is sufficient - automate payments to long-term savings, rent, etc; and then spend the balance as you will. Some people get a lot of value out of detailed budgeting techniques or tools, but IMO that's mostly personal preference.

The best short introduction to personal finance for the long term is William J. Bernstein's If You Can: How Millennials Can Get Rich Slowly (pdf). It's only sixteen pages long, with recommended follow-up reading and actions for your second pass through.

Before considering any departure from the conventional wisdom of low-fee diversified index funds, you should also read Inadequate Equilibria and some of Taleb (I usually suggest Fooled by Randomness and Antifragile).

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-10T05:34:46.970Z · LW · GW

Not only that, there is hardly any other existential risks to be avoided by Mars colonization, either.

Let's use Toby Ord's categorisation - and ignore natural risks, since the background rate is low. Assuming a self-sustaining civilisation on Mars which could eventually resettle Earth after a disaster:

  • nuclear war - avoids accidental/fast escalation; unlikely to help in deliberate war
  • extreme climate change or environmental damage - avoids this risk entirely
  • engineered pandemics - strong mitigation
  • unaligned artificial intelligence - lol nope.
  • dystopian scenarios - unlikely to help

So Mars colonisation handles about half of these risks, and maybe 1/4 of the total magnitude of risks. It's a very expensive mitigation, but IMO still clearly worth doing even solely on X-risk grounds.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Maximizing Yield on US Dollar Pegged Coins · 2021-06-08T05:21:43.336Z · LW · GW

You are picking up pennies in front of a steamroller.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-08T05:17:46.033Z · LW · GW

He clearly cares about AI going well and has been willing to invest resources in increasing these odds in the past via OpenAI and then Neuralink.

Both of these examples betray an extremely naive understanding of AI risk.

  • OpenAI was intended to address AI-xrisk by making the superintelligence open source. This is, IMO, not a credible way to avoid someone - probably someone in a hurry - getting a decisive strategic advantage.
  • Neuralink... I just don't see any scenario where humans have much to contribute to superintelligence, or where "merging" is even a coherent idea, etc. I'm also unenthusiastic on technical grounds.
  • SpaceX. Moving to another planet does not save you from misaligned superintelligence. (Being told exactly this is, I hear, what led Musk to his involvement in OpenAI.)

So I'd attribute it to some combination of too many competing priorities, and simply misunderstanding the problem.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Restoration of energy homeostasis by SIRT6 extends healthy lifespan · 2021-06-05T23:03:47.942Z · LW · GW

Restoration of energy homeostasis by SIRT6 extends healthy lifespan... in mice.
(I know that's the title of the Nature paper, and kudos for stating "in mice" more prominently in the post body than the paper did, but IMO it's worth appending to the title.)

While most SIRT1 knockout mice die perinatally or within a few weeks of age, 129svJ-background SIRT6 knockout mice exhibit severe developmental defects but survive to about 4 weeks of age. Similarly, in humans and primates, mutations resulting in SIRT6 inactivation result in prenatal or perinatal lethality accompanied by severe developmental brain defects.

This is maybe interesting as a suggestion of which pathways to investigate for aging-related loss of cellular energy homeostasis, but it's not even plausible that it could be therapeutic in humans.

Comment by Zac Hatfield Dodds (zac-hatfield-dodds) on Donating Bitcoin to Crisis Zones - Is there a platform collating and verifying public key address for individuals in conflict zones which allows donors to send Bitcoin directly to them? · 2021-05-27T14:12:58.052Z · LW · GW

the idea of donating directly to people in need is very attractive

GiveDirectly are world-class experts in efficiently transferring money to people in extreme poverty who need it most, including validation and ensuring that it arrives in a useful (i.e. spendable) form. They accept Bitcoin, Ethereum, and even Dogecoin.