Posts

We have some evidence that masks work 2021-07-11T18:36:46.942Z
Self-help, hard and soft 2020-06-07T15:39:29.746Z
Automatic for the people 2018-07-08T14:23:08.787Z

Comments

Comment by technicalities on The Best Software For Every Need · 2021-09-18T09:55:15.382Z · LW · GW

Ooh, that's more intense than I realised. There might be plugins for yEd, but I don't know them. Maybe Tetrad?

Comment by technicalities on The Best Software For Every Need · 2021-09-16T07:59:14.059Z · LW · GW

I love Sketchviz for 10-second prototypes, but it requires the DOT language, and if you need very specific label placements it's a nightmare.

If you'd rather use a mouse, yEd is good. It exports to GraphML for version control.
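
If you want the 10-second-prototype workflow without hand-writing DOT, a minimal sketch (assuming the `graphviz` pip package; the pipeline nodes are made up) is to build the graph in Python and paste the emitted DOT into Sketchviz:

```python
# Sketch only: build a graph programmatically, then hand the DOT
# source to Sketchviz (or render locally if Graphviz is installed).
from graphviz import Digraph

g = Digraph("pipeline")
g.edge("raw data", "cleaning")
g.edge("cleaning", "model")
g.edge("model", "report")
print(g.source)  # DOT text, ready to paste into sketchviz.com
```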

Comment by technicalities on We have some evidence that masks work · 2021-09-02T07:34:48.027Z · LW · GW

GiveWell's fine!

Thanks again for caring about this.

Comment by technicalities on We have some evidence that masks work · 2021-07-31T09:47:24.207Z · LW · GW

Sounds fine. Just noticed they have a cloth and a surgical treatment. Take the mean?

Comment by technicalities on We have some evidence that masks work · 2021-07-30T15:52:49.972Z · LW · GW

Great! Comment below if you like this wording and this can be our bond:

"Gavin bets 100 USD to GiveWell, to Mike's 100 USD to GiveWell that the results of NCT04630054 will show a median reduction in Rt > 15.0 % for the effect of a whole population wearing masks [in whatever venues the trial chose to study]."

Comment by technicalities on Fire Law Incentives · 2021-07-22T20:16:37.079Z · LW · GW

This is an interesting counterpoint (though I'd like to see a model of CO2 cost vs thinning cost if you have one), and it's funny we happen to have such a qualified person on the thread. But your manner is needlessly condescending, and - around here - brandishing credentials as a club will seriously undermine you rather than buttress you.

Comment by technicalities on We have some evidence that masks work · 2021-07-19T16:12:44.205Z · LW · GW

Sadly Gelman didn't have time to destroy us. (He rarely does.)

Comment by technicalities on Critiques of the Agent Foundations agenda? · 2020-12-04T09:26:47.951Z · LW · GW

Stretching the definition of 'substantial' further:

Beth Zero was an ML researcher and Sneerclubber with some things to say. Her blog is down, unfortunately, but here's her collection of critical people. Here's a flavour of her thoughtful Bulverism. Her post on the uselessness of Solomonoff induction, and the dishonesty of pushing it as an answer outside of philosophy, was pretty good.

Sadly most of it is against foom, against short timelines, against longtermism, rather than anything specific about the Garrabrant or Demski or Kosoy programmes.

Comment by technicalities on Critiques of the Agent Foundations agenda? · 2020-12-04T09:15:10.334Z · LW · GW

Nostalgebraist (2019) sees it as equivalent to solving large parts of philosophy: a noble but quixotic quest. (He also argues against short timelines but that's tangential here.)

Here is what this ends up looking like: a quest to solve, once and for all, some of the most basic problems of existing and acting among others who are doing the same. Problems like “can anyone ever fully trust anyone else, or their future self, for that matter?” In the case where the “agents” are humans or human groups, problems of this sort have been wrestled with for a long time using terms like “coordination problems” and “Goodhart’s Law”; they constitute much of the subject matter of political philosophy, economics, and game theory, among other fields.

The quest for “AI Alignment” covers all this material and much more. It cannot invoke specifics of human nature (or non-human nature, for that matter); it aims to solve not just the tragedies of human coexistence, but the universal tragedies of coexistence which, as a sad fact of pure reason, would befall anything that thinks or acts in anything that looks like a world.

It sounds misleadingly provincial to call such a quest “AI Alignment.” The quest exists because (roughly) a superhuman being is the hardest thing we can imagine “aligning,” and thus we can only imagine doing so by solving “Alignment” as a whole, once and forever, for all creatures in all logically possible worlds. (I am exaggerating a little in places here, but there is something true in this picture that I have not seen adequately talked about, and I want to paint a clear picture of it.)

There is no doubt something beautiful – and much raw intellectual appeal – in the quest for Alignment. It includes, of necessity, some of the most mind-bending facets of both mathematics and philosophy, and what is more, it has an emotional poignancy and human resonance rarely so close to the surface in those rarefied subjects. I certainly have no quarrel with the choice to devote some resources, the life’s work of some people, to this grand Problem of Problems. One imagines an Alignment monastery, carrying on the work for centuries. I am not sure I would expect them to ever succeed, much less to succeed in some specified timeframe, but in some way it would make me glad, even proud, to know they were there.

I do not feel any pressure to solve Alignment, the great Problem of Problems – that highest peak whose very lowest reaches Hobbes and Nash and Kolmogorov and Gödel and all the rest barely began to climb in all their labors...

#scott wants an aligned AI to save us from moloch; i think i'm saying that alignment would already be a solution to moloch

Comment by technicalities on Rationalists from the UK -- what are your thoughts on Dominic Cummings? · 2020-11-22T20:03:31.659Z · LW · GW

Huh, works for me. Anyway, I'd rather not repeat his nasty slander, but "They're [just] a sex cult" is the gist.

Comment by technicalities on Rationalists from the UK -- what are your thoughts on Dominic Cummings? · 2020-11-22T18:32:09.853Z · LW · GW

https://books.google.co.uk/books?id=OLB1DwAAQBAJ&q=sex+cult&f=false

Comment by technicalities on Rationalists from the UK -- what are your thoughts on Dominic Cummings? · 2020-11-22T11:38:22.282Z · LW · GW

The received view of him is as just another heartless Conservative with an extra helping of tech fetishism and deceit. In reality he is an odd accelerationist just using the Tories (Ctrl+F "metastasising"). Despite him quoting Yudkowsky in that blog post, and it getting coverage in all the big papers, people don't really link him to LW or rationality, because those aren't legible, even in the country's chattering classes. We are fortunate that he is such a bad writer, so that no one reads his blog.

Here's a speculative rundown of things he probably got implemented (but we won't really know until 2050 declassification):

  • Doubling of the already large state R&D budget (by 2025). This would make the government the source of half of all UK R&D spending. An £800m ARPA-alike; £300m of STEM funding already out.

  • Pushed the COVID science committee into an earlier lockdown. Lockdown sceptics / herd immunity types likely to gain influence now.

  • An uncapped immigration path for scientists

  • Tutoring in state schools

  • Data-driven reform of the civil service is incomplete and probably abortive. His remaining crew are "misfits" with little influence. He associated data science, superforecasting, and evidence-based policy with racists and edgelords. (One of those is on record with a ridiculously negative view of LW.) The weirdo hiring scheme may make Whitehall hiring even more staid in the short run.

  • Something something bullying, norms, deception, centralisation of power. Whipping the Treasury probably not a good precedent.

  • His hypocrisy probably weakened lockdown norms. This also wasted a huge amount of Boris Johnson's political capital during a public health crisis; I don't know how to evaluate that.

Comment by technicalities on Model Depth as Panacea and Obfuscator · 2020-11-09T09:27:05.881Z · LW · GW

Great post. Do you have a sense of

  1. how much of tree success can be explained / replicated by interpretable models;
  2. whether a similar analysis would work for neural nets?

You suggest that trees work so well because they let you charge ahead when you've misspecified your model. But in the biomedical/social domains where ML is most often deployed, we are always misspecifying the model. Do you think your new GLM would offer similar idiotproofing? (A toy version of the comparison I have in mind is sketched below.)
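
Here's a minimal sketch of that comparison, with made-up simulated data (the scikit-learn model choices are mine, not the post's): fit a deliberately misspecified linear model and a tree ensemble on the same nonlinear signal, and compare held-out fit.

```python
# Minimal sketch (assumes numpy + scikit-learn; data and models are
# illustrative, not from the post). A linear model is misspecified
# for this nonlinear signal; the tree ensemble charges ahead anyway.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = np.sin(X[:, 0]) * X[:, 1] + (X[:, 2] > 0) + rng.normal(0, 0.3, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

glm = LinearRegression().fit(X_tr, y_tr)
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

print("linear R^2:", round(r2_score(y_te, glm.predict(X_te)), 2))
print("trees  R^2:", round(r2_score(y_te, gbm.predict(X_te)), 2))
```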

Comment by technicalities on [deleted post] 2020-10-03T09:07:35.226Z

Yeah, the definition of evidence you use (that results must single out only one hypothesis) is quite strong: what people call "crucial" evidence.

https://en.m.wikipedia.org/wiki/Experimentum_crucis
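
A toy Bayesian rendering of the distinction, with made-up numbers: ordinary evidence merely shifts the odds among hypotheses, while crucial evidence has likelihood ≈ 0 under all but one.

```python
# Toy illustration (numbers invented): posterior over three hypotheses.
def posterior(priors, likelihoods):
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [round(u / total, 3) for u in unnorm]

priors = [1/3, 1/3, 1/3]
print(posterior(priors, [0.9, 0.5, 0.3]))  # ordinary: [0.529, 0.294, 0.176]
print(posterior(priors, [0.9, 0.0, 0.0]))  # crucial:  [1.0, 0.0, 0.0]
```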

Comment by technicalities on Are there good ways to find expert reviews of popular science books? · 2020-06-09T15:44:06.544Z · LW · GW

I suspect there is no general way. ): Even the academic reviews tend to cherry-pick one or two flaws and gesture at the rest.

Partial solutions:

  1. Invest the time to follow the minority of Goodreads users who know their stuff. (Link is people I follow.)
  2. See if Stuart Ritchie has reviewed it for money.

Comment by technicalities on Most reliable news sources? · 2020-06-06T21:29:50.204Z · LW · GW

The Economist ($) for non-Western events and live macroeconomics. They generally foreground the most important thing that happened each week, wherever it happens to occur, and pack the gist into a two-page summary, "The World this Week". Their slant - pro-market, pro-democracy, pro-welfare, pro-rights - rarely gets in the way. The obituaries are often extremely moving.

https://www.economist.com/the-world-this-week/

Comment by technicalities on Conceptual engineering: the revolution in philosophy you've never heard of · 2020-06-03T09:27:09.039Z · LW · GW

Raised in the old guard, Chalmers doesn't understand...

This amused me, given that in the 90s he was considered an outsider and an upstart, coming round here with his cognitive science, shaking things up. ("'The Conscious Mind' is a stimulating, provocative and agenda-setting demolition-job on the ideology of scientific materialism. It is also an erudite, urbane and surprisingly readable plea for a non-reductive functionalist account of mind. It poses some formidable challenges to the tenets of mainstream materialism and its cognitivist offshoots.")

Not saying you're wrong about him in that lecture. Maybe he has socialised and hardened as he gained standing. A funny cycle, in that case.

Comment by technicalities on What are objects that have made your life better? · 2020-05-22T07:10:34.254Z · LW · GW

I did a full accounting, including a vague cost-benefit ranking:

https://www.gleech.org/stuff

Ignoring the free ones, which you should just go and get now, I think the best are:

  • Sweet Dreams Contoured sleep mask. Massively improved sleep quality, without having to alter the room, close the windows, whatever. 100:1.

  • Bowflex SelectTech dumbbells. A cheap gym membership is £150 a year; using these a couple of times a week for 2 years means I’ve saved hundreds of pounds and dozens of hours of commuting. They should last 15 years, so maybe 30:1 in total. (During the present lockdown, with gyms closed, the dumbbells get a temporary massive boost too.)

  • [Queal, a complete food powder] once a day. Saves money (if a lunch would otherwise be £4) and time and the delivery vector means I actually use the other powders I buy (spirulina, creatine, beta-alanine). Big discount for verifiable EAs. Also a handy automatic prepper store. 10:1.

  • Filco Majestouch 2 Tenkeyless mechanical keyboard. Assuming this decreases my RSI risk by 1%, it will have paid off 10 times over. But also in comfort and fun alone. 10:1.

Comment by technicalities on What are the relative speeds of AI capabilities and AI safety? · 2020-04-24T22:20:12.205Z · LW · GW

Some more ways:

If it turns out that capabilities and safety are not so dichotomous, and so robustness / interpretability / safe exploration / maybe even impact regularisation get solved by the capabilities lot.

If early success with a date-competitive performance-competitive safety programme (e.g. IDA) puts capabilities research onto a safe path.

Comment by technicalities on The Samurai and the Daimyo: A Useful Dynamic? · 2020-04-14T09:56:03.030Z · LW · GW

My name for this is Einsteins and Eddingtons.** Besides the vital testing and extension of the big ideas, the Eddington can also handle popularisation and, most important of all, the identification and nurturing of new Einsteins. This is one reason I think teaching in academia could be high-impact, despite all the notorious inefficiencies and moral mazes.


** Not totally fair to Eddington, since he was a pretty strong theorist himself.

Comment by technicalities on What is the point of College? Specifically is it worth investing time to gain knowledge? · 2020-03-24T08:27:14.269Z · LW · GW

Caplan puts the signalling share of the college income premium at 50%-80%, leaving 20%-50% for the human capital share; take the conservative 20%. So your sentence calling HC "mostly irrelevant" is technically true, but I wouldn't use the word 'irrelevant' for a feature explaining ~a fifth of the premium.

Comment by technicalities on What are the risks of having your genome publicly available? · 2020-02-12T10:09:30.294Z · LW · GW

Ooh I know this one

https://www.gleech.org/genes-out/

  • Health insurance
  • Adversarial dirt
  • Increased police attention, false positives
  • DNA framing
  • Releases info about my family members
  • Probabilistic homophobia (etc)
  • Mate choice

Plus a few I wouldn't worry about even if I lived 500 years (signature bioweapons, clones).

Comment by technicalities on Have epistemic conditions always been this bad? · 2020-01-25T09:12:33.434Z · LW · GW

The first thing that comes to mind is that there was more campus violence in the past (1960s-70s): e.g. Paris in May '68, the Zenkyoto riots, Students for a Democratic Society, internal Black Power murders, and so on.

When, at the 1966 SDS convention, women called for debate, they were showered with abuse and pelted with tomatoes.

(Though one of the most notable student movements, the Free Speech Movement in Berkeley, was actually about lifting institutional restrictions on discussion, specifically Vietnam War protest.)

I don't have data, but this fear was maybe a stronger chilling effect than that of being called names and disapproved of. Ideas for operationalising the culture:

  • How many admin restrictions on acceptable speech? How many expulsions for speech?
  • How many protests at lectures? How many successful no-platforms?
  • How many students left college after being cancelled?
  • Some measure of polarisation, of people self-sorting into their tribe's college.

Comment by technicalities on Epistea Summer Experiment · 2019-05-15T07:28:03.256Z · LW · GW

Will this clash with the Human-Aligned AI event?

Comment by technicalities on Automatic for the people · 2018-07-11T20:48:10.131Z · LW · GW

1 and 2. These are what I mean by capital distribution:

Prop up the liberal mixed economy: with a programme of mass employee stock ownership (mostly your 1); or by carving each full-time job into several part-time ones, plus heavy wage subsidies (your 2) (...); or get the government to buy every 18 year old a serious stock portfolio (your 2)

Basic income is also political in your sense (2), since it's a large government-driven change from the status quo (and, unless we wait for >> 50 years of growth before we implement it, it will likely involve tax hikes *).

One reason to favour building up private stocks (over UBI, a flow) is that this protects people (a bit) from later nativist or populist governments cutting off parts of the population. It still goes through politicians, but only once rather than annually. Not sure what protects against revolutionary appropriation (3) though.

(ESOPs are a relatively apolitical nudge, but I don't know how much of the problem they'd solve.)

* World growth this year was 3.1%. Compounding that fifty times multiplies GWP by about 4.6, so from today's ~$80tn we get roughly $370tn. It's hard to tell what the 'average world tax rate' is at present, but approximate it with public expenditure, around 25% of GWP. If we could only use 25% of $370tn, we'd be roughly where the above long extreme calculation put us: nowhere near enough.
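
As a sanity check, a few lines of Python reproduce the footnote's numbers (the ~$80tn base is back-solved from the quoted $370tn rather than a sourced figure):

```python
# Sketch of the footnote's arithmetic; the $80tn base GWP is an
# assumption implied by the quoted result, not stated in the original.
growth = 0.031                      # world growth rate
base_gwp = 80                       # trillion USD, approximate GWP
future_gwp = base_gwp * (1 + growth) ** 50
tax_share = 0.25                    # public expenditure share of GWP
print(f"GWP in 50 years: ~${future_gwp:.0f}tn")             # ~$368tn
print(f"Usable at 25%:   ~${tax_share * future_gwp:.0f}tn")  # ~$92tn
```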

Comment by technicalities on Automatic for the people · 2018-07-09T21:31:56.254Z · LW · GW

That's a fair inference; 'foolish' was an unhelpful thing to say.

(My actual reason for disdaining Macs is finding recently that my desired laptop - nothing crazy: i7, 32GB RAM, 1TB SSD - was unavailable for any price. And that a price-matched one had about 25% of those specs. Is that ideological?)

Comment by technicalities on Automatic for the people · 2018-07-09T18:49:48.646Z · LW · GW

Sorry if I was unclear; I'm not endorsing that logical extreme of the UBI, and I'm also unnerved by many of the policies I describe. ("total control of production by any entity is a terrible unnecessary risk"). The point of the calculation is to show that automation (or a similarly giant productivity gain) is necessary for a good future. Or are you saying it's so implausible that it's not worth thinking about?

I think the best argument against basic income is that it transfers vastly more economic power to politicians. That's what makes me take capital distribution seriously: put it right into people's hands, away from mob-shaped politicians and denial.

Comment by technicalities on Automatic for the people · 2018-07-09T18:31:24.743Z · LW · GW

Demographics: not sure!

1) Naively, population growth should delay automation by decreasing wages. Frey and Osborne don't account for this, let alone more realistic second-order effects (e.g. 'more people, more demand, thus feedback...'). But they don't commit to a real timeframe anyway.

2) Banally: "if economic growth matches or exceeds population growth, at least the downside will be bounded". But we're not going to get sensible macro predictions for a century out, so that ends that thought.

Even conditional on Frey and Osborne's dramatic scenario, I doubt there will be a crisis (in the sense of sudden violent unrest), since automation progress isn't that fast (e.g. it takes a given public company years, not days) and can often be stalled by regulation. Things like job sharing (n part-time workers instead of one full-time worker) could be stepped up very gradually too. (Or whatever the minimal change to the system that just averts an explosion is.)

Comment by technicalities on Automatic for the people · 2018-07-09T18:01:29.733Z · LW · GW

Yes, the piece is conditional; the "47% of jobs in the next few decades" estimate, which spurred me to write this, is more or less naive top-down extrapolation.

But many of the same considerations apply if long-term labour trends continue:

Anyway other powerful forces (e.g. global outsourcing, the decay of unions) besides robots have led to the 40-year decline in labour’s share of global income. But those will produce similar dystopian problems if the trend continues, and there’s enough of a risk of the above scenario for us to put a lot of thought and effort into protecting people, either way.

Comment by technicalities on Automatic for the people · 2018-07-09T17:33:08.483Z · LW · GW

Fixed, thank you.

Comment by technicalities on Automatic for the people · 2018-07-09T17:29:17.369Z · LW · GW

"80%" seems accurate about the UK's media; see this (2007) study which puts original reporting at only 19% of all stories:

In short, fewer than one in five press articles (19%) appear to be based mainly on information that does not come from pre-packaged sources. Indeed, 60% of press stories rely wholly or mainly on pre-packaged information, and only 12% are entirely independent of such material.

I've linked that and qualified the claim as about "[UK] journalism" anyway; thanks.

The slow uptake of auto-journalism is evidence that it isn't really ready, sure. How Efficient do you take old media to be?

Comment by technicalities on Meetup : Glasgow (Scotland) Meetup · 2014-11-15T23:06:34.234Z · LW · GW

Hi,

Just saw this, but will sadly miss it. Would be very interested in future meetups.