Posts

Suspiciously balanced evidence 2020-02-12T17:04:20.516Z · score: 39 (14 votes)
"Future of Go" summit with AlphaGo 2017-04-10T11:10:40.249Z · score: 3 (4 votes)
Buying happiness 2016-06-16T17:08:53.802Z · score: 37 (40 votes)
AlphaGo versus Lee Sedol 2016-03-09T12:22:53.237Z · score: 19 (19 votes)
[LINK] "The current state of machine intelligence" 2015-12-16T15:22:26.596Z · score: 3 (4 votes)
[LINK] Scott Aaronson: Common knowledge and Aumann's agreement theorem 2015-08-17T08:41:45.179Z · score: 15 (15 votes)
Group Rationality Diary, March 22 to April 4 2015-03-23T12:17:27.193Z · score: 6 (7 votes)
Group Rationality Diary, March 1-21 2015-03-06T15:29:01.325Z · score: 4 (5 votes)
Open thread, September 15-21, 2014 2014-09-15T12:24:53.165Z · score: 6 (7 votes)
Proportional Giving 2014-03-02T21:09:07.597Z · score: 6 (14 votes)
A few remarks about mass-downvoting 2014-02-13T17:06:43.216Z · score: 27 (44 votes)
[Link] False memories of fabricated political events 2013-02-10T22:25:15.535Z · score: 17 (20 votes)
[LINK] Breaking the illusion of understanding 2012-10-26T23:09:25.790Z · score: 19 (20 votes)
The Problem of Thinking Too Much [LINK] 2012-04-27T14:31:26.552Z · score: 7 (11 votes)
General textbook comparison thread 2011-08-26T13:27:35.095Z · score: 9 (10 votes)
Harry Potter and the Methods of Rationality discussion thread, part 4 2010-10-07T21:12:58.038Z · score: 5 (7 votes)
The uniquely awful example of theism 2009-04-10T00:30:08.149Z · score: 38 (48 votes)
Voting etiquette 2009-04-05T14:28:31.031Z · score: 10 (16 votes)
Open Thread: April 2009 2009-04-03T13:57:49.099Z · score: 5 (6 votes)

Comments

Comment by gjm on REVISED: A drowning child is hard to find · 2020-03-22T22:54:36.676Z · score: 5 (3 votes) · LW · GW

Your claim, as I understood it -- which maybe I didn't, because you have been frustratingly vague about your own argument at the same time as demanding ever-increasing amounts of detail from anyone who questions it -- was that if the $5k-per-life-equivalent figure were real then there "should" be some experiment that could be done "in a well-defined area like Madagascar" that would be convincing enough to be a good use of the (large) resources it would cost.

I suggest that the scenario I described above is obviously consistent with a $5k-per-life-equivalent figure in the places where bednets are most effective per unit spent. I assume you picked Madagascar because (being isolated, fairly small, etc.) it would be a good place for an experiment.

If you think it is not credible that any global picture makes the $5k figure "true and meaningful" then it is up to you to give a good argument for that. So far, it seems to me that you have not done so; you have asserted that if it were true then EA organizations should be running large-scale experiments to prove it, but you haven't offered any credible calculations or anything to show that if the $5k figure were right then doing such experiments would be a good use of the available resources, and my back-of-envelope calculations above suggest that in the specific place you proposed, namely Madagascar, they quite likely wouldn't be.

Perhaps I'm wrong. I often am. But I think you need to provide more than handwaving here. Show us your detailed models and calculations that demonstrate that if the $5k figure is anywhere near right then EA organizations should be acting very differently from how they actually are acting. Stop making grand claims and then demanding that other people do the hard work of giving quantitative evidence that you're wrong, when you yourself haven't done the hard work of giving quantitative evidence that you're right.

Once again I say: what you are doing here is not what arguing in good faith usually looks like.

Comment by gjm on At what point does disease spread stop being well-modeled by an exponential function? · 2020-03-09T00:17:45.354Z · score: 4 (2 votes) · LW · GW

Typo alert: you've written "tags: coronavirsus" which has an extra "s" in "coronavirus".

Comment by gjm on Matrix Multiplication · 2020-03-05T12:54:28.583Z · score: 3 (2 votes) · LW · GW

Matrix multiplication means multiplying matrices.

A vector can be viewed as a particular sort of a matrix, with one dimension equal to 1. So matrix-vector multiplications are a special case of matrix-matrix multiplications.
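In NumPy terms (purely illustrative), the same numbers give the same product whether you treat the vector as a length-n array or as an n-by-1 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # 2x2 matrix
v = np.array([5.0, 6.0])     # vector of length 2

as_vector = A @ v                 # matrix-vector product, shape (2,)
as_matrix = A @ v.reshape(2, 1)   # same product, done as 2x2 @ 2x1

# Same numbers either way: the vector case is just a thin matrix.
assert np.allclose(as_vector, as_matrix.ravel())
```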

A tensor is a possibly-higher-dimensional generalization of a matrix. A scalar is a rank-0 tensor, a vector is a rank-1 tensor, a matrix is a rank-2 tensor, and then there are higher ranks as well.

In actual mathematics, vectors and tensors are not mere arrays of numbers; they are objects that live in "vector spaces" or "tensor products of vector spaces", and the numbers are their coordinates; you can change coordinate system and the numbers will change in certain well-defined ways. But when e.g. Nvidia sell you a GPU with "tensor cores" they just mean something that can do certain kinds of matrix arithmetic quickly.

In e.g. one version of Google's TPUs, there's a big systolic array of multiply-accumulate units, which is good for dot-product-like operations, and you program it with instructions that do things like an Nx256-by-256x256 matrix multiplication, for whatever value of N you choose. If you need to handle arrays of different sizes, you'd build the calculations out of those units.

Comment by gjm on The Apologist and the Revolutionary · 2020-03-05T12:44:47.675Z · score: 6 (3 votes) · LW · GW

A thing I am horrified not to have thought of when I first read this, or at any time in the ~11 years since (and, looking through the comments, it doesn't seem like anyone else did, which is also a bit horrifying):

If reality matches fairly closely with Ramachandran's metaphor and there's an actual brain subsystem localized somewhere in the left hemisphere that acts as "apologist" and another actual brain subsystem localized somewhere in the right hemisphere that acts as "revolutionary", we should expect left-hemisphere damage sometimes to have a sort of anti-anosognosic effect by suppressing the "apologist". Since this sort of apologism is a thing most of us do all the time about everything, anapologetic syndrome should have clearly discernible effects: the patient would lose the ability to confabulate nice explanations for not-so-nice things. That ought to be noticeable, since if we didn't need that ability to function effectively in society it seems like it would have been evolutionarily advantageous to lose it.

This might show up to some extent as depression, which Scott mentions is not uncommon in victims of left-hemisphere brain damage, but it seems much more specific.

You might think: no, this won't happen, because apologism is just how the brain works; so you have a revolutionary-module and the whole rest of your brain is the apologist. Or you might think: no, this won't happen, because the relevant module isn't really an "apologist" but an "explainer" that happens to work in a positively-biased way, so if that module went offline then you'd just completely lose the ability to make sense of the world. BUT both of these seem hard to square with the cold-water trick, which sure does seem as if it's briefly disabling or shaking up a localized apologism module. Maaaaaybe the apologist is the whole left hemisphere, and damaging bits of it doesn't do much, but cold-water-squirting somehow changes the state of the whole hemisphere?

(Could it instead be briefly waking up the damaged revolutionary module? No, because it's right-hemisphere damage that causes anosognosia. The damaged bit is not in the same part of the brain as you're squirting cold water near to.)

I notice that I am confused. Anyone got good suggestions?

Comment by gjm on Is there a better way to define groups for COVID-19 impact? · 2020-03-04T23:05:44.648Z · score: 2 (1 votes) · LW · GW

That's exactly what I meant by "we hear about increased risk if ...". Those figures don't do much to distinguish between e.g. "these specific conditions make it more likely to be bad, and if you're old but don't have them then you're fine" and "age makes it more likely to be bad, and if you're young but have those conditions then you're fine".

Do they do anything to distinguish those possibilities? Probably. How much depends on how strongly those various conditions correlate with age. But my feeling is that e.g. cardiovascular disease is a better indication of being old than hypertension or diabetes, which I think are more likely to crop up in middle age, so the percentages feel fairly compatible with the it's-just-age hypothesis. If it were 6% for CVD and 10% for hypertension instead then I'd be more confident that there's something specifically bad about hypertension that makes COVID-19 worse.

(If I have to guess, I guess that the answer is somewhere in the middle: almost any specific health issue makes something like COVID-19 more likely to kill you, including the specific ones listed there but others too, and being older is bad because of all the ways you're likely to be less healthy if older. But who knows?)

Comment by gjm on Is there a better way to define groups for COVID-19 impact? · 2020-03-04T15:04:49.348Z · score: 6 (3 votes) · LW · GW

A related thing I wonder about: as well as the variation of risk with age, we hear about increased risk if you have various other conditions -- diabetes, hypertension, etc. Many of these of course are also things that tend to appear and/or worsen with age, and it's not clear to me how the various numbers should be interpreted if you want to estimate the risk to someone with known age and known other conditions (or absence thereof).

Comment by gjm on How does electricity work literally? · 2020-02-28T14:37:11.793Z · score: 3 (2 votes) · LW · GW

It's not a big amount. (For, e.g., a typical mains cable.) And cabling, especially if the currents flowing in it are at high frequencies (which means more radiation), is often designed to reduce that radiation. That's one reason why we have coaxial cables and twisted pairs. For a 50Hz or 60Hz power cable, though, the radiative losses are tiny.

You can power devices wirelessly -- using "those cordless chargers". They are designed to maximize the extent to which these effects happen, and of course the devices need to be designed to work that way. Ordinary mains cables don't radiate a lot and it isn't practical to power anything nontrivial by putting it near a mains cable.

But the most effective way of getting energy from the field around a pair of wires is ... to connect the wires into an electric circuit. Indeed, it's only when they're connected in such a circuit that the current will flow through the wires and the energy will flow around them.

Comment by gjm on How does electricity work literally? · 2020-02-27T13:49:24.676Z · score: 3 (2 votes) · LW · GW

Yes, water and electricity are different in important ways even though the analogy is informative sometimes.

The energy in the electromagnetic field really truly is different from the kinetic energy of the electrons. (This is one of the important differences from water in a pipe, in fact.)

You can see this fairly easily in a "static" case: if I use electricity to charge up a big capacitor, I've stored lots of energy in the capacitor but it's potential not kinetic energy. (There's a lot of potential energy there because there's extra positive charge in one place and extra negative charge in another, and energy will be released if they are allowed to move together so that the net charge everywhere becomes approximately zero.)
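For concreteness, the stored energy follows E = ½CV². A tiny sketch (the 1 F / 5 V numbers are made up for illustration):

```python
# Energy stored in a charged capacitor: E = (1/2) * C * V^2.
# The capacitance and voltage below are illustrative, not from the text.
def capacitor_energy_joules(capacitance_farads, voltage_volts):
    return 0.5 * capacitance_farads * voltage_volts ** 2

# A 1 F supercapacitor charged to 5 V stores:
print(capacitor_energy_joules(1.0, 5.0))  # 12.5 J
```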

You might want to describe this situation by saying that the electrons involved have a certain amount of potential energy, just as you might say that when you lift a heavy object from the surface of the earth that object has acquired (gravitational) potential energy. That point of view works fine for this sort of static situation, but once your charges start moving around it turns out to be more insightful to think of the energy as located in the electromagnetic fields rather than in the particles that interact with those fields.

So, for instance, suppose you arrange for an alternating current to flow in a conductor. Then it will radiate, transmitting energy outward in the form of electromagnetic waves. (Radio waves, at the sort of frequencies you can readily generate in a wire. At much higher frequencies you get e.g. light waves, but you typically need different hardware for that.) This energy is carried by the electromagnetic field. It will propagate just fine in a vacuum: no need for any charged particles in between.

When you have an actual electrical circuit, things are more complicated, but it turns out that the electrical energy is not flowing through the wires, it's flowing through the field around the wires. And, again, this energy is not the kinetic energy of the electrons.

Comment by gjm on How does electricity work literally? · 2020-02-25T15:14:31.112Z · score: 3 (2 votes) · LW · GW

The water (or, rather, the electricity) sloshes to and fro 50 times a second, so there's never enough delay between flicking the switch and getting usable power that a human being would notice. Typically other things are slower; e.g., if you're turning on an incandescent lightbulb then it may take longer than that for the filament to get hot enough that it starts glowing. For many devices (e.g., your phone) there is a converter device, and when you attach your phone to its USB wall-plug it's getting DC electricity from it.

It would be possible to have some sort of converter for every household, but every such converter has some losses, and many devices are perfectly happy just running off AC, and ones that aren't don't necessarily all want the same operating voltage. Again, if we were doing everything from scratch now it might be worth considering something like that (or it might not; the details matter and I'm not an electrical engineer myself), but we have a basically-working system and replacing it wholesale with something new would need to be a big improvement to be worth the tremendous cost and inconvenience.

It would be more accurate to say that devices use the energy in the electromagnetic field rather than the kinetic energy of electrons, as such. (There isn't a clear distinction between using the electric field and using the magnetic field; the two are very intimately linked and, e.g., if two observers are moving rapidly relative to one another, then what one sees as the electric field the other may see as the magnetic field.)

The motor in an electric fan works something like this. (Unfortunately it involves effects that don't have a close analogue in terms of flowing water.) There are coils of wire. You pass an alternating current through these coils; changing currents generate a magnetic field. (This isn't meant to be obvious. It was one of the big discoveries of 19th-century physics.) There's a lump of iron placed so that this magnetic field pulls on it. A bit of engineering ingenuity lets you arrange these elements so that the effect is to make a shaft keep turning in a consistent direction. You mount your fan blades on that shaft. (Don't take my description too literally. An actual design might e.g. have the wires on the shaft and the big lumps of iron on the outside, not moving.) In terms of individual electrons: a moving electron produces a magnetic field that "curls around" its path; a whole lot of electrons moving along a conductor produce a magnetic field that curls around those conductors; if you wind that conductor into a coil, you get a magnetic field running along the length of the coil.

The details of how energy flows from place to place in all this are subtle and I will probably get them wrong if I try to go into details. As an example: suppose you supply electricity to some system by means of a pair of parallel wires with opposite currents flowing in them; then the energy flow in the system happens outside the wires, not inside them. (It happens near to the wires, and the energy flows parallel to the wires.)

(Just to reiterate: this isn't a matter of electrons flowing into the device and being consumed, just as a hydraulically powered system that works by having water turn turbine blades doesn't work by consuming the water.)

I think most of the power consumption in (the processing parts of) a computer is resistive losses -- i.e., the thing where energy from the electric field gets transferred to kinetic energy in the electrons and/or atoms and heats things up. In an idealized maximally-efficient computing device, it turns out that the one thing that unavoidably costs energy is disposing of information, and some people have speculated about "reversible computing" that never erases bits or otherwise throws information away; but real computing devices are several orders of magnitude away from being limited by these considerations.
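A quick check of the scale of the Landauer bound makes the "several orders of magnitude" point concrete. (The femtojoule figure for real hardware below is a rough assumed order of magnitude, not a measured value.)

```python
import math

# Landauer's bound: erasing one bit costs at least k*T*ln(2) of energy.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K

landauer_joules_per_bit = k_B * T * math.log(2)
print(landauer_joules_per_bit)   # ~2.9e-21 J

# Assumed ballpark for a real switching operation today: ~1 femtojoule.
real_joules_per_bit = 1e-15
print(real_joules_per_bit / landauer_joules_per_bit)  # ~3e5: several orders of magnitude
```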

I believe a fridge uses electrical energy mostly in motors, which work in much the same way as the motor in a fan. These motors then drive other interesting systems that e.g. compress fluids and pump them around and so forth -- I don't know any of the details offhand -- but electricity is not directly involved in those mechanisms.

As I hope I've already made clear, I'm not really an expert on this, and quite possibly no other LW regulars are either. You might do better to find e.g. a textbook on electromagnetism. (But be warned: if you read a textbook on electromagnetism that goes deep enough to answer your questions, you will end up having to do quite a lot of maths.)

Comment by gjm on How does electricity work literally? · 2020-02-24T16:13:45.401Z · score: 17 (9 votes) · LW · GW

The speed at which electrical signals propagate is much faster than the speed at which electrons move in an electrical conductor. (Possibly helpful metaphor: suppose I take a broomstick and poke you with it. You feel the poke very soon after I start shoving the stick, even though the stick is moving slowly. You don't need to wait until the very same bit of wood I shoved reaches your body.)
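For a sense of scale, here's the standard textbook drift-velocity estimate (the current, wire size, and electron density of copper are illustrative textbook numbers):

```python
# Drift velocity of electrons in a wire: v = I / (n * A * q).
I = 1.0         # current, amperes
n = 8.5e28      # free electrons per cubic metre in copper
A = 1e-6        # wire cross-section, square metres (1 mm^2)
q = 1.602e-19   # electron charge, coulombs

v_drift = I / (n * A * q)
print(v_drift)  # ~7e-5 m/s -- well under a millimetre per second,
                # while the signal propagates at a large fraction of c
```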

The speed at which electrical signals propagate is slower than the speed of light, but it's a substantial fraction of the speed of light and it doesn't depend on the speed at which the electrons move. (It may correlate with it -- e.g., both may be a consequence of how the electrons interact with the atoms in the conductor. Understanding this right is one of the quantum-mechanical subtleties I mention below.)

When current flows through a conductor with some resistance, some of the energy in the flow of the electrons gets turned into random-ish motion in the material, i.e., heat. This will indeed make the electrons move more slowly but (see above) this doesn't make much difference to the speed at which electrical effects propagate through the conductor.

(What actually happens in electrical conductors is more complicated than individual electrons moving around, and understanding it well involves quantum-mechanical subtleties, of most of which I know nothing to speak of.)

It is not usual to convert AC to DC using relays.

It is true that if you take AC power, rectify it using the simplest possible circuit, and use that to supply a DC device then it will alternate between being powered and not being powered -- and also that during the "powered" periods the voltage it gets will vary. Some devices can work fine that way, some not so fine.

In practice, AC-to-DC conversion doesn't use the simplest possible circuit. It's possible to smooth things out a lot so that the device being powered gets something close to a constant DC supply.

But there are similar effects even when no rectification is being done. You mentioned flickering lights, and until recently they were an example of this. If you power an incandescent bulb using AC at 50Hz then the amount of current flowing in it varies and accordingly so does the light output. (At 100Hz, not 50Hz; figuring out why is left as an exercise for the reader.) However, because it takes time for the filament to heat up and cool down the actual fluctuation in light output is small. Fluorescent bulbs respond much faster and do flicker, and some people find their light very unpleasant for exactly that reason. LED lights, increasingly often used where incandescents and fluorescents used to be, are DC devices. I think there's a wide variety in the circuitry used to power them, but most will flicker at some rate. Good ones will be driven in such a way that they flicker so fast you will never notice it. (Somewhere in the kHz range.)
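For anyone who wants the exercise spoiled: in a resistive load the instantaneous power goes as sin², and sin²(x) = (1 − cos 2x)/2, so the power (and hence the light output) oscillates at twice the supply frequency, because power is dissipated on both half-cycles. A numerical check:

```python
import math

f = 50.0  # mains frequency, Hz

def power(t):
    # Instantaneous power in a resistor is proportional to sin^2.
    return math.sin(2 * math.pi * f * t) ** 2

# The power waveform repeats every 1/100 s, not every 1/50 s:
period_100hz = 1.0 / 100.0
for t in [0.001, 0.0042, 0.007]:
    assert abs(power(t) - power(t + period_100hz)) < 1e-12
```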

Sometimes DC (at high voltages) is used for power transmission. I think AC is used, where it is used, because conversion between (typically very high) transmission voltage and the much lower voltages convenient for actual use is easy by means of transformers; transformers only work for AC. (Because they depend on electromagnetic induction, which works on the principle that changes in current produce magnetic fields and changes in magnetic field produce currents.) I don't know whether AC or DC would be a better choice if we were starting from scratch now, but both systems were proposed and tried very early in the history of electrical power generation and I'm pretty sure all the obvious arguments on both sides were aired right from the start.

When a device "consumes" electrical energy it isn't absorbing electrons. (In that case it would have to accumulate a large electrical charge. That's usually a Bad Thing.) It's absorbing (or using in some other way) energy carried in the electric field. It might help to imagine a system that transmits energy hydraulically instead, with every household equipped with high-pressure pipes, with a constant flow of water maintained by the water-power company, and operating its equipment using turbines. These wouldn't consume water unless there were a leak; instead they would take in fast-moving water and return slower-moving water to the system. An "AC" hydraulic system would have water moving to and fro in the pipes; again, the water wouldn't be consumed, but energy would be transferred from the water-pipes to the devices being operated. Powering things with electricity is similar.

Comment by gjm on REVISED: A drowning child is hard to find · 2020-02-18T15:45:13.042Z · score: 4 (2 votes) · LW · GW

I'm pointing out what seem to me to be large and important holes in your argument.

To an objection of the form "You have given no good reason to think Y follows from X", it is not reasonable to respond with "You need to give a specific example of how you can have X and not Y, with realistic numbers in it".

I claim that you have given no reason to think that if there's a lot of good to be done at $5k per life-equivalent then there is necessarily an experiment that it's feasible for (say) GiveWell to conduct that would do something like eliminating all malaria deaths in Madagascar for a year. You've just said that obviously there must be.

I reject any norms that say that in that situation anyone saying that your reasoning has gaps in it is obliged to show concrete counterexamples.

However, because I'm an obliging sort of chap, let's have a go at constructing one and see what happens. (But, for the avoidance of doubt, I am not conceding that if my specific counterexample turns out not to work then it means your claim is right and mine is wrong. Of course it's possible that you know ahead of time that I can't construct a working counterexample, on account of having a better understanding than mine of the situation -- but, again, in that case communicating that better understanding should be part of your argument.) I'll look at Madagascar since that's the country you mentioned specifically.

[EDITED to add:] Although the foregoing paragraph talks about "constructing a counterexample", in fact what I did in the following paragraphs is just to make some guesses about numbers and see where they lead; I wasn't trying to pick numbers that are maximally persuasive or anything.

So, first of all let's find some numbers. Madagascar has a population of about 26 million. Malaria is the 7th most common cause of death there. If I'm reading the stats correctly, about 10% of the population has malaria and they have about 6k deaths per year. Essentially the entire population is considered at risk. At present Madagascar gets about $50M/year of malaria-fighting from the rest of the world. Insecticide-treated bed nets allegedly reduce the risk of getting malaria by ~70% compared with not having them; it's not clear to me how that's defined, but let's suppose it's per year. The statistics I've seen differ somewhat in their estimates of what fraction of the Madagascan population has access to bed nets; e.g., in this document from the WHO plot E on page 85 seems to show only ~5% of the population with access to either bed nets or indoor spraying; the table on page 117 says 6%; but then another table on page 122 estimates ~80% of households have at least one net and ~44% have at least one per two people. I guess maybe most Madagascan households have a great many people? These figures are much lower in Madagascar than in most of Africa; I don't know why. It seems reasonable to guess that bed net charities expect it to be more expensive, more difficult or less effective in Madagascar than in the other places where they have distributed more nets, but again even if this is correct I don't know what the underlying reasons are. I observe that several African countries have a lot more malaria deaths per unit population; e.g., Niger has slightly fewer people than Madagascar but nearly 3x as many malaria deaths. (And also about 3x as many people with malaria.) So maybe bed net distribution focuses on those countries?

So, my first observation is that this is all consistent with the possibility that the number of lives saveable in Madagascar at ~$5k/life is zero, because of some combination of { lower prevalence of malaria, higher cost of distributing nets, lower effectiveness of nets } there compared with, say, Niger or the DRC. This seems like the simplest explanation of the fact that Madagascar has surprisingly few bed nets per person, and it seems consistent with the fact that, while it certainly has a severe malaria problem, it has substantially less malaria per person than many other African countries. Let's make a handwavy guess that the effectiveness per dollar of bednets in Madagascar is half what it is in the countries with the best effectiveness-per-dollar opportunities, which conditional on that $5k/life-equivalent figure would mean $10k/life-equivalent.

Now, as to fatality: evidently the huge majority of people with malaria do not die in any given year. (~2.5M cases, ~6k deaths.) Malaria is a serious disease even when it doesn't kill you. Back of envelope: suppose deaths from malaria in Madagascar cost 40 QALYs each (life expectancy in Madagascar is ~66y, many malaria deaths are of young children but not all, there's a lot of other disease in Madagascar and I guess quality of life is often poor, handwave handwave; 40 QALYs seems like the right ballpark) and suppose having malaria but not dying costs 0.05 QALYs per year (it puts you completely out of action some of the time, makes you feel ill a lot more of the time, causes mental distress, sometimes does lasting organ damage, etc.; again I'm making handwavy estimates). Then every year Madagascar loses ~125k QALYs to nonfatal malaria and ~240k QALYs to fatal malaria. Those numbers are super-inexact and all I'm really comfortable concluding here is that the two are comparable. I guess (though I don't know) that bednets are somewhere around equally effective in keeping adults and children from getting malaria, and that there isn't any correlation between preventability-by-bednet and severity in any particular case; so I expect the benefits of bednets in death-reduction and other-illness-reduction to, again, be comparable. I believe death, when it occurs, is commonly soon after infection, but the other effects commonly persist for a long time. I'm going to guess that 3/4 of the effects of a change in bednet use happen within ~ a year, with a long tail for the rest.
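The arithmetic above is easy to check in a few lines (all inputs are the handwavy guesses from this paragraph, not measured data):

```python
# Back-of-envelope QALY losses from malaria in Madagascar,
# using the guessed figures from the text above.
deaths_per_year = 6_000
qalys_per_death = 40          # guessed QALYs lost per malaria death
cases = 2_500_000             # ~10% of a population of ~26M
qalys_per_nonfatal_case_year = 0.05   # guessed burden of living with malaria

fatal_qalys = deaths_per_year * qalys_per_death        # 240,000
nonfatal_qalys = cases * qalys_per_nonfatal_case_year  # 125,000
print(fatal_qalys, nonfatal_qalys)  # comparable orders of magnitude
```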

So, let's put that together a bit. Most of the population is not currently protected by bednets. If they suddenly were then we might expect a ~70% reduction in new malaria cases that year, for those protected by the nets. Best case, that might mean a ~70% reduction in malaria deaths that year; presumably the actual figure is a bit less because some malaria deaths happen longer after infection. Call it 60%. Reduction in malaria harm that year would be more like 50%. Cost would be $10k per life-equivalent saved. Total cost somewhere on the order of $50M, a substantial fraction of e.g. AMF's total assets.

Another way to estimate the cost: GiveWell estimates that AMF's bednet distribution costs somewhere around $4.50 per net. So one net per person in Madagascar is $100M or so.

But that's only ~60% of the deaths; you wanted a nice clear-cut experiment that got rid of all the malaria deaths in Madagascar for one year. And indeed cutting deaths by 60% would not necessarily be conclusive, because the annual variation in malaria cases in Madagascar seems to be large and so is the uncertainty in counting those cases. In the 2010-2017 period the point estimates in the document I linked above have been as low as ~2200 and as high as ~7300; the error bars each year go from just barely above zero to nearly twice the point estimate. (These uncertainties are much larger, incidentally, than in many other African countries with similar malaria rates, which seems consistent with there being something about Madagascar that makes treatment and/or measurement harder than other African countries.)

To get rid of all (or nearly all) the deaths in one year, presumably you need to eliminate infection that happens while people aren't sleeping under their bed nets, and to deal with whatever minority of people are unwilling or unable to use bed nets. Those seem like harder problems. I think countries that have eliminated malaria have done it by eliminating the mosquitoes that spread it, which is a great long-term solution if you can do it but much harder than distributing bed nets. So my best guess is that if you want to get rid of all the malaria, even for one year, you will have to spend an awful lot more per life-equivalent saved that year; I would be unsurprised by 10x as much, not that surprised by 100x, and not altogether astonished if it turned out that no one actually knows how to do it for any amount of money. It might still be worth it even if the costs are large -- the future effects are large if you can eliminate malaria from a place permanently. (Which might be easier in Madagascar than in many other African countries, since it's an island.) But it puts the costs out of the range of "things existing EA charities could easily do to prove a point". And it's a Gates Foundation sort of project, not an AMF one, and indeed as I understand it the Gates Foundation is putting a lot of money into investigating ways to eliminate malaria.

Tentative conclusion: It's not at all obvious to me that this sort of experiment would be worthwhile. For "only" an amount of money comparable to the total assets of the Against Malaria Foundation, it looks like it might be possible to somewhat-more-than-halve malaria deaths in Madagascar for one year (and reduce ongoing malaria a bit in subsequent years). The expected benefits of doing this would be substantially less than those of distributing bed nets in the probably-more-cost-effective other places where organizations like AMF are currently putting them. Given how variable the prevalence of malaria is in Madagascar, and how uncertain the available estimates of that prevalence seem to be, it is not clear that doing this would be anything like conclusive evidence that bednet distribution is as effective as it's claimed to be. (All of the foregoing is conditional on the assumption that it is as effective as claimed.) To get such conclusive evidence, it would be necessary to do things radically different from, and probably far more expensive than, bednet distribution; organizations like AMF would have neither the expertise nor the resources to do that.

I am not very confident about any of the numbers above (other than "easy" ones like the population of Madagascar), and all my calculations are handwavy estimates (because there's little point doing anything more careful when the underlying numbers are so doubtful). But what those calculations suggest to me is that, whether or not doing the sort of experiment you propose would be a good idea, it doesn't seem to be an obviously good idea (since, in particular, my current best estimate is that it would not be a good idea). Therefore, unless I am shown compelling evidence pointing in a different direction, I cannot take seriously the claim that EA organizations that aren't doing such experiments show thereby that they don't believe that there is large scope for doing good at a price on the order of $5k per life-equivalent.

Comment by gjm on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-16T19:44:47.902Z · score: 2 (1 votes) · LW · GW

I can well believe that universities used to work well and worsened over time. The point of my question at the end there is that I would expect any New Improved University Replacement to suffer the same process.

(Of course it might be worth it anyway, if it works better for long enough.)

Comment by gjm on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-16T13:11:17.722Z · score: 2 (1 votes) · LW · GW

I agree that you can make a case that sending a lot of people to university is wasteful; maybe you can make a case that sending anyone to university is wasteful (though, for what it's worth, that feels entirely wrong to me). But shminux was making a different claim: that our universities are so wasteful that they imperil our civilization's survival. That claim seems absurdly overblown to me.

Yes, the age at which people go to university is a good age for learning new things. That would be why people of that age are often encouraged to go off to a place designed for learning new things: a university.

Maybe just getting a job will (on average) actually result in learning more valuable things, but frankly I don't see any reason to believe that. (More things valuable for becoming a cog in someone else's industrial machine, maybe, though even that isn't obvious.)

Maybe all young people (or at least all fairly bright young people?) should be trying to start their own businesses, but again I see no reason to believe that either. Starting a business is hard; most new businesses fail; most 18-year-olds lack knowledge and experience that would greatly improve their chances of starting a successful business. (There are other reasons why I think this would be a bad idea, but since I'm not even sure it's what you have in mind I'll leave it there.)

Maybe the learning people currently do at universities, or the learning they're meant to be doing at universities, or whatever other learning should replace it, should be done "in the background" while they are working a job; but I see no reason to think that's even possible in most cases. Their jobs are likely to be too demanding in time, effort and mental focus. For sure some people can do it, but if you want it to be the general case then I'd like to see evidence that it's feasible.

Maybe we need different ways of optimizing 18-20-year-olds' lives for learning new and valuable things. I'd be interested to see concrete proposals. An obvious question I hope they'd address: why expect that in practice this will end up better than universities?

Comment by gjm on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-16T01:26:39.228Z · score: 4 (2 votes) · LW · GW

So the usual story about Easter Island is that construction and transportation of the statues used up all their palm trees, and the island's ecosystem depended on the palm trees, so they starved to death or something. (I think there's some doubt about whether that's actually right, but it's a plausible enough story and a useful analogy even if it turns out not to be literally true.)

What resource are universities in danger of consuming all of?

Money? US spending on universities seems to be on the order of a couple of percent of GDP.

Researcher time? Even supposing that university research is worthless, there's a lot of research being done by corporate R&D departments, and OP here gives some examples to suggest that some of it's pretty good. (And, for what it's worth, I don't find it plausible that university research is worthless, though no doubt some of it is.)

Students' youth? Even supposing that time spent at university is worthless, it's only a few years per person. (And, for what it's worth, I don't find it plausible that time spent at university is worthless. I know that at university I both had fun and learned things I am still glad to know; maybe I was exceptionally lucky but I don't know of any good reason to think so.)

Comment by gjm on REVISED: A drowning child is hard to find · 2020-02-13T16:22:56.608Z · score: 2 (1 votes) · LW · GW

So I assume you're objecting to his statement near the end that "the estimate that you can save a life for $5000 remains probably true (with normal caveats about uncertainty)", on the basis that he should actually say "you probably can't really save a life for $5000 because if you give that $5000 then the actual result will be that Good Ventures gives less in future because GiveWell will make sure of that to ensure that alleged $5000 opportunities continue to exist for PR reasons".

But I don't see the alleged switching back and forth. So far as I can see, Scott simply disagrees with you about the intertemporal funging thing, perhaps for the same reason as I think I do (namely, that GiveWell's actual statements about their recommendations to Good Ventures specifically claim that they are trying to make them in a way that doesn't involve intertemporal funging of a sort that messes up incentives in the way you say it does).

Where do you think Scott's comment assumes the "steep diminishing returns story"?

It does tell a steep-diminishing-returns story about the specific idea of trying to run the sort of experiment you propose. But part of his point is that that sort of experiment would likely be inefficient and impractical, unlike just continuing to do what AMF and similar charities are already doing with whatever funding is available to them. The diminishing returns are different in the two scenarios, and it could be that they are much steeper if you decide that your goal is to eliminate all malaria deaths on Madagascar than if your goal is to reduce malaria in all the areas where there's a lot of malaria that can be addressed via bed nets. It can simultaneously be true that (1) there are readily available opportunities to save more than 6k extra lives by distributing more bed nets, at a cost of $5k per life saved, and that (2) if instead you want to save specifically all 6k people who would otherwise have died from malaria in Madagascar this year, then it will cost hugely more than $5k per life. And also, relatedly, that (3) if instead of this vague "you" we start trying to be specific about who is going to do the thing, then in case 1 the answer is that AMF can save those lives by distributing bed nets, a specific thing that it knows how to do well, whereas in case 2 the answer is that there is no organization that has all the competences required to save all those lives at once, and that making it happen would require a tremendous feat of coordination.

Comment by gjm on REVISED: A drowning child is hard to find · 2020-02-13T16:08:01.155Z · score: 2 (3 votes) · LW · GW

It's a bit like the difference between "Ben thinks Gareth is giving too much money to the Against Malaria Foundation" and "Ben thinks Gareth isn't letting enough babies die of malaria", in the context of a discussion about how individuals should allocate their money.

Comment by gjm on REVISED: A drowning child is hard to find · 2020-02-13T16:02:52.240Z · score: 7 (5 votes) · LW · GW

I think you have the burden of proof in the wrong place. You are claiming that if there's a lot of good to be done at $5k then there must be experiments that are obviously worth pouring a lot of resources into. I'm simply saying that that's far from clear, for the reasons I gave. If it turns out that actually further details of the situation are such as to mean that there must be good experiments to do, then your argument needs to appeal to those further details and explain how they lead to that conclusion.

I am not making any specific claim about what fraction of malaria deaths are from infection in prior years, or what proportion can be prevented at ~$5k per life-equivalent, etc. To whatever extent those are relevant to the correctness of your claim that EA organizations would be running the sort of experiments you propose if they really believed their numbers, your argument for that claim should already be in terms of those figures.

Comment by gjm on Suspiciously balanced evidence · 2020-02-13T01:24:22.169Z · score: 2 (1 votes) · LW · GW

I'm seeing a lot of replies saying, in effect, "duh, obviously #1: we only bother thinking about the questions whose answers are still somewhat open to doubt". Maybe that's the whole story, but I still don't think so: some questions remain in the category of "questions many people think about and aren't close to certain of the answers to" even as a lot of evidence accumulates.

(But maybe #1 is more of the story than I was inclined to think.)

Comment by gjm on Suspiciously balanced evidence · 2020-02-12T19:46:33.502Z · score: 2 (1 votes) · LW · GW

Can you suggest a better way of framing the question?

(I'm not very sure what sort of adding-up-to-normality you have in mind; are you saying my "good explanation #1" is likely the correct one?)

Comment by gjm on Causal Universes · 2020-02-11T12:08:18.736Z · score: 2 (1 votes) · LW · GW

I think probably Penrose's "The Road to Reality" was intended. I don't think there's anything in the Deutsch book like "curvature of spacetime is determined by infinitesimal light cones"; I don't think I've read the relevant bits of the Penrose but it seems like exactly the sort of thing that would be in it.

Comment by gjm on Open & Welcome Thread - February 2020 · 2020-02-10T17:09:14.624Z · score: 2 (1 votes) · LW · GW

Nice to see that Steven Pinker has the same N-blindness as Scott himself :-).

Comment by gjm on REVISED: A drowning child is hard to find · 2020-02-10T03:07:51.038Z · score: 7 (5 votes) · LW · GW

Could you give an example or two? I don't mean of one person assuming shallow diminishing returns and another assuming steep diminishing returns -- obviously different people may have different opinions -- but of a single person doing the sort of combination you describe.

The actual article doesn't, so far as I can see, at all focus on any such cases; it doesn't say "look, here are some bogus arguments people make that assume two different incompatible things"; rather, it says "EA organizations say you should give money to EA causes because that way you can do a lot of good per unit money, but they are lying to you and you should do other things with your money instead". (Not an actual quotation, of course, but I think a fair paraphrase.)

So I don't understand how your defence here makes any sense as a defence of the actual article.

A couple of other points, while I have your attention.

----

The article says this:

My former employer GiveWell in particular stands out, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that Good Ventures would be saving more than their "fair share" of lives.

All credit to you, once again, for linking to what GiveWell actually wrote. But ... it seems to me that, while indeed they did use the words "fair share", your description of their reasons doesn't at all match what they say. Let me quote from it:

Over the past couple of weeks, we’ve had many internal discussions about how to reconcile the goals of (a) recommending as much giving as possible from Good Ventures to top charities, which we consider outstanding giving opportunities; (b) preserving long-run incentives for individuals to support these charities as well. The proposals that have come up mostly fit into one of three broad categories:

... and then the three categories are "funging", "matching", and "splitting", and it's in explaining what they mean by "splitting" that they use the words "fair share". But the goal here, as they say it, is not at all to have everyone save a "fair share" of lives. They give some reasons for favouring "splitting" (tentatively and corrigibly) and those reasons have nothing to do with "fair shares". Also, they never, btw, talk about a fair share of lives saved but of funding provided, and while of course those things are closely connected they are not intensionally equivalent and there is an enormous difference between "we favour an approach that can be summarized as 'donors consider the landscape of donors and try to estimate their share of the funding gap, and give that much'" and "it would be bad if anyone saved more than their fair share of lives".

Could you explain why you chose to describe GiveWell's position by saying 'they were worried that Good Ventures would be saving more than their "fair share" of lives'? Do you actually think that is an accurate description of GiveWell's position?

----

A key step in your argument -- though it seems like it's simply taken the place of other entirely different key steps, with the exact same conclusion allegedly following from it, which as I mentioned above seems rather fishy -- goes like this. "If one could do a great deal of good as efficiently as the numbers commonly thrown about imply, then it would be possible to run an experiment that would verify the effectiveness of the interventions, by e.g. completely eliminating malaria in one country. No one is running such an experiment, which shows that they really know those numbers aren't real. On the other hand, if there's only a smallish amount of such good to be done that efficiently, then EA organizations should be spending all their money on doing it, instead of whatever else they're doing. But they aren't, which again shows that they really know those numbers aren't real. Either way, what they say is dishonest PR and you should do something else with your money."

It looks to me as if basically every step in this argument is wrong. Maybe this is because I'm misunderstanding what you're saying, or failing to see how the logic works. Let me lay out the things that look wrong to me; perhaps you can clarify.

  • The "great deal of good" branch: running experiments.
  • It doesn't at all follow from "there is an enormous amount of good to be done at a rate of $5k per life-equivalent" that there are nice conclusive experiments like reducing malaria deaths to zero in one country for one year and measuring the cost. Many malaria deaths in a given year may be from infections in earlier years; even if a large fraction of malaria can be prevented at $5k per life-equivalent, the marginal cost will surely increase a lot as you get to the hardest cases; eliminating all malaria deaths somewhere will probably require multiple different kinds of intervention, and any given organization has expertise only in a subset of them, and coordination is hard.
    • You might want (genuinely, or for rhetorical purposes, or both) EA charities' money to be spent on running nice conclusive experiments, but that is no guarantee that that's actually the most effective thing for them to be doing.
    • Still less is it a guarantee that they will see that it is. (It could be that running such an experiment is the best thing they could do because it would convince lots of people and open the floodgates for lots of donations, but that for one reason or another they don't realise this.) So even if (1) there are nice conclusive experiments they could run and (2) that would actually be the best use of their money, that's not enough to get from "they aren't running the experiments" to "they know the results would be bad" or anything like that. They might just have an inaccurate model of what the consequences of the experiments would be. But, for the avoidance of doubt, I think #1 and #2 are both extremely doubtful too.
    • It's not perfectly clear to me who is supposed to be running these experiments. In order to get to your conclusion that EA organizations like GiveWell are dishonest, it needs to be those organizations that could run them but don't. But ... I don't think that's how it works? GiveWell doesn't have any expertise in running malaria-net experiments. An organization like AMF could maybe run them (but see above: most likely it would actually take lots of different organizations working together to get the sort of clear-cut answers you want) but it isn't AMF that's making the cost-per-life-equivalent claims you object to, and GiveWell doesn't have the power to force AMF to burn a large fraction of its resources on running an experiment that (for whatever reason) it doesn't see as the best use of those resources. (You mention the Gates Foundation as well, but they don't seem actually relevant here.)
  • The "smallish amount of good" branch: what follows?
    • If I understand your argument here correctly (which I may well not; for whatever reason, I find all your comments on this point hard to understand), you reckon that if there's (say) $100M worth of $5k-per-life-equivalent good to do, then GiveWell should just get Good Ventures to do it and move on.
    • As you know, they have given some reasons for not doing that (the reasons I think you mischaracterized in terms of 'saving more than their "fair share" of lives').
    • I think your position is: what they're doing is deliberately not saving lives in order to keep having an attractive $5k-per-life-equivalent figure to dangle in front of donors, which means that if you give $5k in the hope of doing one life-equivalent of good then you're likely actually just reducing the amount GiveWell will get Good Ventures to contribute by $5k, so even if the marginal cost really is $5k per life-equivalent then you aren't actually getting that life-equivalent because of GiveWell's policies. (I'm not at all sure I'm understanding you right on this point, though.)
      • Whether or not it's your position, I think it's a wrong position unless what GiveWell have said about this is outright lies. When discussing the "splitting" approach they end up preferring, they say this: 'But they [sc. incentives for individual donors] are neutral, provided that the “fair share” is chosen in a principled way rather than as a response to the projected behavior of the other funder.' (Emphasis mine.) And: 'we’ve chosen 50% largely because we don’t want to engineer – or appear to be engineering – the figure around how much we project that individuals will give this year (which would create the problematic incentives associated with “funging” approaches).'
    • Incidentally, they also say this: 'For the highest-value giving opportunities, we want to recommend that Good Ventures funds 100%. It is more important to us to ensure these opportunities are funded than to set incentives appropriately.' So for those "highest-value" cases, at least, they are doing exactly what you complain they are not doing.
    • A separate consideration: the most effective things for a large organization to fund may not be the same things that are most effective for individual donors to fund. E.g., there may be long-term research projects that only make sense if future support is guaranteed. I think the Gates Foundation does quite a bit of this sort of thing, which is another reason why I think you're wrong to bring them in as (implicitly) an example of an organization that obviously would be giving billions for malaria nets if they were really as effective as the likes of GiveWell say they are.
  • Suppose it turns out that the widely-touted figures for what it costs to do one life-equivalent of good are, in fact, somewhat too low. Maybe the right figure is $15k/life instead of $5k/life, or something like that. And suppose it turns out that GiveWell and similar organizations know this and are publicizing smaller numbers because they think it will produce more donations. Does it follow that we can't do a lot of good without a better and more detailed model of the relevant bit of the world than we can realistically obtain, and that we should all abandon EA and switch to "taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits"? I don't see that it does: to make EA a bad "investment" it seems to me that it has to be much wronger than you've given any reason to think it is likely to be. (Jeff K has said something similar in comments to the original article, but you didn't respond.)
Comment by gjm on What Money Cannot Buy · 2020-02-07T13:37:43.184Z · score: 11 (6 votes) · LW · GW

I'm pretty sure you're wrong about the xkcd example.

He doesn't just look at the number of characters in the four words. He reckons 11 bits of entropy per word and doesn't operate at the letter level at all. If those words were picked at random from a list of ~2000 words then the entropy estimate is correct.

I don't know where he actually got those words from. Maybe he just pulled them out of his head, in which case the effective entropy might be higher or lower. To get a bit of a handle on this, I found something on the internet that claims to be a list of the 2000ish most common English words (the actual figure is 2265, as it happens) and

  • checked whether the xkcd words are in the list ("correct" and "horse" are, "battery" and "staple" aren't)
  • generated some quadruples of random words from the list to see whether they feel stranger than the xkcd set (which, if true, would suggest that maybe he picked his by a process with less real entropy than picking words at random from a set of 2000). I got: result lie variety work; fail previously anything weakness; experienced understand relative efficiency; ear recognize list shower; classroom inflation space refrigerator. These feel to me about as strange as the xkcd set.

So I'm pretty willing to believe that the xkcd words really do have ~11 independent bits of entropy each.

In your local community's procedure, I worry about "finding a very long sentence". Making it up, or finding it somewhere else? The total number of sentences in already-existing English text is probably quite a lot less than 2^44, and I bet there isn't that much entropy in your choice of which letter to pick from each word.
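For concreteness, the arithmetic above can be sketched in a few lines of Python. The `WORDS` list here is a hypothetical stand-in for a real ~2000-word list, and the 2^30 bound on available sentences is an illustrative assumption, not a measured figure: the point is just that each word drawn uniformly from a list of N candidates contributes log2(N) bits, so four words from ~2000 give about 44 bits, while any scheme whose entropy comes mainly from choosing one pre-existing sentence is capped at log2(the number of candidate sentences).

```python
import math
import secrets

def passphrase_entropy_bits(num_words: int, list_size: int) -> float:
    """Entropy of num_words independent uniform draws from a word list."""
    return num_words * math.log2(list_size)

# Four words from a ~2000-word list, as in the xkcd scheme: ~43.9 bits.
xkcd_bits = passphrase_entropy_bits(4, 2000)

# A password derived from one already-existing English sentence is bounded
# by the number of candidate sentences; if that's well under 2^44 (as argued
# above), the scheme is weaker regardless of how long the sentence is.
sentences_available = 2 ** 30            # illustrative assumption
sentence_bits = math.log2(sentences_available)

# Sampling a passphrase securely (WORDS is a hypothetical stand-in;
# a real list would have ~2000 entries):
WORDS = ["correct", "horse", "battery", "staple"]
passphrase = " ".join(secrets.choice(WORDS) for _ in range(4))

print(f"{xkcd_bits:.1f} bits vs at most {sentence_bits:.1f} bits")
```

Note that `secrets.choice` (rather than `random.choice`) matters here: the entropy estimate only holds if the draws really are uniform and unpredictable.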

Comment by gjm on [deleted post] 2020-02-07T13:25:57.490Z

This seems to have been chopped off rather early. Is it a draft accidentally released into the wild?

Comment by gjm on More writeups! · 2020-02-07T13:14:31.730Z · score: 5 (3 votes) · LW · GW

"Hexing the technical interview" is hilarious but I wouldn't really classify it as "I did X, and here's how it went".

(I hope.)

Comment by gjm on Open & Welcome Thread - February 2020 · 2020-02-06T10:03:24.985Z · score: 14 (8 votes) · LW · GW

Steven Pinker is running a general-education course on rationality at Harvard University. There are some interesting people booked as guest lecturers. Details on Pinker's website, including links that will get you to video of all the lectures (there have been three so far).

I've watched only the first, which suggests unsurprisingly that a lot of the material will be familiar to LW regulars.

Comment by gjm on Open & Welcome Thread - February 2020 · 2020-02-04T20:59:44.232Z · score: 9 (5 votes) · LW · GW

Google's AI folks have made a new chatbot using a transformer-based architecture (but a network substantially bigger than full-size GPT2). Blog post; paper on arXiv. They claim it does much better than the state of the art (though I think everyone would agree that the state of the art is rather unimpressive) according to a human-evaluated metric they made up called "sensibleness and specificity average", which means pretty much what you think it does, and apparently correlates with perplexity in the right sort of way.

Comment by gjm on REVISED: A drowning child is hard to find · 2020-02-03T11:27:18.207Z · score: 25 (8 votes) · LW · GW

Your answer to your own rhetorical question is wrong, for two reasons. First, because you are confusing likelihoods with posteriors. Second, because you are confusing one-off actions with dispositions.

Likelihoods and posteriors

Yes, it's impolite to say "I think Ben isn't really trying to figure out what's true", and more polite to say "What Ben says is more like what someone says who isn't really trying to figure out what's true".

But it's also wrong to say "I think Ben isn't really trying to figure out what's true", if your actual epistemic state matches mine. Without reading the linked post, I would strongly expect that Ben is really trying to figure out what's true. On the other hand, if I had only the linked post and no other information about Ben, I would (as you obviously think I do) think that Ben is almost certainly arguing with an anti-EA bottom line already written.

But, in fact, I have both that post and other evidence that Ben is generally, let's say, truth-aligned. So what's actually going on? I don't know. So I followed the generally excellent procedure of reporting likelihood rather than posterior, and described how that post seems to me.

(I do also generally prefer to be polite, so probably the threshold for how confident I am that someone's being intellectually dishonest before saying so explicitly is higher than if my only concern was maximum-bandwidth communication. In this case, I don't believe my opinion is over where the threshold would be without concerns for politeness.)

One-off actions and dispositions

But, to be clear, I did intend to communicate that I think it genuinely possible (maybe even likely? Not sure) that on this particular occasion Ben has been arguing in bad faith.

But this is not a statement about Ben's character, it's a statement about his actions on one occasion. It is extremely common for people to do out-of-character things from time to time.

As you said above, of course "X acted in bad faith on this occasion" is evidence for "X is generally a bad-faith actor", which is a character judgement; but, as I said above, almost everything is evidence for or against almost everything, and furthermore almost everything is non-negligible evidence for or against almost everything related, and that is not good enough reason to abandon the distinctions between them.

Acting in bad faith on one occasion is not good enough evidence of a general disposition to act in bad faith for "X acted in bad faith here" to be in any way equivalent to "X is the sort of person who commonly acts in bad faith".

Clear thinking requires that we distinguish between likelihoods and posteriors. Clear thinking requires that we distinguish between one-off actions and general dispositions. Your comment about "your character assessment of Ben" ignored both distinctions. I don't think you should do that.

Comment by gjm on REVISED: A drowning child is hard to find · 2020-02-01T21:58:25.172Z · score: 4 (2 votes) · LW · GW

Making a character assessment of someone is a fundamentally different thing from saying something that is Bayesian evidence about their character, for the obvious reason that saying anything is Bayesian evidence about their character.

Comment by gjm on REVISED: A drowning child is hard to find · 2020-02-01T20:27:55.687Z · score: 2 (1 votes) · LW · GW

For the avoidance of doubt, I didn't make a character assessment of Ben, I made an assessment of his arguments on this point. I think those arguments are the sort of arguments people make in bad faith, but that needn't mean Ben's making them in bad faith. But he might reasonably care what it looks like; if so, he might want to take a different approach in responding to his critics.

Comment by gjm on REVISED: A drowning child is hard to find · 2020-02-01T12:37:04.472Z · score: 2 (6 votes) · LW · GW

Arguments very similar to this have been made by several people over at Ben's blog, and so far as I can make out his response has just been to dismiss them and reiterate his claim that if the numbers were as EA organizations claim then obviously they should be spending approximately all the money they have to make a big one-time reduction in communicable diseases etc.

It's also apparent from comments there that an earlier version of the post made approximately the same argument but based it on a claim that the number of cases of "communicable, maternal, neonatal and nutritional" diseases is declining at a rate of 30% per year, from which Ben evidently derived some total cost of fixing all such diseases ever to compare with e.g. the total resources of the Gates Foundation. That's a comparison that makes some sense. But after fixing that error (which, all credit to Ben, he did promptly when it was pointed out), he doesn't seem to have appreciably changed his conclusion. He's instead switched to this very-dodgy-looking comparison of annual disease-treating cost with total EA resources, left in place his conclusion that EA organizations don't really believe there are huge numbers of lives to be saved at low cost, and left in place his final conclusion that we should be spending money on ourselves and those around us rather than giving to EA causes.

Maybe I'm wrong, but all this looks to me less like the response I'd expect from someone who's simply trying to figure out what's true, and more like the response I'd expect from someone who's first decided to argue against EA, and then gone looking for arguments that might work.

Comment by gjm on how has this forum changed your life? · 2020-01-31T23:17:43.639Z · score: 10 (7 votes) · LW · GW

In case it isn't clear: the first two are both Scott; the third is a chap called Jacob Falkovich. The thing I linked to is a crosspost here of a post from his own blog. I think Jacob also has at least one other post on the theme of "what has rationality ever done for us?" Maybe I'm thinking of this one.

Also possibly worth a look, if at some point you're in critical mood: Yes, we have noticed the skulls. That one's Scott again, as so many of the best things are :-).

Comment by gjm on how has this forum changed your life? · 2020-01-31T17:54:56.569Z · score: 29 (8 votes) · LW · GW

[Edit: Turned into an answer after the OP author's reply.]

This isn't exactly an answer to your question, but here's a post from Scott Alexander in 2013 about progress LW had made in the last five years. So it doesn't have the element of personal application that you're after, but it does offer an answer of sorts to the related question "what has LW produced that is of any value?". I have a feeling there's at least one other thing on Scott's blog with that sort of flavour.

Also from Scott (from 2007) and pointing rather in the opposite direction: "Extreme Rationality: it's not that great", whose thesis is that LW-style rationality doesn't bring huge increases in personal effectiveness beyond being kinda-sorta-rational. (But the term "LW-style rationality" there is anachronistic; that post was written before Less Wrong as such was a thing.)

A counterpoint from many years later: "Is rationalist self-improvement real?", suggesting that at least for some people LW-style rationality does bring huge personal benefits, but only after you work at it for a while. I think Scott would actually agree with this.

(None of these things is exactly an answer to your question, which is why this is a comment rather than an answer, but I think all of them might be relevant.)

Comment by gjm on how has this forum changed your life? · 2020-01-31T17:23:32.670Z · score: 13 (5 votes) · LW · GW

Noted! Also noted, at the risk of passing from "appropriately wary" to "inappropriately wary": you didn't actually say that you're not planning to write a book that presents lesswrongers as weirdos to point and smile at. E.g., what you say is entirely compatible with something that begins "I've thought of myself as a rationalist all my life. Recently I discovered an interesting group of people on the internet who also call themselves rationalists. Join me as we take a journey down the rabbit-hole of how 'rationality' can lead to freezing your head, reading Harry Potter fanfiction, and running away from imaginary future basilisks."

Again, maybe I've now passed from "appropriately wary" to "inappropriately wary". But journalistic interest in the LW community in the past has usually consisted of finding some things that can be presented in a way that sounds weird and then presenting them in a way that sounds weird, and the Richelieu principle[1] means that this is pretty easy to do. I'd love to believe that This Time Is Different; maybe it is. But it doesn't feel like a safe bet.

(I should maybe add that I expect a Jon Ronson book on Those Weird Internet Rationalists would be a lot of fun to read. But of course that's the problem!)

[1] "Give me six lines written by the most honest of men, and I will find something in them with which to hang him." Probably not actually said by Richlieu. More generally: if you take a person or, still more, a whole community, and look for any particular thing -- weirdness, generosity, dishonesty, creepiness, brilliance, stupidity -- in what they've said or written, it will probably not be difficult to find it, regardless of the actual nature of the person or community.

Comment by gjm on how has this forum changed your life? · 2020-01-31T00:30:31.112Z · score: 38 (14 votes) · LW · GW

Welcome! I've greatly enjoyed some of your books. (I don't mean that the others were bad, I mean I haven't read them.)

A repeated pattern in your books is this: you identify a group of interestingly strange people, spend some time among them, and then write up your experiences in a way that invites your readers to laugh (gently and with a little bit of sympathy) at them. Is it at all possible that part of your purpose in coming here is to collect material that will help internet-rationalists join the club whose existing members include conspiracy theorists, goat-starers, and psychopaths?

Comment by gjm on If brains are computers, what kind of computers are they? (Dennett transcript) · 2020-01-30T17:45:58.593Z · score: 5 (3 votes) · LW · GW

The [inaudible] architectures are Politburo architectures. The [inaudible] former student is Bo Dahlbom.

(But the [inaudible] at 01:06:06 was deliberately inaudible, of course.)

Comment by gjm on On hiding the source of knowledge · 2020-01-30T02:37:44.699Z · score: 2 (1 votes) · LW · GW

The whole apparatus of science is about reducing the opportunities for being systematically wrong in ways you didn't test. Sure, it doesn't always work, but if there's a better way I don't think the human race has found it yet.

If knowledge is much harder to come by in domain A than in domain B, you can either accept that you don't get to claim to know things as often in domain A, or else relax what you mean by "knowledge" when working in domain A. The latter feels better, because knowing things is nice, but I think the former is usually a better strategy. Otherwise there's too much temptation to start treating things you "know" only in the sense of (say) most people in the field having strong shared intuitions about them in the same way as you treat things you "know" in the sense of having solid experimental evidence despite repeated attempts at refutation.

Comment by gjm on Mod Notice about Election Discussion · 2020-01-29T20:03:51.767Z · score: 10 (3 votes) · LW · GW

Ah, that makes some sense. It'll be interesting to see what happens, though of course the best outcome is that this announcement deters people from entering locust mode and nothing ever needs to be done, after which everyone says "see, there was no need to do anything!" :-).

Comment by gjm on Mod Notice about Election Discussion · 2020-01-29T11:55:27.153Z · score: 4 (3 votes) · LW · GW

I also don't recall ever seeing locust swarms here. Occasional isolated locusts, but they're OK when there are only a few. I too would be interested to know whether there are specific reasons for expecting a troublesome locust influx this time around.

Comment by gjm on On hiding the source of knowledge · 2020-01-29T02:17:18.447Z · score: 18 (6 votes) · LW · GW

Yes, I agree that there could be genuine knowledge to be had in such a case. But it seems to me that what it takes to make it genuine knowledge is exactly what the OP here is lamenting the demand for.

Suppose you practice some sort of (let's say) meditation, and after a while you become inwardly convinced that you are now aware at all times of the level of cortisol in your blood. You now try doing a bunch of things and see which ones lead to a "higher-cortisol experience". Do you have knowledge about what activities raise and lower cortisol levels yet? I say: no, because as yet you don't actually know that the thing you think is cortisol-awareness really is cortisol-awareness.

So now you test it. You hook up some sort of equipment that samples your blood and measures cortisol, and you do various things and record your estimates of your cortisol levels, and afterwards you compare them against what the machinery says. And lo, it turns out that you really have developed reliably accurate cortisol-awareness. Now do you have knowledge about what activities raise and lower cortisol levels? Yes, I think you do (with some caveats about just how thoroughly you've tested your cortisol-sense; it might turn out that it's usually good but systematically wrong in some way you didn't test).

But this scientific evidence that your cortisol-sense really is a cortisol-sense is just what it takes to make appeals to that cortisol-sense no longer seem excessively subjective and unreliable and woo-y to hard-nosed rationalist types.

The specific examples jessicata gives in the OP seem to me to be ones where there isn't, as yet, that sort of rigorous systematic modernism-friendly science-style evidence that intuition reliably matches reality.

Comment by gjm on What research has been done on the altruistic impact of the usual good actions? · 2020-01-28T14:51:29.194Z · score: 5 (3 votes) · LW · GW

These actions are mostly low-impact (in comparison with saving lives, preventing environmental catastrophe, etc.) but also low-effort and frequently-occurring. The right measure might be something like "impact per unit input" or "impact per person-year", and I suspect they then look less negligible by comparison with big-ticket effective altruism activity.

They also tend to affect people close to us about whom we care a lot. It's not at all clear what the best ways of balancing such "near" interests against those of distant strangers are (or indeed whether "what are the best ways to do that?" is a meaningful question at all), but clearly most of us, effective altruists included, in practice give much higher weight to our own welfare and that of a small number of people we care specially for than to that of random others. So if "altruistic impact" is meant to mean the same sort of evaluation as we use for malaria nets etc., it may not be the right thing to try to measure here.

Comment by gjm on Healing vs. exercise analogies for emotional work · 2020-01-28T14:42:23.672Z · score: 9 (4 votes) · LW · GW

If those things are multiplicative rather than additive, then improving one of them by 10% does make your whole life 10% better.

Obviously real life is more complicated than either a simple additive model or a simple multiplicative model. But I'd expect there to be things that operate multiplicatively. E.g., suppose you have a vitamin deficiency that means your energy levels are perpetually low; that might mean that you're doing literally everything in your life 10% worse than if that problem were solved.

(Obvious conclusion, if the above is anything like right: it's worth putting some effort into figuring out which of your problems affect everything else, so that making them 10% better makes everything 10% better, and which are independent of everything else, so that making them 10% better makes everything only 0.01% better.)
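(A toy sketch of the additive-vs-multiplicative point, with entirely made-up numbers: four equally-weighted areas of life, one of which gets 10% better.)

```python
# Purely illustrative: how a 10% improvement in one area of life propagates
# under an additive model versus a multiplicative one.

def additive(factors):
    return sum(factors)

def multiplicative(factors):
    total = 1.0
    for f in factors:
        total *= f
    return total

base = [1.0, 1.0, 1.0, 1.0]      # four equally-weighted areas of life
improved = [1.1, 1.0, 1.0, 1.0]  # one area made 10% better

add_gain = additive(improved) / additive(base) - 1
mul_gain = multiplicative(improved) / multiplicative(base) - 1

print(round(add_gain, 3))  # 0.025: only a 2.5% overall gain
print(round(mul_gain, 3))  # 0.1: the full 10% shows up in the total
```

With more independent areas the additive gain shrinks toward negligible, while the multiplicative gain stays at the full 10%, which is the asymmetry the vitamin-deficiency example is pointing at.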

Comment by gjm on On hiding the source of knowledge · 2020-01-28T14:24:48.130Z · score: 5 (3 votes) · LW · GW

(I guess that when you wrote "piano" you meant "violin".) I agree: skills are acquired and preserved in a different way from factual knowledge, and there are mental skills as well as physical, and they may be highly relevant to figuring out what's true and what's false; e.g., if I present Terry Tao with some complicated proposition in (say) the theory of partial differential equations and give him 15 seconds to guess whether it's true or not then I bet he'll be right much more frequently than I would even if he doesn't do any explicit reasoning at all, because he's developed a good sense for what's true and what isn't.

But he would, I'm pretty sure, still classify his opinion as a hunch or guess or conjecture, and wouldn't call it knowledge.

I'd say the same about all varieties of mental metis (but cautiously, because maybe there are cases I've failed to imagine right). Practice (in various senses of that word) can give you very good hunches, but knowledge is a different thing and harder to come by.

One possible family of counterexamples: for things that are literally within yourself, it could well be possible to extend the range of things you are reliably aware of. Everyone can tell you, with amply justified confidence, whether or not they have toothache right now. Maybe there are ways to gain sufficient awareness of your internal workings that you have similar insight into whether your blood pressure is elevated, or whether you have higher than usual levels of cortisol in your bloodstream, etc. But I don't think this is the kind of thing jessicata is talking about here.

[EDITED to add:]

I wouldn't personally tend to call what-a-skilled-violin-player-has-but-can't-transfer-verbally "knowledge". I would be happy saying "she knows how to play the violin well", though. (Language is complicated.) I also wouldn't generally use the word "ideas". So (to whatever extent jessicata's language use is like mine, at least) the violin player may provide a useful analogy for what this post is about, but isn't an actual example of it, which is why I made the switch above from violinist to mathematician.

This whole discussion might be clearer with a modest selection of, say, 3-5 concrete examples of ideas jessicata has arrived at via epistemic modalities that modernist academic thinking doesn't care for, and might be tempted to justify via rigorous proof, academic sources, etc.; we could then consider what would happen to those cases, specifically, with a range of policies for what you say about where your ideas come from and why you believe them.

Comment by gjm on On hiding the source of knowledge · 2020-01-27T14:57:38.666Z · score: 5 (3 votes) · LW · GW

If you're trying to understand literal gears then a simple model that says "the amount by which this one turns equals the amount by which that one turns, measured in teeth" (or something like that) is often sufficient even though it may break down badly if you try to operate your machine at a temperature of 3000 kelvin or to run it at a million RPM.

[EDITED to add:] I think you may have misparsed the end of romeostevensit's comment. Try it like this: "Gears themselves are a black box. But, since we are rarely designing for environments at the extremes of steel's properties, we don't have to think about it."
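(For what it's worth, the simple "measured in teeth" model can be written out explicitly; the gear sizes here are just illustrative.)

```python
# Simple meshed-gear model: the number of teeth passing the contact point
# is the same for both gears, so rotations are related by the tooth ratio.

def driven_rotation(driver_turns, driver_teeth, driven_teeth):
    """Turns of the driven gear, given turns of the driver."""
    teeth_passed = driver_turns * driver_teeth  # identical for both gears
    return teeth_passed / driven_teeth

# A 20-tooth gear turning twice drives a 40-tooth gear through one full turn.
print(driven_rotation(2, 20, 40))  # 1.0
```

That one-line relation is the whole model, and it's plenty, right up until you're at 3000 kelvin or a million RPM.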

Comment by gjm on "Future of Go" summit with AlphaGo · 2020-01-27T02:26:03.469Z · score: 2 (1 votes) · LW · GW

Yes, but not as often as I'd like and not very well.

Comment by gjm on On hiding the source of knowledge · 2020-01-26T22:51:09.948Z · score: 14 (11 votes) · LW · GW

First paragraph:

... the way I come up with ideas is ...

Third paragraph (after the bulleted list):

This risks hiding where the knowledge actually came from.

(Added emphasis mine.) It seems to me (and I guess I'm fairly typical of non-Berkeley rationalists in this) that it's 100% unproblematic to have your ideas come from Focusing, "near-psychotic experiences", Taoism, etc., but 100% problematic to claim that things that come from those are knowledge without some further evidence of a less-squishy kind.

Comment by gjm on 2018 Review: Voting Results! · 2020-01-24T18:05:39.100Z · score: 4 (2 votes) · LW · GW

Ah, OK. I'm convinced :-).

Comment by gjm on 2018 Review: Voting Results! · 2020-01-24T13:09:14.956Z · score: 16 (6 votes) · LW · GW

So one user spent 465 of their 500 available votes to downvote "Realism about Rationality".

I wonder whether that reflects exceptionally strong dislike of that post, or whether it means that they voted "No" on that and nothing on anything else, and then the -30 is just what the quadratic-vote-allocator turned that into.

I suspect the latter, and further suspect that whoever it was might not have wanted their vote interpreted quite that way. (Not with much confidence in either case.)

If a similar system is used on future occasions, it might be a good idea to limit how strong a single vote can get for users who don't cast many votes. Of course you should be able to spend your whole budget on downvoting one thing you really hate, but you should have to do it deliberately and consciously.
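(I don't know the vote-allocator's exact cost schedule, but one plausible quadratic-voting-style scheme, where each extra point of vote strength costs one point more than the last, fits the numbers suspiciously well: a strength-30 vote would cost exactly 465 of a 500-point budget.)

```python
# Assumed cost schedule (a guess, not necessarily LW's actual one):
# the k-th point of vote strength costs k, so a vote of strength k
# costs 1 + 2 + ... + k = k*(k+1)//2 points in total.

def vote_cost(strength):
    k = abs(strength)
    return k * (k + 1) // 2

print(vote_cost(30))  # 465: a single -30 vote nearly exhausts a 500-point budget
```

Under that schedule, "voted No on one post and nothing else" and "spent 465 points on a -30" would be the same event described two ways.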

Comment by gjm on Whipped Cream vs Fancy Butter · 2020-01-21T16:40:25.749Z · score: 4 (2 votes) · LW · GW

Fair. (Apart from the bit about having them simultaneously.) I didn't think of that because I wouldn't generally eat toast with nothing on it but butter.

Comment by gjm on Whipped Cream vs Fancy Butter · 2020-01-21T16:39:47.895Z · score: 2 (1 votes) · LW · GW

I'm in the UK. Dairy products here are commonly pasteurized, but to me UHT means something much more extreme which spoils the flavour and I certainly wouldn't expect cream to be UHT-ed. Is cream really UHT by default in the US? Ewww.