Posts

Maybe you want to maximise paperclips too 2014-10-30T21:40:37.232Z

Comments

Comment by dougclow on Maybe you want to maximise paperclips too · 2014-10-31T07:07:11.636Z · LW · GW

Empirically we seem to be converging on the idea that the expansion of the universe continues forever (see Wikipedia for a summary of the possibilities), but it's not a total slam-dunk yet. If there is a Big Crunch, then that puts a hard limit on the time available.

If - as we currently believe - that doesn't happen, then the universe will cool over time, until it gets too cold (=too short of negentropy) to sustain any given process. A superintelligence would obviously see this coming, and have plenty of time to prepare - we're talking hundreds of trillions of years before star formation ceases. It might be able to switch to lower-power processes to continue in attenuated form, but eventually it'll run out.

This is, of course, assuming our view of physics is basically right and there aren't any exotic possibilities like punching a hole through to a new, younger universe.

Comment by dougclow on Maybe you want to maximise paperclips too · 2014-10-31T06:53:00.425Z · LW · GW

Yes, good point that I hadn't thought of, thanks. It's very easy to imagine far-future technology in one respect and forget about it entirely in another.

To rescue my scenario a little: there'll be an energy cost in gathering the iron together, and the cheapest way is to move it very slowly. So maybe there'll be paperclips left for a period of time between the first pass of the harvesters and the matter ending up at the local black hole harvester.

Comment by dougclow on What false beliefs have you held and why were you wrong? · 2014-10-28T19:08:50.286Z · LW · GW

That, and/or increased sweating, and/or a larger temperature gain between inspired and expired air, or wearing fewer/thinner clothes. There are lots of ways to dump heat.

I would definitely expect someone with a faster metabolism to put out more total net heat, which is measurable only with difficulty, and also to consume oxygen faster (and produce carbon dioxide faster), which is measurable with some difficulty, but a lot less.

Comment by dougclow on What false beliefs have you held and why were you wrong? · 2014-10-20T20:14:33.707Z · LW · GW

Therefore they must not vary all that much in terms of metabolism.

I don't think that follows, or at least not without a lot of other explanation, even if you grant that temperature doesn't vary in any significant way between people (which I'm not sure I do). The body has multiple mechanisms for maintaining temperature, of which metabolic rate is only one. It seems entirely plausible to me that people run their metabolisms at different rates and adjust their peripheral vasodilation and sweating rate to balance it all out near 37 C/98 F. Core temperature might vary between people by only a few degrees, but surface temperature varies much more widely.

Comment by dougclow on LINK: Top HIV researcher killed in plane crash · 2014-07-20T06:56:53.443Z · LW · GW

Also, they were not just AIDS researchers but AIDS activists and campaigners. The conference they were going to was expecting 12-15,000 delegates (depending on the report); it's the most prominent international conference in the area but far from the only one. As you say, a terrible loss, particularly for those close to the dead. The wider HIV/AIDS community will be sobered, but it will not be sunk. If nothing else, they coped with far higher annual death rates before effective therapies became widespread in the developed world.

The story of this story does helpfully remind us that the other 'facts' about this situation - which we know from the same media sources - may be similarly mistaken.

Comment by dougclow on Rebutting radical scientific skepticism · 2014-05-01T15:52:19.774Z · LW · GW

Much of modern medicine involves covering up symptoms with drugs proven to do this, without understanding the underlying cause of the symptom.

What, really? There certainly is a lot of that approach around, but it's not what I think of when I think of modern medicine, as opposed to more traditional forms. Can you give examples?

Most of the ones I can think of are things that have fallen to the modern turn to evidence-based practice. The poster-child one in my head is the story of H. pylori and how a better understanding of the causes of gastritis and gastric ulcers has led to better treatments than the old symptom-relieving approaches. (And I'll tell you what, although Zantac/Ranitidine is only a symptomatic reliever, it was designed to do that job based on a thorough understanding of how that symptom comes about, and it's bloody good at it, as anyone who's had it for bad heartburn or reflux can attest.)

When I think of modern medicine, I think of things like Rituximab, which is a monoclonal antibody designed with a very sophisticated understanding of how the body's immune system works - it targets B cells specifically, and has revolutionised drug treatment for diseases like non-Hodgkin's lymphomas where you want to get rid of B cells. So much so that for some of those lymphomas, we don't have very robust 5 year survival data, because the improvement over traditional chemotherapy alone is so large that the old survival data is no use (we know people will live much longer than that), and Rituximab hasn't been widely used for long enough to get new data. In the last 25 years our understanding of cancer has gone from "it's mutations in the genes, probably these ones" to vast databases of which specific mutations at which specific locations on which specific genes are associated with which specific cancer symptoms, and how those are correlated with prognosis and treatment. And as a result cancer survival rates have improved markedly. We don't have "A Cure For Cancer", and we now know we never will, any more than we can have "A Cure For Infection", but we do have a good enough understanding of how it happens to get much better at reducing its impact.

Even modern medical disasters like Vioxx are hardly a result of a lack of understanding of the underlying cause; they're more a result of our learning about further complexities of human biology. Admittedly we don't yet fully understand how pain works, but we do know enough to know that targeting COX-2 exclusively (rather than COX-1 as well, which looks after your gut lining) would be safer for your gut. This is understanding down at the molecular level. It turns out in large-scale studies that such drugs are indeed safer for your gut, but of course they're not very safe for your heart, so we've stopped using them. And actually doing the full-scale research on modern rationally-designed drugs like Vioxx suggests that similar old drugs (that we never bothered to test) have the same effect on hearts.

Comment by dougclow on Mechanism Design: Constructing Algorithms for Strategic Agents · 2014-05-01T14:13:49.787Z · LW · GW

Interesting stuff, thanks; looking forward to the rest of the series.

As an aside, this makes the benefits of being able to rely on trust most of the time very apparent. Jack and Jill can coordinate very simply and quickly if they trust each other to honestly disclose their true value for the project. They don't even need to be able to trust 100%, just trust enough that on average they lose no more to dishonesty than the costs of more complex and sophisticated methods of bargaining. (Which require more calculating capacity than evolution has given unaided humans.)

Comment by dougclow on Positive Queries - How Fetching · 2014-05-01T12:27:16.110Z · LW · GW

I find similar techniques help with my children.

It seems closely related to the technique where, to stop them doing something you don't want them to do, you encourage them to do something else that prevents them from doing the first thing. (There's a snappy name for this that I've forgotten.) So, for example, stopping them from bothering another child by getting them interested in an entirely different activity.

Comment by dougclow on Request for concrete AI takeover mechanisms · 2014-05-01T11:38:47.045Z · LW · GW

I really don't think we have to posit nanoassemblers for this particular scenario to work. Robot drones are needed, but I think they fall out as a consequence of currently existing robots and the all-singing all-dancing AI we've imagined in the first place. There are shedloads of robots around at the moment - the OP mentioned the existence of Internet-connected robot-controlled cars, but there are plenty of others, including in most high-tech manufacturing. Sure, those robots aren't autonomous, but they don't need to be if we've assumed an all-singing all-dancing AI in the first place. I think that might be enough to keep the power and comms on in a few select areas with a bit of careful planning.

Rebuilding/restarting enough infrastructure to be able to make new and better CPUs (and new and better robot extensions of the AI) would take an awfully long time, granted, but the AI is free of human threat at that point.

Comment by dougclow on Rebutting radical scientific skepticism · 2014-05-01T11:03:01.926Z · LW · GW

One thing I should mention: a case where I wasn't able to get a very good match between my own observations and mainstream science.

The Sun and the Moon are very, very close in their apparent diameter in the sky. They are almost exactly the same size. You can measure them yourself and compare, although this is a bit fiddly; I certainly got agreement well within my own measurement errors, although those errors were large. However, you can verify it very easily and directly at the time of solar eclipses. They are so near in size that the wobbliness of the Moon's orbit means that sometimes the Sun is just-smaller than the Moon (when you get a total eclipse) and sometimes it is just-bigger (when you get an annular eclipse).

But they are very, very different in their actual size, and in their distance from the Earth. In Father Ted terms, the Moon is small and close; the Sun is large and far away. In rough terms, the Moon is 400,000 km away and 3,400 km across, and the Sun is 150 million km away and 1.4 million km across. You don't have to change any one of those four measurements much for them to be quite different apparent sizes from the Earth. Indeed, if you do the calculations (which I can personally attest to), if you go back far enough in time they weren't the same apparent size, and nor are they if you go forward a long way into the future.
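
(For anyone who wants to check that arithmetic themselves, here's a minimal sketch in R. The inputs are the rough figures above, so the outputs are only rough too - but both come out at about half a degree:)

```r
# Apparent (angular) diameter from physical diameter and distance
angular_diameter_deg <- function(diameter_km, distance_km) {
  2 * atan((diameter_km / 2) / distance_km) * 180 / pi
}

angular_diameter_deg(3400, 400000)         # Moon: ~0.49 degrees
angular_diameter_deg(1400000, 150000000)   # Sun:  ~0.53 degrees
```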

Why? Why this coincidence? And why is it only happening at just the times when humans are around to observe it?

So far as I know, we have no good theories apart from "it just happened to work out that way". This is pretty unsatisfying.

Comment by dougclow on Request for concrete AI takeover mechanisms · 2014-05-01T10:44:39.959Z · LW · GW

An AI controlling a company like Google would be able to, say, buy up many of the world’s battle robot manufacturers, or invest a lot of money into human-focused bioengineering, despite those activities being almost entirely unrelated to their core business, and without giving any specific idea of why.

Indeed, on the evidence of the press coverage of Google's investments, it seems likely that many people would spend a lot of effort inventing plausible cover stories for the AI.

Comment by dougclow on Request for concrete AI takeover mechanisms · 2014-05-01T10:40:29.160Z · LW · GW

I'll grant that "a very large proportion of the world's computing resources" was under-specified and over-stated. Sorry.

Comment by dougclow on Rebutting radical scientific skepticism · 2014-05-01T10:38:29.193Z · LW · GW

Bedford Level Experiment [...] has the disadvantage that it shows that the Earth is flat.

I love this. As it happens, I live quite near Bedford and am terribly tempted to actually try it one day. (Edit: Looking closer, it turns out the Bedford Level is in Norfolk, not Bedfordshire, so a little less nearby than I thought.)

There are loads of fun ways of verifying that the Earth isn't flat. Some of these were easily available to the ancients - e.g. the shape of the shadow of the Earth on the Moon during a lunar eclipse (it's always a curve). Others are easier now than they used to be - e.g. the variations in the constellations you can see as you travel north-south (it's much easier to travel far enough to see this than it used to be).

Some, however, simply weren't available.

My favourite explanation for how we know for sure the Earth is round is that we've been up in to space and looked. You can even verify this yourself with a GoPro and a high-altitude balloon, which many hobbyists have done.

Comment by dougclow on Rebutting radical scientific skepticism · 2014-05-01T10:26:37.075Z · LW · GW

I spent quite a lot of time many years ago doing my own independent checks on astronomy.

I started down this line after an argument with a friend who believed in astrology. It became apparent that they were talking about planets being in different constellations to the ones I'd seen them in. I forget the details of their particular brand of astrology, but they had an algorithm for calculating a sort-of 'logical' position of the planets in the 12 zodiacal signs, and this algorithm did not match observation, even given that the zodiacal signs do not line up neatly with modern constellations. They were scornful that I was unable to tell them where, say, Venus would be in 12 years' time, or where it was when I was born.

So challenged, I set to.

The scientific algorithms for doing this are not entirely trivial. I got hold of a copy of Jean Meeus' Astronomical Algorithms, and it took me quite a lot of work to understand them, and then even longer to implement them so I could answer that sort of question. They are hopelessly and messily empirical (which I take as a good sign) - there is a daunting number of coefficients. Eventually I got it working, and could match observation to prediction of planetary positions to my satisfaction - when I looked at them, the planets were where my calculations said they should be, more or less.

It's hard with amateur equipment to measure accurate absolute locations in the sky (e.g. how high and in which direction is a particular star at a particular time), but relative ones are much easier (e.g. how close is Venus to a particular star at a particular time). The gold standard for this sort of stuff is occultations - where you predict that a planet will occult (pass in front of) a star. There weren't any of those happening around the time I was doing it, but I was able to verify the calculations for other occultations that people had observed (and photographed) at the dates and times I had calculated.
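
(If you want to try the relative-measurement route, the core sum is the angular separation between two sky positions. A minimal R sketch using the standard spherical law of cosines - the coordinates in the example are hypothetical, not real measurements:)

```r
# Angular separation between two points on the sky (e.g. a planet and
# a nearby star), given right ascension and declination in degrees
angular_separation_deg <- function(ra1, dec1, ra2, dec2) {
  to_rad <- pi / 180
  cos_sep <- sin(dec1 * to_rad) * sin(dec2 * to_rad) +
    cos(dec1 * to_rad) * cos(dec2 * to_rad) * cos((ra1 - ra2) * to_rad)
  acos(pmin(1, pmax(-1, cos_sep))) / to_rad  # clamp to guard against rounding
}

angular_separation_deg(10.0, 0.0, 11.0, 0.0)  # ~1 degree apart
```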

These days, software to calculate this stuff - and to visualise it, which I never managed - is widely available. There are many smartphone apps that will show you these calculations overlaid on to the sky when you hold your phone up to it. (Although IME their absolute accuracy isn't brilliant, which I think is due to the orientation sensors being not that good.) This makes checking these sorts of predictions very, very easy. Although of course you can't check that there isn't, say, a team of astronomers making observations and regularly adjusting the data that gets to your phone.

I was also able to independently replicate enough of Fred Espenak's NASA eclipse calculations to completely convince me he was right. (After I found several bugs in my own code.) Perhaps the most spectacular verification was replicating the calculations for the solar eclipse of 11 August 1999. I was also able to travel to the path of totality in France, and it turned up slap on time and in place. This was amazing, and I strongly urge anyone reading this to make the effort to travel to the path of totality of any eclipse they can.

Until I'd played around with these calculations, I hadn't appreciated just how spectacularly accurate they have to be. You only need a teeny-tiny error in the locations of the Sun/Moon/Earth system for the shadow cast by the moon on the Earth to be in a very different place.
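
(A back-of-envelope illustration of that sensitivity, in R - crude geometry, not a proper eclipse calculation:)

```r
# Roughly, an angular error in the Moon's computed position shifts the
# shadow on the ground by the Earth-Moon distance times that angle
moon_distance_km <- 384400
arcsec_to_rad <- pi / (180 * 3600)

moon_distance_km * 1 * arcsec_to_rad  # 1 arcsecond of error: ~1.9 km shift
```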

I also replicated the calculations for the transit of Venus in 2004. I was able to observe it, and it took place exactly as predicted so far as I was able to measure - to within, say, 10 seconds or so. (I didn't replicate the calculations for the transit in 2012 - no time and I'd forgotten about how my ghastly mess of code worked - and I wasn't able to observe it either, since it was cloudy where I was at the time.)

More recently, you can calculate Iridium flares and ISS transits. Again, you have to be extremely accurate in calculations to be able to predict where they will occur, and they turn up as promised (except when it's cloudy). And again, there are plenty of websites and apps that will do the calculations for you. With a pair of high-magnification binoculars you can even see that the ISS isn't round.

All this isn't complete and perfect verification. But it's pretty good Bayesian evidence in the direction that all that stuff about orbits and satellites is true.

Comment by dougclow on Rebutting radical scientific skepticism · 2014-05-01T09:48:51.515Z · LW · GW

If you are at all mathematical, you can verify that relativity affects GPS signals by calculating what difference both special relativity (satellite clock moving faster than clock on Earth, hence slower) and general relativity (satellite clock higher up the gravitational field than clock on Earth) would make to timekeeping and hence accuracy of location. The effects work against each other, but one is larger than the other.
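
The sums are simple enough to sketch in R. This is only the first-order, back-of-envelope version (circular orbit, non-rotating Earth; the constants are standard textbook values), but it reproduces the well-known result that the general relativistic effect is the larger one:

```r
c_light <- 299792458       # speed of light, m/s
GM      <- 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
r_earth <- 6.371e6         # Earth's mean radius, m
r_sat   <- 2.6571e7        # GPS orbital radius (~20,200 km altitude), m
day     <- 86400           # seconds

v <- sqrt(GM / r_sat)      # orbital speed, ~3.87 km/s

# Special relativity: the moving satellite clock runs slow by ~v^2/(2c^2)
sr <- -(v^2 / (2 * c_light^2)) * day * 1e6                   # ~ -7 us/day

# General relativity: the clock higher up the gravity well runs fast
gr <- (GM / c_light^2) * (1/r_earth - 1/r_sat) * day * 1e6   # ~ +46 us/day

sr + gr  # net ~ +38 microseconds/day; uncorrected, that's roughly
         # 11 km/day of ranging error (multiply by the speed of light)
```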

You can verify the location accuracy of a GPS device yourself. IME this is almost always considerably less accurate than the manufacturer's published estimates, but still impressive. However, you need to be careful - most smartphones use multiple technologies to determine their location, not just GPS, so will be more accurate than the GPS signal alone can possibly be.

Comment by dougclow on Rebutting radical scientific skepticism · 2014-05-01T09:41:24.039Z · LW · GW

To be fair to the medievals, their theories about how one can build large, beautiful buildings were pretty sound.

Comment by dougclow on Request for concrete AI takeover mechanisms · 2014-04-29T19:54:06.993Z · LW · GW

Do you believe that if Obama were to ask the NSA to take over Russia, that the NSA could easily do so?

No. I think the phrase "take over" is describing two very different scenarios if we compare "Obama trying to take over the world" and "a hypothetical hostile AI trying to take over the world". Obama has many human scruples and cares a lot about continued human survival, and specifically not just about the continued existence of the people of the USA but that they thrive. (Thankfully!)

I entirely agree that killing huge numbers of people would be a stupid thing for the actual NSA and/or Obama to do. Killing all the people, themselves included, would not only fail to achieve any of their goals but thwart (almost) all of them permanently. I was treating it as part of the premises of the discussion that the AI is at least indifferent to doing so: it needs only enough infrastructure left for it to continue to exist and be able to rebuild under its own total control.

a long-term plan. But those are very risky. The longer it takes, the higher the chance that your plan is revealed.

Yes, indeed, the longer it takes the higher the chance that the plan is revealed. But a different plan may take longer but still have a lower overall chance of failure if its risk of discovery per unit time is substantially lower. Depending on the circumstances, one can imagine an AI calculating that its best interests lie in a plan that takes a very long time but has a very low risk of discovery before success. We need not impute impatience or hyperbolic discounting to the AI.

But here I'll grant we are well adrift in groundless and fruitless speculation: we don't and can't have anything like the information needed to guess at what strategy would look best.

Anyway, the assumption that an AI could understand human motivation, and become a skilled manipulator, is already too far-fetched for me to take seriously.

I wouldn't say I'm taking the idea seriously either - more taking it for a ride. I share much of your skepticism here. I don't think we can say that it's impossible to make an AI with advanced social intelligence, but I think we can say that it is very unlikely to be achievable in the near to medium term.

This is a separate question from the one asked in the OP, though.

Comment by dougclow on Request for concrete AI takeover mechanisms · 2014-04-28T10:03:17.578Z · LW · GW

Could the NSA, the security agency of the most powerful country on Earth, implement any of these schemes?

Er, yes, very easily.

Gaining effective control of the NSA would be one route to the AI taking over. Through, for example, subtle man-in-the-middle attacks on communications and records to change the scope of projects over time, stealthily inserting its own code, subtle manipulation of individuals, or even straight-up bribery or blackmail. The David Petraeus incident suggests op sec practice at the highest levels is surprisingly weak. (He had an illicit affair when he was Director of the CIA, which was stumbled on by the FBI in the course of a different investigation as a result of his insecure email practices.)

We've fairly recently found out that the NSA was carrying out a massive operation that very few outsiders even suspected - including most specialists in the field - and that very many consider to be actively hostile to the interests of humanity in general. It involved deploying vast quantities of computing resources and hijacking those of almost all other large owners of computing resources. I don't for a moment believe that this was an AI takeover plan, but it proves that such an operation is possible.

That the NSA has the capability to carry out such a task (though, mercifully, not the motivation) seems obvious to me. For instance, some of the examples posted elsewhere in the comments to this post could easily be carried out by the NSA if it wanted to. But I'm guessing it seems obvious to you that it does not have this capability, or you wouldn't have asked this question. So I've reduced my estimate of how obvious this is significantly, and marginally reduced my confidence in the base belief.

Alas, I'm not sure we can get much further in resolving the disagreement without getting specific about precise and detailed example scenarios, which I am very reluctant to do, for the reasons mentioned above, and many besides. (It hardly lives up to the standards of responsible disclosure of vulnerabilities.)

your hypothetical artificial general intelligence

It's not mine. :-) I am skeptical of this premise - certainly in the near term.

Comment by dougclow on Request for concrete AI takeover mechanisms · 2014-04-28T07:47:24.224Z · LW · GW

Another class of routes is for the AI to obtain the resources entirely legitimately, through e.g. running a very successful business where extra intelligence adds significant value. For instance, it's fun to imagine that Larry Page and Sergey Brin's first success was not a better search algorithm, but building and/or stumbling on an AI that invented it (and a successful business model) for them; Google now controls a very large proportion of the world's computing resources. Similarly, if a bit more prosaically, Walmart in the US and Tesco in the UK have grown extremely large, successful businesses based on the smart use of computing resources. For a more directly terrifying scenario, imagine it happening at, say, Lockheed Martin, BAE Systems or Raytheon.

These are not quick, instant takeovers, but I think it is a mistake to imagine that it must happen instantly. An AI that thinks it will be destroyed (or permanently thwarted) if it is discovered would take care to avoid discovery. Scenarios where it can be careful to minimise the risk of discovery until its position is unassailable will look much more appealing than high-risk short-term scenarios with high variance in outcomes. Indeed, it might sensibly seek to build its position in the minds of people-in-general as an invaluable resource for humanity well before its full nature is revealed.

Comment by dougclow on Request for concrete AI takeover mechanisms · 2014-04-28T07:20:04.581Z · LW · GW

For a fully-capable sophisticated AGI, the question is surely trivial and admits of many, many possible answers.

One obvious class of routes is to simply con the resources it wants out of people. Determined and skilled human attackers can obtain substantial resources illegitimately - through social engineering, fraud, directed hacking attack, and so on. If you grant the premise of an AI that is smarter than humans, the AI will be able to deceive humans much more successfully than the best humans at the job. Think Frank Abagnale crossed with Kevin Mitnick, only better, on top of a massive data-mining exercise.

(I have numerous concrete ideas about how this might be done, but I think it's unwise to discuss the specifics because those would also be attack scenarios for terrorists, and posting about such topics is likely - or ought to be likely - to attract the attention of those charged with preventing such attacks. I don't want to distract them from their job, and I particularly don't want to come to their attention.)

Comment by dougclow on Open thread, 18-24 March 2014 · 2014-03-20T13:54:50.936Z · LW · GW

R is free & open source, and widely used for stats, data manipulation, analysis and plots. You can get geographical boundary data from GADM in RData format, and use R packages such as sp to produce charts easily.

Or at least, as easily as you can do anything in R. I hesitate to suggest it to people who already do data work in Python (it's less ... clean) but in this sort of domain it can do many things easily that are much harder or less commonly done in Python. My impression is the really whizzy, clever stats/graphics stuff is still all about R. (See e.g. this geographic example.) There are many tutorials, some of them very good in parts, but it's famously slippery to get to grips with.

More on spatial data in R. You can also get a long way with the maps and mapdata packages.
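
For the very simplest start, here's a minimal sketch using just the maps and mapdata packages mentioned above (assuming they're installed; the GADM/sp route needs a downloaded boundary file first):

```r
library(maps)
library(mapdata)  # higher-resolution boundary data

map("worldHires", "UK")                          # outline of the UK
points(-0.1276, 51.5072, pch = 19, col = "red")  # mark London (lon, lat)
```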

Comment by dougclow on Calorie Restriction: My Theory and Practice · 2014-02-15T20:27:24.227Z · LW · GW

I consider myself to be "thin" even though my BMI of 24 puts me close to the official line for "overweight."

Aha! I think we've found the main source of our disagreement here, and it's purely terminology. Totally agree that maintaining a BMI around 24 is a reasonable, broadly-supported aspiration (all other factors being equal), particularly if you're younger than middle age.

this seems unlikely -- at least as the primary factor

Agreed it's probably not the largest effect, but I do think there's good reason to think there is an effect going that way. There seems to be a growing amount of evidence that low socio-economic status is bad for mortality, mostly indirectly (makes you more likely to do things like smoking, eating a diet with less fresh fruit and vegetables, etc) but also directly (low social status makes you die sooner), although of course separating that out of any naturalistic data is hard. (See e.g. this, and the older Whitehall studies.)

Comment by dougclow on Calorie Restriction: My Theory and Practice · 2014-02-15T19:42:18.114Z · LW · GW

My concern is particularly describing "thin" as healthy, or low risk for mortality. If by "thin" you mean BMI 18-25, then I'm with you, but that's officially labelled "healthy" or "normal" weight and is not what most people mean by thin. The official "underweight" category (<18) is much riskier than the official "overweight" category (25-30). The risk profile either side of official "healthy" weight is not symmetrical - and indeed there are sound reasons to think that tending towards the top end of "healthy" and in to "overweight" as you age is the least-risk track for weight over a life course.

There are many unhealthy conditions which can cause weight loss.

Indeed - eating disorders being a particularly notable group. I am concerned that erroneous messages that "thin is good and healthy" are exacerbating those problems, causing significant avoidable mortality. Thin is not good and healthy.

(You suggested "being fat puts a lot of abnormal extra strain on your system almost all the time"; I suspect being thin does too, since it means your body will struggle to find sufficient metabolic resources for things like healing processes, regeneration, the immune system and cellular repair mechanisms.)

The curves on that graph for "healthy subjects who never smoked" should exclude people with unhealthy conditions and diseases that affect their weight, and show the same pattern, albeit reduced - you have to get up in to the "obese" category (plotted at BMI ~31) to get a mortality risk as high as the "underweight" one.

One might be able to make a case that there is a particular subset of underweight people who do not experience the significantly raised mortality risk that other underweight people do, but I've not (yet?) seen a convincing one.

it's reasonable to believe that there is cause and effect on the right side of the chart.

Sure - but it's not simple and one-way. One can also reasonably interpret the data to find "being low socio-economic status" as a causal factor of both higher BMI and higher mortality risk. (And of course there are also diseases that cause weight gain and increased mortality.)

I think there's a good chance that these things will improve my longevity and perhaps more importantly I think it's pretty unlikely that I will be significantly worse off for having done these things.

Absolutely - that list seems a good distillation of my understanding of what the evidence supports too.

Comment by dougclow on Useful Personality Tests · 2014-02-15T16:23:24.977Z · LW · GW

If you happen to read your horoscope, or your Myers-Briggs personality type, or any similar sort of thing, and find that it fits quite well for you, I can recommend selecting a few others, not intended for you, and see if you can make them fit you as well. You can also use this technique with a credulous friend, by reading them the 'wrong' one.

For me this works well to undo the 'magic' effect. But then that's just the sort of shenanigans you'd expect from a truth-seeking Sagittarius or 'Teacher' ENFJ.*

* I'm not a Sagittarius and don't get ENFJ on M-B tests.

Comment by dougclow on Brainstorming: children's stories · 2014-02-15T16:09:41.038Z · LW · GW

A fun project! And one I'm trying to do for my kids.

One thing that worries me a little about trying to tell parables about these sorts of concepts is that, outside mathematical formalism, most real-world examples are not clear cut. Most fallacies, for instance, have versions that are useful real-world heuristics. Take post hoc ergo propter hoc. It is indeed strictly a fallacy to deduce that an event was caused by the event that immediately preceded it. But "What did you do differently just before it broke?" can be a really useful diagnostic question. And most of the time when you're a small kid, an adult pulling an appeal to authority on you really does know better than you do.

I'd worry less about trying to introduce abstract concepts to small kids and do more modelling/engaging/reinforcing of general curiosity, questioning, reasoning, and trying to figure things out for yourself. If they get that, they'll be able to pick up the abstract concepts for themselves, whether you are an effective teacher of them or not.

Kids seem remarkably immune - or even resistant - to adopting explicit 'morals' from stories (I know I was, and my own kids seem similar). But they do soak up general approaches and underlying values.

The best moments are when the kids ask about something. But for me it's often a fine balance between giving them the immediate answer (satisfying their curiosity and rewarding asking), and using it as an opportunity to build their ability to work things out for themselves.

Comment by dougclow on Calorie Restriction: My Theory and Practice · 2014-02-15T15:47:36.575Z · LW · GW

the observation that fat people have significantly greater mortality than thin people

That's not how I read that chart and the many similar ones showing mortality as a function of body mass index.

If, for the sake of argument, we make the (unreasonable and wrong!) assumption that the variance in mortality is caused by the variance in body mass index, it looks to me more like being fat is much less dangerous than being thin. Look at the shape of the curves as you move away from the minimum mortality trough around BMI 19-26 or so (which is slightly higher than many official guidelines for 'normal' weight). Sure, mortality increases steadily as BMI increases from the minimum, but it shoots up much more steeply as BMI decreases. Indeed, for all subjects, the BMI=17.5 bucket (the only one thinner than the minimum mortality trough) has higher mortality than the BMI=42.5 bucket (significantly fatter). To put that in concrete terms, the mortality risk for a six-foot high person (1.83m) is higher if they weigh 130 lb (59 kg, 9 stone 3 lb, BMI 17.5) than if they weigh 310 lb (142 kg, 22 stone 6 lb, BMI 42.5).
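
(For anyone wanting to check those unit conversions, a two-liner in R - the results land in the BMI 17.5 and 42.5 buckets as claimed:)

```r
# BMI = weight (kg) / height (m)^2
bmi <- function(weight_kg, height_m) weight_kg / height_m^2

bmi(59, 1.83)    # ~17.6: the 130 lb case
bmi(142, 1.83)   # ~42.4: the 310 lb case
```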

There aren't any data points in the buckets below BMI 17.5, and my guess is that's because people tend not to have BMIs in that range - or if they do it's not for very long, because they die. Of course, that's often because - contra the assumption above - causality there often clearly runs the other way: people who get terminally ill often lose an awful lot of weight before they die (cachexia).

Curves like the one you link to are a common finding. Another common finding is that the curve shifts to the right as people age - i.e. the lowest-risk BMI increases over time for an individual, and being officially 'overweight' (BMI 25-30) has lower all-cause mortality than being officially 'healthy weight' (BMI 18-25). (Can't instantly put my hand on a good example, but this study found the mortality minimum was 27.1 for people aged 70-75.)

That seems to me to be at least some evidence against the idea that to maximise your lifespan you should make your weight decrease as your life progresses.

This is very difficult stuff to get robust data about - it's awash in complex correlation/causation stuff, and getting hold of data is very hard. It's also overlaid with a lot of moralising that seems pretty unhelpful.

Comment by dougclow on Mental Subvocalization --"Saying" Words In Your Mind As You Read · 2014-02-15T14:53:52.631Z · LW · GW

I don't think I do much subvocalisation. There are certainly some words that I don't subvocalise: I often (about once a week or fortnight) have the experience of talking in person about a topic that I've previously only read and written about, and realising that I have never even tried to say key specialist vocabulary out loud, and so have no idea how to pronounce it.

Comment by dougclow on Stupid Questions Thread - January 2014 · 2014-02-15T14:48:20.639Z · LW · GW

:)

I got mine in a large pharmacist, in case you're still looking.

How often should I apply it?

I'd be guided by the instructions on the product and your common sense.

For me, a single application is usually enough these days - so long as I've been able to leave it on for ages and not have to wash my hands. The first time I used it, when my fingernails had got very bad, it took about three or four applications over a week. Then ordinary hand moisturiser and wearing gloves outside is enough for maintenance. Then I get careless and forget and my fingernails start getting bad again and the cycle repeats! But I'm getting better at noticing, so the cycles are getting shallower, and I've not actually had to use the nail cream at all so far this winter. (Although it hasn't been a very cold one where I am.)

(Almost a month late, sorry.)

Comment by dougclow on Rethinking Education · 2014-02-15T09:35:06.040Z · LW · GW

This is the beginning of a very good idea. Happily, many, many highly-competent educational researchers have had it already, and some have pursued it to a fair degree of success, particularly in constrained domain fields (think science, technology, engineering, maths, medicine). It certainly seems to be blooming as a field again these last 5-10 years.

Potentially-useful search terms include: intelligent tutoring systems, AI in Education, educational data mining.

One particularly-nifty system is the Pittsburgh Science of Learning Center's DataShop, which is a shared, open repository of learner interactions with systems designed to teach along these lines. The mass of data there helps get evidence of which sequences of concepts actual learners find helpful, rather than which sequences teachers think they will find helpful.

Comment by dougclow on Stupid Questions Thread - January 2014 · 2014-01-17T10:00:10.289Z · LW · GW

Yes - that's the part I too have trouble with, and that these products and practices help. They also help the nail itself, but fewer people tend to have that problem.

In my explanation I should've said "Splitting/peeling nails, and troubles with the skin around them, are usually due to insufficient oil ...", sorry.

There's no reason why you should trust a random Internet person like me with health advice. But think cost/expected benefit. If your hangnails are anything like as painful and distracting as mine were, trying out a tube of nail cream, moisturiser, and a pair of gloves for a week is a small cost compared to even an outside chance that it'll help. (Unless the use of such products causes big problems for your self image.)

Comment by dougclow on Stupid Questions Thread - January 2014 · 2014-01-14T13:54:43.152Z · LW · GW

I'd be cautious about using nail polish and similar products. The solvents in them are likely to strip more oil from the nail and nail bed, which will make the problem worse, not better. +1 for asking a beautician for advice, but if you just pick a random one rather than one you personally trust, the risk is that they will give you a profit-maximising answer rather than a cheap-but-effective one.

Comment by dougclow on Stupid Questions Thread - January 2014 · 2014-01-14T13:52:43.986Z · LW · GW

To repair hangnails: Nail cream or nail oil. I had no idea these products existed, but they do, and they are designed specifically to deal with this problem, and do a very good job IME. Regular application for a few days fixes my problems.

To prevent it: Keep your hands protected outside (gloves). Minimise exposure of your hands to things that will strip water or oil from them (e.g. detergent, soap, solvents, nail varnish, nail varnish remover), and when you can't avoid those, use moisturiser afterwards to replace the lost oil.

(Explanation: Splitting/peeling nails is usually due to insufficient oil or, more rarely, insufficient moisture. I've heard some people take a paleo line that we didn't need gloves and moisturiser and nail oil in the ancestral environment. Maybe, but we didn't wash our hands with detergent multiple times a day then either.)

Comment by dougclow on New Year's Prediction Thread (2014) · 2014-01-01T11:57:30.245Z · LW · GW

Here's 2013's Prediction Thread.

Comment by dougclow on Online vs. Personal Conversations · 2014-01-01T09:44:50.861Z · LW · GW

This is very much my experience too. There is also a very high variance in quality of discourse in face-to-face situations.

I think it's slightly easier to have moderate-to-high quality discussions in asynchronous online writing (assuming that's what the participants want), because you can treat stuff-you-can-Google-easily as an assumed baseline of knowledge and competence.

A silly idea I have is to model the quality of conversation as a random walk. With no boundary, you will almost-surely sink below the YouTube Comment Event Horizon as time passes. But if you have Wikipedia as a lower bound, the average quality of discussions will tend to increase over time.
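
A toy simulation of that silly model in R (every parameter here is made up, of course - the point is only the qualitative difference a floor makes):

```r
set.seed(42)

# Random-walk conversation quality, optionally reflected off a lower bound
final_quality <- function(n_steps = 1000, lower_bound = -Inf) {
  q <- 0
  for (step in rnorm(n_steps)) {
    q <- max(lower_bound, q + step)  # the walk can't sink below the floor
  }
  q
}

mean(replicate(500, final_quality()))                 # ~0: no drift, huge variance
mean(replicate(500, final_quality(lower_bound = 0)))  # clearly positive
```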

Comment by dougclow on Doubt, Science, and Magical Creatures - a Child's Perspective · 2013-12-29T06:14:41.887Z · LW · GW

... and this is part of why my kids have always known that Santa and the Tooth Fairy are fun pretend games we play, not real. I really don't see what they're "missing out" on: they seem no less excited about Santa coming than other kids, and get no fewer presents.

Not lying about it has all sorts of extra benefits. It makes keeping the story straight easy. It means I'm not dreading that awkward moment when they've half-guessed the truth and ask about it outright. And I wasn't remotely tempted to tell them -as several people I know did - that the International Space Station pass on Christmas Eve was Santa on a warm-up run. Firstly, because that would mean you couldn't tell them about the ISS and how you can see it with your own eyes if you look up at the right time, and that's really cool. And secondly, because they'd have recognised it anyway.

It's also helpful social practice in behaving with integrity but respectfully when around people who passionately defend their supernatural beliefs.

Comment by dougclow on Open thread for December 9 - 16, 2013 · 2013-12-13T08:21:31.399Z · LW · GW

Another benefit for me is reduced mistakes in picking items from the list.

Some people don't use online shopping because they worry pickers may make errors. My experience is that they do, but at a much lower rate than I do when I go myself. I frequently miss minor items off my list on the first circuit through the shop, and don't go back for them because it'd take too long to find them. I am also influenced by in-store advertising, product arrangements, "special" offers and tiredness into purchasing items that I would rather not. It's much easier to whip out a calculator to work out whether an offer really is better when you're sat calmly at your laptop than when you're exhausted towards the end of a long shopping trip.

You'd expect paid pickers to be better at it - they do it all their working hours; I only do it once or twice a month. Also, all the services I've used (in the UK) allow you to reject any mistaken items at your door for a full refund - which you can't do for your own mistakes. The errors pickers make are different to the ones I would make, which makes them more salient - but they are no more inconvenient in impact on average.

Comment by dougclow on How do you tell proto-science from pseudo-science? · 2013-11-28T09:13:06.417Z · LW · GW

Isn't it the aspiration of the LW community for the causation to run the other way? That is, the LW community aspires to approve of protoscience but disapprove of pseudoscience.

Comment by dougclow on The Relevance of Advanced Vocabulary to Rationality · 2013-11-28T09:04:58.052Z · LW · GW

Also, I strongly suspect there are typical mind fallacy effects at work here.

Some people can think clearly without having words in their mind, and tend to assume that of course thought is possible without language. Other people can't think at all without words, and tend to assume that of course language is required for thought.

There's also a philosophical literature on 'thought without language' that I've never got to grips with, and the associated pop-philosophy stuff that's even harder to make sense of.

Comment by dougclow on 2013 Less Wrong Census/Survey · 2013-11-28T08:09:11.496Z · LW · GW

I took the survey.

I, like many others, was very amused at the structure of the MONETARY AWARD.

I'm not sure it was an advisable move, though. There's an ongoing argument about the effect of rewards on intrinsic motivation. But few argue that incentives don't tend to incentivise the behaviour they reward, rather than the behaviour the rewarder would like to incentivise. In this instance, the structure of the reward appears to incentivise multiple submissions, which I'm pretty sure is not something we want to happen more.

In some contexts you could rely on most of the participants not understanding how to 'game' a reward system. Here, not so much, particularly since we'd expect the participants to know more game theory than a random sample of the population, and the survey even cues such participants to think about game theory just before they submit their response. Similarly, the expectation value of gaming the system is so low that one might hope people wouldn't bother - but again, this audience is likely to have a very high proportion of people who like playing games to win in ways that exercise their intelligence, regardless of monetary reward.

So I predict there will be substantially more multiple submissions this time compared to years with no monetary reward.

I'm not sure how to robustly detect this, though: all the simple techniques I know of are thwarted by using a Google Form. If the prediction is true, we'd expect more submissions this year than last year - but that's overdetermined since the survey will be open for longer and we also expect the community to have grown. The number of responses being down would be evidence against the prediction. A lot of duplicate or near-duplicate responses aren't necessarily diagnostic, though a significant increase compared to previous years would be pretty good evidence. The presence of many near-blank entries with very little but the passphrase filled in would also be very good evidence in favour of the prediction.

(I used thinking about this as a way of distracting myself from thinking what the optimal questionnaire-stuffing C/D strategy would be, because I know that if I worked that out I would find it hard to resist implementing it. Now I think about it, this technique - think gamekeeper before you turn poacher - has saved me from all sorts of trouble over my lifespan.)

Comment by dougclow on Open Thread, November 23-30, 2013 · 2013-11-25T08:37:22.468Z · LW · GW

Starting today, Monday 25 November 2013, some Stoic philosophers are running "Stoic Week", a week-long mass-participation "experiment" in Stoic philosophy and whether Stoic exercises make you happier.

There is more information on their blog.

To participate, you have to complete the initial exercises (baseline scores) by midnight today (wherever you are), Monday 25 November.

Comment by dougclow on The sun reflected off things · 2013-11-22T19:13:27.051Z · LW · GW

If that's an interesting insight for you, you might get a kick out of realising that trees come from out of the air.

Comment by dougclow on Open Thread, November 15-22, 2013 · 2013-11-20T21:27:19.874Z · LW · GW

I think this is spectacularly hard to get a robust estimate of, but my wild uninformed guess is your chances of dying of it interacting with your heart condition are less than 25%, and probably less than 5%. (I try not to pull probabilities higher or lower than 5%/95% out of the air - I need a model for that.) That's for the simple case where you don't get addicted and take ever-higher doses or start taking other stimulants too or start smoking, etc.

The only hard information I can get a handle on is that the US manufacturer lists existing cardiovascular conditions as a potential contraindication. I suspect this is on general principles (stimulants are known to make them worse, modafinil is a stimulant, sort-of) rather than on hard data about problems caused.

Reporting systems for drug side effects are haphazard and leaky at best, and it's very hard to do decent analysis. Unusual combinations that aren't very deadly just aren't going to show up in the research. The fact that we haven't heard that it's deadly does, though, put something of a ceiling on just how bad it could be (my 25% above).

Most medics would reckon taking stimulants you don't have to when you have a known cardiovascular condition is unwise. (Although some of them do it themselves in early career.) Quantifying 'unwise' is tricky. There's the general issue of data I just mentioned. Then there's trying to think it through. On the plus side, modafinil is less likely to cause problems for CV patients in the way that other more general CNS stimulants are known to; but on the minus side, we don't properly understand how it does work.

Doctors are by nature very cautious: "first, do no harm" and all that. You might come to a different cost/benefit decision.

FWIW, I wouldn't take it in your shoes. But I don't take it myself, despite having no contraindications. I'm extremely risk averse, particularly about my own life, and place more emphasis on quantity than quality compared to most people (on hedonic adaptation grounds).

Comment by dougclow on Lotteries & MWI · 2013-11-19T09:59:30.875Z · LW · GW

if the lottery has one-in-a-million odds, then for every million timelines in which you buy a lottery ticket, in one timeline you'll win it

I don't understand this way of thinking about MWI, but in a single universe, you will only win a one-in-a-million lottery one time in a million on average if you play it many, many millions of times. You can easily buy a million lottery tickets and not get a winner at 1-in-a-million odds - in fact the chances of that happening are just short of 37%. Think of how often in a "throw a six to start" game some poor player hasn't started after six or more turns.

Sums: Chance of no win in a one-off 1-in-N lottery is (N-1)/N. After N tries, the chance of no win is ((N-1)/N)^N - which astonishingly (to me) converges quite rapidly on 1/e, or just short of 37%.
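
(Both the "throw a six" figure and the convergence are a one-liner to check in R:)

```r
# Chance of no win after N tries at 1-in-N odds
no_win <- function(N) ((N - 1) / N)^N

no_win(6)     # "throw a six to start": ~0.335 after six turns
no_win(1e6)   # a million 1-in-a-million tickets: ~0.368
exp(-1)       # the limit, 1/e: ~0.368
```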

(Thanks to ciphergoth for pointing the convergence out to me elsewhere.)

Comment by dougclow on Design-space traps: mapping the utility-design trajectory space · 2013-11-11T16:51:36.220Z · LW · GW

Fitness does have a relatively strong correlation with overall human utility.

I really don't think that's true, if you mean 'fitness' in the evolutionary sense. One massive counterexample is the popularity of birth control - which seems to rise as people feel better off. Evolutionary fitness is not what we, as humans, value. And a good job too, I say: evolution produces horrors and monstrosities, favouring only those things that tend to reproduce.

Comment by dougclow on Rationality Quotes November 2013 · 2013-11-08T14:40:02.385Z · LW · GW

I'm not sure that's true in general. I can think of situations where the prudent course of action is to act as fast as possible. For instance, if you accidentally set yourself on fire on the cooker, if you are acting prudently, you will stop, drop and roll, and do it hastily.

Comment by dougclow on Yes, Virginia, You Can Be 99.99% (Or More!) Certain That 53 Is Prime · 2013-11-08T12:14:29.454Z · LW · GW

Generally, you should not be in the habit of doing things that have a 0.1% chance of killing you. Do so on a daily basis, and on average you will be dead in less than three years

Indeed!

It's even worse than that might suggest: 0.999^(3*365.25) = 0.334, so after three years you are almost exactly twice as likely to be dead as alive.

To get to 50%, you only need 693 days, or about 1.9 years. Conversely, you need a surprising length of time (about 6,900 days, or 18.9 years) to reduce your survival chances to 0.001.
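
(The sums, in R:)

```r
p_daily <- 0.999  # daily survival probability

p_daily^(3 * 365.25)        # ~0.334 still alive after three years
log(0.5)   / log(p_daily)   # ~693 days to a 50% chance of death
log(0.001) / log(p_daily)   # ~6905 days (~18.9 years) to 0.1% survival
```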

The field of high-availability computing seems conceptually related. This is often considered in terms of the number of nines - so 'five nines' is 99.999% availability, or <5.3 min downtime a year. It often surprises people that a system can be unavailable for the duration of an entire working day and still hit 99.9% availability over the year. The 'nines' sort-of works conceptually in some situations (e.g. a site that makes money from selling things can't make money for as long as it's unavailable). But it's not so helpful in situations where the cost of an interruption per se is huge, and the length of downtime - if it's over a certain threshold - matters much less than whether it occurs at all. There are all sorts of other problems, on top of the fundamental one that it's very hard to get robust estimates for the chances of failure when you expect it to occur very infrequently. See Feynman's appendix to the report on the Challenger Space Shuttle disaster for amusing/horrifying stuff in this vein.
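
(Again, the arithmetic is quick to check in R:)

```r
minutes_per_year <- 365.25 * 24 * 60

minutes_per_year * (1 - 0.99999)  # 'five nines' allows ~5.3 minutes/year
1 - (8 * 60) / minutes_per_year   # an 8-hour outage still leaves ~99.91%
```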

Very big and very small probabilities are very very hard.

Comment by dougclow on New vs. Business-as-Usual Future · 2013-11-06T10:04:10.621Z · LW · GW

I wonder whether some of the inferential distance here is around what is understood by 'the human experience'.

Materially, the human experience has changed quite profoundly, along the lines Vaniver points out (dramatic improvements in life expectancy, food supply, mechanisation, transport and travel, and so on).

Subjectively, though, the human experience has not changed much at all: the experience of love, loss, fear, ambition, in/security, friendship, community, excitement and so on seems to have been pretty much the same for humans living now as it was for humans living as far back as we have written records, and almost certainly well before that. Certainly when I read historical accounts I'm often struck with how similar the people seem to me and people I know, even when they are living in very different circumstances. This, I'm guessing, is what katydee is getting at.

So the human subjective experience of, say, having an immediate family member die has not changed fundamentally, but the rate at which humans have that experience has changed fundamentally.

(Reflecting on this makes me feel very, very glad indeed to live now rather than at any time in the past. For instance, Darwin seems to have been as besotted by his kids as I am by mine, and I expect I'd be just as upset as he was were one of my children to die of scarlet fever, but it's extremely unlikely to happen to me - or indeed anyone I know - because it's almost always very easy to treat now. This has knock-on effects too: I get nervous and worried whenever my kids get ill, but nothing like as nervous and worried as he did, because I know that the chances that they'll die are so much lower.)

I suspect this latter change in the human experience is what is meant by most of the people saying that it has changed.

Comment by dougclow on From Philosophy to Math to Engineering · 2013-11-04T17:00:28.176Z · LW · GW

Guy on the right is Markus Kalisch.

Not sure about the one on the left - outside chance it's Bertrand Russell but probably not.

Comment by dougclow on Halloween thread - rationalist's horrors. · 2013-11-04T12:30:23.906Z · LW · GW

One evening, when I was in my mid-teens, my parents had gone out and were due back very late. For story-unrelated reasons there was a lot of tension, nervousness and worry in the household at that time. My younger brothers went to bed, and I stayed up a bit watching the film Cat's Eye, a mild horror film written by Stephen King.

In the final part of the film, a girl is threatened by a vicious troll, a short, ugly, nasty creature with a dagger. It repeatedly creeps in to her bedroom in the night, first slaughtering her pet parrot, and then trying to kill her by sucking her breath out. She's defended by a stray cat, but unfortunately when her parents come in, there's no sign of the troll, only the cat, so the parents don't believe her and blame the luckless animal for the mayhem.

While I was watching this, one of my brothers came in from his bedroom, clearly upset. He'd heard something creeping in to his bedroom, first opening the door, then walking across the floor. He was scared. I instantly thought of the vicious troll from the film, but with my rational brain knew it couldn't possibly be that. I also knew he hadn't seen the film. So I tried to reassure him, and talked about how the house makes noises in the floorboards when the central heating turns off - which had just happened. He wasn't remotely convinced: he knew fine what the usual house-settling noises were, and this was something different. It was something with feet, and small, no more than a foot tall.

I was a bit creeped out, but as the older brother put on a brave, reassuring face and came with him in to his bedroom and searched it thoroughly. We found nothing. With a bit of persuasion he went back to bed. I went back to the film.

About fifteen minutes later he came back, absolutely terrified. The thing, whatever it was, had come back, opened his door, and walked around on its little feet. It totally wasn't the house settling, it was footsteps. I wondered whether he'd overheard or seen the film, and was imagining the troll, but I was pretty sure he hadn't. He was convincing: he wasn't the sort to get that upset at something wholly imaginary, and was able to give clear detail about what he had heard when questioned. So by now I was really quite creeped out. With my rational brain I knew that the vicious troll couldn't be real and in our house, but there was clearly something going on. My emotions were running pretty high, and I really didn't want to take on the role of the wrongly-unbelieving parents from the film. Which of course made me pretty unconvincing at reassuring my poor brother. I went with him to check his bedroom, and again we found nothing.

He was too scared to sleep on his own, so I stayed with him. If anything does come in, it'll have to come past me first, and I'm pretty tough and I'll be ready, I told him with the best teenage bravado I could muster. Of course, nothing happened with me on watch, and eventually, he fell asleep.

It was my own bedtime by then, so I got myself ready for bed and locked the doors and turned off all the lights except the porch and hall lights for my parents' return. That in itself was slightly spooky, which didn't help.

I lay down in bed and turned off the bedside light. My mind was still racing, but eventually I found myself starting to get a little sleepy.

Suddenly, I was wide awake and awash in serious adrenaline reaction. My bedroom door had just opened an inch or two, and my body was in full-on fight-or-flight-or-freeze mode. I froze. Had I imagined it, in a going-to-sleep sort of way? No: as I watched in horror, the door opened another couple of inches. I'd been in the dark long enough that my eyes were fully dark-adapted, and from where I was lying in bed, I could see the doorway from about a foot high upwards, dimly but distinctly backlit from the hall light, and there was nothing there. Whatever had opened the door was less than a foot tall. So definitely not my parents coming home and checking on me, then. Now I was really scared. My hyper-alert state led to massive subjective time dilation: all this took only a few seconds, but it felt like minutes.

It got worse. I heard footsteps. Small but quite distinct footsteps. Nothing remotely like the house settling. The sort of footsteps something less than a foot high would make. Exactly like my brother had described. Exactly like the vicious troll. Whatever it was stopped for a moment. I could hardly breathe.

Then it started again, clearly walking towards me in my bed. I'm not sure I've ever been as scared as I was at that moment.

Rationally, I knew it couldn't be a vicious troll come to kill me, but emotionally I was certain of it. I thought furiously, taking advantage of the extra subjective time. Whatever it was, I wasn't going to just lie there and let it do whatever it wanted. I sized up my situation. I had no obvious weapons or things-that-could-be-weapons to hand or in easy reach, but on the plus side, I was clearly much bigger than it was, and reasonably fit and strong. Whatever it was clearly intended to surprise me in my bed, but I reckoned I could seize the tactical advantage by surprising it. So far I'd just lain there silently, as if asleep. I decided to seize the initiative and confront it in a rush. This was classic battlefield thinking: under desperate pressure, I didn't seek and evaluate alternatives, I just quickly checked over the first plan that came in to my mind, and although it didn't seem great, it seemed better than doing nothing, so I went for it. I visualised what I would do, got my muscles ready, then moved. I leapt out of bed, hurling off the blankets in the direction of the thing, and roared as loudly as I could as I charged towards it.

Bhe bja ubhfrubyq png unq pbzr va gb gur orqebbz ybbxvat sbe fbzrjurer jnez gb frggyr qbja sbe n anc. Ur jnf nofbyhgryl greevsvrq ol guvf qvfcynl, ghearq gnvy, naq syrq.

Comment by dougclow on What should normal people do? · 2013-10-25T17:48:32.287Z · LW · GW

Play to your strengths; do what you're best at. You don't have to be best in the world at it for it to be valuable.

Good things about this advice are (a) it has a fairly sound theory behind it (Comparative advantage), and (b) it applies whether you're smart, normal or dumb, so you don't get into socially-destructive comparisons of intelligence.