Comments

Comment by William_Newman on Horrible LHC Inconsistency · 2008-09-22T19:16:24.000Z · LW · GW

One reason I dislike many precautionary arguments is that they seem to undervalue what we learn by doing things. Very often in science, when we chase down a new phenomenon, we detect it by relatively small effects before the effects get big enough to be dangerous. For potentially dangerous phenomena, what we learn by exploring around the edges of the pit can easily be more valuable than the risk of inadvertently landing in the pit in some early step, before we knew it was there. Among other things, what we learn from poking around the edges of the pit may protect us from hazards we didn't know about, hazards that would have endangered us even if we had never gone near the pit. One consequence of decades of focus on the physics of radiation and radioisotopes is that we understand hazards like radon exposure better than before. One consequence of all our recombinant DNA experimentation is that we understand the risks of nature's own often-mindboggling recombinant DNA work much better than we did before.

The main examples I can think of where the first thing you learn, when you tickle the tail enough to notice the tail exists, is that Tigers Exist And Completely Outclass You And Oops You Are Dead, involve (generalized) arms races of some sort. E.g., it was by blind luck that the Europeans started from the epidemiological cesspool side of the Atlantic. (Here the arms race is the microbiological/immunological one.) If history had been a little different, just discovering that diseases were wildly different on the two sides could easily have coincided with losing 90+% of the European population. (And of course as it happened, the outcome was equally horrendous for the American population, but the American population wasn't in a position to apply the precautionary principle to prevent that.) So should the Europeans have used a precautionary principle? I think not. Even in a family of alternate histories where the Europeans always start from the clean side, in many alternate subhistories of that family it is still better for the Europeans to explore the Atlantic, learn early about the problem, and prepare ways to cope with it. Thus, even in this case where the tiger really is incredibly dangerous, the precautionary principle doesn't look so good.

Comment by William_Newman on Two Cult Koans · 2007-12-23T16:41:00.000Z · LW · GW

Eliezer Yudkowsky wrote of ideas one can't see the value of, and teachers who don't seem to understand their teachings, "Sounds like either a cult or a college."

I dunno; at least for many technical fields, and for some other endeavors too (like learning to communicate effectively in writing), one can see that many of the teachers can do some handy hard-to-fake real-world stuff, and that the students emerging through the pipeline tend to be able to do it too. When I was an undergraduate, the EEs in my residence hall traditionally maintained a little hand-made, custom-programmed telephone PBX which ran from the college's two official phone jacks in the lobby to a motley collection of old salvaged telephones in most of the other rooms. I, at least, was impressed. If you're in an organization where the initiates routinely levitate out their windows to go to lunch, and levitate some more whenever they have trouble finding a convenient chair, is it a mystical cult because levitation or funny hats or even confusing explanations are involved, or might it be unusually successful pragmatic applied philosophy?

Once stretched to cover everything from incompetent posers to arrogant, weird, competent people (like Isaac Newton at the hypercompetent extreme, or various academics in a less extreme way), a concept like "cult" may not be all that valuable. Perhaps there is value in reminding us that part of the reason the posers can gull people with their behavior is that it's not so uncommon for non-posers to act in some similar way. But there is also value in reminding people that part of the reason speculative bubbles can happen is that price moves based on fundamentals can look similar enough to gull speculators into mistaking a bubble for a fundamentals-driven move. That doesn't mean we should think of every big price move as a "bubble" (or as being bubble-ish, or whatever). We might say "every big price move wants to be a bubble," but saying that a market situation where the fundamentals don't make sense "sounds like either a bubble or an ordinary market" would seem to me to be missing the point.

Comment by William_Newman on The Simple Math of Everything · 2007-11-20T21:48:36.000Z · LW · GW

If you ever get as seriously curious about electronics as you were about physics, look at Horowitz and Hill, The Art of Electronics. Very very useful for someone who already knows the math and wants to understand electronics principles and the practicalities of one-off discrete circuit design.

Comment by William_Newman on Beware of Stephen J. Gould · 2007-11-09T20:18:17.000Z · LW · GW

Yeah, what Adam Ierymenko said :-) about hitting a complexity limit being not at all synonymous with stopping progress. Except that I was going to say "computer programmers" instead of "engineers", and I was going to use the example that when duplicate functionality in the mitochondrial genome and the main cell genome gets replaced by shared functionality, the organism tends to win back some ground from the Williams limit you described. And, incidentally, the mitochondrial example is very closely analogous to something that practicing computer programmers pay a lot of attention to: Google for "once and only once" or "OAOO" to see endless discussion.
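
For readers who haven't met OAOO, here's a minimal sketch of the idea in Python (my own toy example, not anything from the linked discussions): duplicated logic replaced by a single shared definition, loosely analogous to merging duplicate mitochondrial and nuclear genes.

```python
# Toy illustration of "once and only once" (OAOO).

# Before: the same checksum logic is pasted into two functions, so the
# copies can drift apart when one is fixed and the other is forgotten.
def send_packet_v1(payload: bytes) -> bytes:
    checksum = sum(payload) % 256          # duplicated logic
    return payload + bytes([checksum])

def log_packet_v1(payload: bytes) -> str:
    checksum = sum(payload) % 256          # duplicated logic
    return f"packet checksum={checksum}"

# After: one shared definition, so a fix or an improvement lands in
# exactly one place.
def checksum(payload: bytes) -> int:
    return sum(payload) % 256

def send_packet(payload: bytes) -> bytes:
    return payload + bytes([checksum(payload)])

def log_packet(payload: bytes) -> str:
    return f"packet checksum={checksum(payload)}"
```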

Comment by William_Newman on Universal Fire · 2007-04-28T15:56:48.000Z · LW · GW

I don't see the problem. There seems to be no logical reason that local laws can't change because of arbitrarily complicated nonlocal rules. You can even see nontrivial examples of this in practice in some modern technology. Various of Microsoft's operating systems have reportedly contained substantial amounts of code to recognize usage patterns characteristic of particular old applications, and to change the rules so the old application continues to work even though it depends on old behavior which has otherwise disappeared from the new operating system. Vaguely similar principles of global patterns changing local decision rules also appear, in less nauseating ways, in all sorts of software for solving hard optimization problems (optimizing compilers, finding the optimum move in chess, finding the optimum schedule for a big logistics operation...). What would go impossibly wrong if you rewrote physics with added rules which recognize the presence or absence of characteristic patterns (like "living organism" and "magical incantation") and which rejigger the local rules as a consequence?

Changing the local rules specifically to stomp out technology without making the rest of the universe's behavior unrecognizable is a tricky job, since you are correct that everything tends to be cross-coupled in weird ways. But I think one could at least make existing technology pretty frustrating. One way to start would be to make a list of a hundred or a thousand technologically useful patterns (things heating up to combustion temperature, things bending around a fulcrum, sizable things rotating or oscillating many, many times without changing shape, lots of energy being stored for a long time in an elastic object) and make case-by-case hacks to damp them out (spontaneously cooling things when they rise above 100 degrees Celsius, letting the lever soften and bend, etc.) whenever they weren't preceded by the suitably magically approved pattern of causality. (So, e.g., you can light a fire with a spell, and perhaps by striking suitably hard objects against each other, but not with a match or a magnifying glass. And you can use hinges as long as they are between bones in a living organism.) The result would be a very weird universe, but if I remember correctly (from long, long ago), the universe in those books was supposed to be very weird anyway.
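
In Python, a toy sketch of that kind of rule engine might look like the following (entirely my own construction for illustration; all names are hypothetical): each global rule watches for a technologically useful pattern and overrides the default local physics unless an approved cause is tagged on.

```python
from dataclasses import dataclass

@dataclass
class Thing:
    name: str
    temperature_c: float = 20.0
    caused_by_spell: bool = False    # the "magically approved" causal tag

def damp_combustion(thing: Thing) -> None:
    # One case-by-case hack from the list: anything heating toward
    # combustion spontaneously cools unless a spell was the cause.
    if thing.temperature_c > 100.0 and not thing.caused_by_spell:
        thing.temperature_c = 100.0

RULES = [damp_combustion]            # imagine a hundred or a thousand more

def step(world):
    # Global pass: every pattern-recognizing rule gets a chance to
    # rejigger the local rules for every thing in the world.
    for thing in world:
        for rule in RULES:
            rule(thing)

match_head = Thing("match head", temperature_c=600.0)
fireball = Thing("fireball", temperature_c=600.0, caused_by_spell=True)
step([match_head, fireball])
print(match_head.temperature_c)      # 100.0 -- the match fizzles
print(fireball.temperature_c)        # 600.0 -- the spell-lit fire burns on
```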

Comment by William_Newman on Tsuyoku Naritai! (I Want To Become Stronger) · 2007-03-28T15:06:22.000Z · LW · GW

There's no particular reason that constant improvement needs to surpass a fixed point. In theory, see Achilles and the tortoise. In practice, maybe you can't slice things infinitely fine (or at least you can't detect progress when you do), but still you could go on for a very long time incrementally improving military practice in the Americas while, without breakthroughs to bronze and/or cavalry, remaining solidly stuck behind Eurasia. More science-fictionally, people living beneath the clouds of Venus could go for a long time incrementally improving their knowledge of the universe before catching up with Babylonian astronomy, and if a prophet from Earth brought them a holy book of astronomy, it could remain a revelation for a very long time. Or if the Bible had included a prophecy referring to "after three cities are destroyed with weapons made of metals of weight 235 and 239," it would've remained utterly opaque through centuries of rapid incremental progress.
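
To make the Achilles point concrete, here is a minimal worked example (my own, not from the original comment): strict improvement at every step is perfectly compatible with never reaching a fixed ceiling L.

```latex
% Each step closes half of the remaining gap to the ceiling L:
\[
    x_{n+1} = x_n + \frac{L - x_n}{2}, \qquad x_0 < L
    \quad\Longrightarrow\quad
    x_n = L - (L - x_0)\,2^{-n},
\]
% so x_n increases strictly at every step, yet x_n < L for all n.
```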

I think a related argument would be more convincing: collect incidents when people thought they knew something about the real world from a religious tradition, and it conflicted with what the scientists were coming to believe, and compute a batting average. If the batting average is not remarkably high for the religious side, some skepticism about its reliable truth is called for, or at least some diplomatic dodge like "how to go to heaven, not how the heavens go."

The batting average could suffer from selection bias if the summaries tend to be written by one side. But even if so, it's sorta interesting indirect evidence if all the summaries tend to be written by one side. And I dimly remember that there are pro-Islam writers who go on about the scientific things that their religious tradition got right, so I don't think there's any iron sociological law that keeps the religious side from writing up such summaries.

Comment by William_Newman on Blue or Green on Regulation? · 2007-03-16T02:25:06.000Z · LW · GW

Note that when someone reads your "if people have a right to be stupid, the market will respond by supplying all the stupidity that can be sold" it does sound rather as though you're making a point about market decisions in particular, not just one of a spectrum of points like "if people have a right to vote for stupid policies, then ambitious politicians will supply all the stupid policies that people can be convinced to vote for." Also, it's not too uncommon for people to play rhetorical (and perhaps internal doublethink) games where people's rationality in market decisionmaking is judged differently than in politics.

Similarly, you could state specifically "when we let members of someparticularethnicgroup vote, they often make uninformed decisions," and believe yourself logically justified by the general truth that they are people and when we let people of any ethnic group vote, they often make uninformed decisions. But I don't recommend that you try making that statement, especially about an ethnic group where it's not too uncommon for people to dump on them in particular, unless you're prepared to raise many, many more hackles than you would by just stating your general point about letting people in general vote.

And even though I give the parallel of another politically charged statement, I don't think this is just people getting irrational around politically charged issues. In ordinary, un-charged situations too, it is normal for people to choose reasonably general forms of their statements when possible, so if you make a narrow statement, it conveys a suggestion that a more general statement doesn't hold. "It's really cold in the living room" means in practice something like "the living room is colder than the rest of the house" or "I am physically unable to leave the living room and don't know about the rest of the house," not "it's really cold in the house."

It's not a completely reliable conversational rule, and it's probably one of the reasons that some wag said that "communication would be more reliable if people would turn off the gainy decompression," but it's not obviously an unimportant or silly rule, either. In fact, if I imagine designing cooperating robotic agents with very powerful brains, very sophisticated software, and very low communication bandwidth, I'd be very inclined to borrow the rule.
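
As a hedged sketch of what borrowing that rule might look like (my own toy construction; all names hypothetical): the sender transmits the most general statement it can verify, so the receiver can decode both the literal content and the implicature that every more general statement failed.

```python
# Statements ordered from most general to most specific.
STATEMENTS = [
    ("whole house is cold",
     lambda temps: all(t < 15.0 for t in temps.values())),
    ("living room is cold",
     lambda temps: temps.get("living room", 99.0) < 15.0),
]

def encode(temps):
    # Send the first (most general) statement that is true.
    for message, holds in STATEMENTS:
        if holds(temps):
            return message
    return None

def decode(message):
    # Recover the literal content plus the implicature: any strictly
    # more general statement must be false, or it would have been sent.
    inferences = [message]
    for general_message, _ in STATEMENTS:
        if general_message == message:
            break
        inferences.append(f"NOT: {general_message}")
    return inferences

print(decode(encode({"living room": 10.0, "kitchen": 21.0})))
# ['living room is cold', 'NOT: whole house is cold']
```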