Posts

Comment by chronos on Procedural Knowledge Gaps · 2011-02-25T02:05:42.726Z · score: 3 (3 votes) · LW · GW

According to Wikipedia, the threshold for fibrillation is 60 mA for AC, 300-500 mA for DC. On reflection, it seems I'd previously cached the AC value as the value for all currents, so that was skewing my argument.

Given these figures, a 1k Ohm total resistance (internal plus skin plus body) would lead to a 12 mA current (painful but not fibrillation-inducing), whereas 200 Ohms / 40 Ohms total resistance would be required for 12 VAC / VDC to be potentially lethal. So, yeah, now that I think about it, a car battery probably couldn't be lethal unless conductors were actually puncturing the skin and touching the bloodstream directly (or covering a HUGE amount of surface area). I retract my claim.
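The arithmetic works out as follows (a quick sketch using the thresholds quoted above):

```python
# Ohm's law: I = V / R, expressed here in milliamps
def current_mA(volts, ohms):
    return volts / ohms * 1000

# 12 V across 1k Ohm total resistance: painful, but below both thresholds
print(current_mA(12, 1000))   # ~12 mA

# Total resistance needed for 12 V to reach the fibrillation thresholds:
print(12 / 60e-3)    # ~200 Ohms for the 60 mA AC threshold
print(12 / 300e-3)   # ~40 Ohms for the 300 mA DC threshold
```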

Edit: OH! Except that Wikipedia says the threshold for fibrillation is a mere 10 µA if the current is from electrodes that establish a circuit through the heart. THAT's the figure I'd seen before and cached in my head. Still, that's not a likely situation to arise when using jumper cables, so my claim remains retracted.

Comment by chronos on Procedural Knowledge Gaps · 2011-02-24T17:37:51.319Z · score: 2 (2 votes) · LW · GW

It's worth noting that the reason we use clamps on the ends of jumper cables is that pressure increases the surface area in contact, which decreases resistance: each patch of contact acts like a resistor in parallel with the others. (Three 1k Ohm resistors in parallel have a combined resistance of only 333 Ohms. It's meaningless to give a single figure for copper -> wet skin resistance without also giving the surface area for which the figure is valid.)
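The parallel-resistance effect is easy to verify (a minimal sketch):

```python
# Combined resistance of resistors in parallel: 1/R_total = 1/R_1 + 1/R_2 + ...
def parallel(*ohms):
    return 1 / sum(1 / r for r in ohms)

print(parallel(1000, 1000, 1000))  # ~333.3 Ohms
print(parallel(*[1000] * 6))       # ~166.7 Ohms: double the contact, half the resistance
```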

This means that incidental touching of metal is extremely unlikely to kill anyone, but accidentally clamping your finger, gripping metal tightly, or anything else that applies pressure to your skin will dramatically raise the risk.

Comment by chronos on Procedural Knowledge Gaps · 2011-02-24T17:30:50.380Z · score: 1 (1 votes) · LW · GW

It does if the skin is wet. Once you're through the skin, the human body's resistance is quite low, in the single-digit kiloohm range at most, because the human body is mostly salt water (a fantastically good conductor by non-metallic standards). The biggest barrier to current is the upper layer of dead, dry cells on the epidermis. And lead-acid batteries have a fairly low internal resistance, which allows them to produce high currents if the load is also low resistance (a required feature when cranking the engine).

Comment by chronos on Procedural Knowledge Gaps · 2011-02-21T09:57:12.011Z · score: 0 (0 votes) · LW · GW

It's worth noting that, while 12 volts won't normally penetrate dry skin under most humidity conditions, you really do need to be careful. Pressure increases surface-to-surface contact, which decreases resistance, which lowers the voltage threshold. Moisture does the same, even small amounts of sweat. And a car battery does have sufficient current to injure or kill a human being quite easily. (Voltage penetrates insulators; current actually does damage. The zap you get from static electricity is in the range of thousands of volts, but the current is negligible.)

Comment by chronos on Procedural Knowledge Gaps · 2011-02-21T09:47:10.161Z · score: 0 (0 votes) · LW · GW

I was taught a slightly different procedure, which is the same as the one listed as the first result on Google for "jumper cables":

1. Line up the cars, pop the hood on both cars, get out the jumper cables, make sure both cars have their engines turned off, check that the dead battery looks safe (no cracks, leaks, or swelling), and try to scrape off any corrosion on the terminals.
2. Connect one red clip to the positive (+) terminal of the dead battery.
3. Connect the other red clip to the positive (+) terminal of the good battery.
4. Connect one black clip to the negative (-) terminal of the good battery.
5. Connect the other black clip to the exposed metal of the engine or chassis of the car with the dead battery. The chassis is connected to the negative terminal ("grounded"), so this will complete the circuit while minimizing sparks near the battery itself. A malfunctioning battery might be venting fumes of flammable/explosive hydrogen gas, so don't risk sparks near the battery.
6. Start the "donor" car. Let it run for a minute or two.
7. Start the "acceptor" car. It should crank and run normally.
8. Disconnect the cables in the reverse order (undo steps 5, 4, 3, 2). If the order is reversed exactly, then the cables can be disconnected from the two running cars with no sparking near the battery. You'll get some sparks when you disconnect from the chassis, but that's OK.
9. Wait a few minutes (3 to 5). The acceptor car should continue to run. If it dies a few minutes after disconnecting the cables, then it's a problem with the alternator and not just the battery.
10. Put the cables away, close the hoods, and thank the owner of the donor car (who can now leave).
11. Leave the acceptor running for a while. You can drive it as much as you like during this period; just don't shut off the engine until the alternator has had time to recharge the battery (say, 10 to 15 additional minutes).

The site I linked to makes the point that steps 6-7-8 in my procedure can damage the acceptor's alternator. It recommends letting the donor run for a bit longer than my step 6 requires, then (8a) shutting off the donor, (8b) disconnecting the cables entirely, and only then (7) starting the acceptor. Whether or not this method works would depend on the state of the battery (it may fail for a poor but working battery) and the weather (it may fail below, say, 10°F / -12°C).

(Note: lead-acid batteries are damaged by letting them discharge fully, because the cathodes and anodes are both transformed into the same material, lead sulfate. Once that happens, it becomes far more difficult to recharge the battery and you're better off just buying a new one. Even if your battery won't take a charge, a jump start can get you to a store that sells new automotive batteries -- the battery is only needed to turn the engine through the first few cycles, and the alternator will provide all needed electricity once the engine is turning fast enough.)

Comment by chronos on Procedural Knowledge Gaps · 2011-02-21T08:51:21.387Z · score: 9 (9 votes) · LW · GW

Washing bacteria down the drain is certainly the primary purpose for using soap, by far, but surfactants like soap also kill a few bacteria by lysis (disruption of the cell membrane, causing the cells to rapidly swell with water and burst). In practice, this is so minor it's not worth paying attention to: bacteria have a surrounding cell wall made of a sugar-protein polymer that resists surfactants (among other things), dramatically slowing down the process to the point that it's not practical to make use of it.

(Some bacteria are more vulnerable to surfactant lysis than others. Gram-negative bacteria have a much thinner cell wall, which is itself surrounded by a second, more exposed membrane. But gram-positive bacteria have a thick wall with nothing particularly vulnerable on the outside, and even with gram-negative bacteria the scope of the effect is minor.)

In practice, the big benefit of soap is (#1) washing away oils, especially skin oils, and (#2) dissolving the biofilms produced by the bacteria to anchor themselves to each other and to biological surfaces (like skin and wooden cutting boards). Killing the bacteria directly with soap is a distant third priority.

For handwashing, hot water is in a similar boat: even the hottest water your hands can stand is merely enough to speed up surfactant action, not to kill bacteria directly. For cleaning inanimate surfaces, sufficiently hot water is quite effective at killing bacteria, but most people's hot water only goes up to 135°F (about 57°C) or thereabouts, which is not nearly scalding enough to do the job instantly.

For directly killing bacteria via non-heat means, alcohol and bleach are both far more effective than soap. Alcohol very rapidly strips off the cell wall and triggers immediate lysis, while bleach acts both as a saponifier (it turns fatty acids into soap) and a strong oxidizer (directly attacking the chemical structure of the cell wall and membrane, ripping it apart like a rapid-action biological parallel to rusting iron).

Fun trivia: your hand feels slippery or "bleachy" after handling bleach (or any reasonably strong base) because the outermost layer of your skin has been converted into soap.

Comment by chronos on Confidence levels inside and outside an argument · 2010-12-22T12:37:52.878Z · score: 12 (12 votes) · LW · GW

I'm a bit irked by the continued persistence of "LHC might destroy the world" noise. Given no evidence, the prior probability that microscopic black holes can form at all, across all possible systems of physics, is extremely small. The same theory (String Theory[1]) that has led us to suggest that microscopic black holes might form at all is also quite adamant that all black holes evaporate, and equally adamant that microscopic ones evaporate faster than larger ones by a precise factor of the mass ratio cubed. If we think the theory is talking complete nonsense, then the posterior probability of an LHC disaster goes down, because we favor the ignorant prior of a universe where microscopic black holes don't exist at all.

Thus, the "LHC might destroy the world" noise boils down to the possibility that (A) there is some mathematically consistent post-GR, microscopic-black-hole-predicting theory that has massively slower evaporation, (B) this unnamed and possibly non-existent theory is less Kolmogorov-complex and hence more posterior-probable than the one that scientists are currently using[2], and (C) scientists have completely overlooked this unnamed and possibly non-existent theory for decades, strongly suggesting that it has a large Levenshtein distance from the currently favored theory. The simultaneous satisfaction of these three criteria seems... pretty f-ing unlikely, since each tends to reject the others.

A/B: it's hard to imagine a theory that predicts post-GR physics with LHC-scale microscopic black holes that's more Kolmogorov-simple than String Theory, which can actually be specified pretty damn compactly.

B/C: people already have explored the Kolmogorov-simple space of post-Newtonian theories pretty heavily, and even the simple post-GR theories are pretty well explored, making it unlikely that even a theory with large edit distance from either ST or SM+GR has been overlooked.

C/A: it seems like a hell of a coincidence that a large-edit-distance theory, i.e. one extremely dissimilar to ST, would just happen to also predict the formation of LHC-scale microscopic black holes, then go on to predict that they're stable on the order of hours or more by throwing out the mass-cubed rule[3], then go on to explain why we don't see them by the billions despite their claimed stability. (If the ones from cosmic rays are so fast that the resulting black holes zip through Earth, why haven't they eaten Jupiter, the Sun, or other nearby stars yet? Bombardment by cosmic rays is not unique to Earth, and there are plenty of celestial bodies that would be heavy enough to capture the products.)

[1] It's worth noting that our best theory, the Standard Model with General Relativity, does not predict microscopic black holes at LHC energies. Only String Theory does: ST's 11-dimensional compactified space is supposed to suddenly decompactify at high energy scales, making gravity much more powerful at small scales than GR predicts, thus allowing black hole formation at abnormally low energies, i.e. those accessible to LHC. And naked GR (minus the SM) doesn't predict microscopic black holes. At all. Instead, naked GR only predicts supernova-sized black holes and larger.

[2] The biggest pain of SM+GR is that, even though we're pretty damn sure that that train wreck can't be right, we haven't been able to find any disconfirming data that would lead the way to a better theory. This means that, if the correct theory were more Kolmogorov-complex than SM+GR, then we would still be forced as rationalists to trust SM+GR over the correct theory, because there wouldn't be enough Bayesian evidence to discriminate the complex-but-correct theory from the countless complex-but-wrong theories. Thus, if we are to be convinced by some alternative to SM+GR, either that alternative must be Kolmogorov-simpler (like String Theory, if that pans out), or that alternative must suggest a clear experiment that leads to a direct disconfirmation of SM+GR. (The more-complex alternative must also somehow attract our attention, and also hint that it's worth our time to calculate what the clear experiment would be. Simple theories get eyeballs, but there are lots of more-complex theories that we never bother to ponder because that solution-space doesn't look like it's worth our time.)

[3] Even if they were stable on the order of seconds to minutes, they wouldn't destroy the Earth: the resulting black holes would be smaller than an atom, in fact smaller than a proton, and since atoms are mostly empty space the black hole would sail through atoms with low probability of collision. I recall that someone familiar with the physics did the math and calculated that an LHC-sized black hole could swing like a pendulum through the Earth at least a hundred times before gobbling up even a single proton, and the same calculation showed it would take over 100 years before the black hole grew large enough to start collapsing the Earth due to tidal forces, assuming zero evaporation. Keep in mind that the relevant computation, t = (5120 × π × G^2 × M^3) ÷ (ℏ × c^4), shows that a 1-second evaporation time corresponds to a mass of 2.28e8 grams[3a], i.e. 250 tons, and that the resulting Schwarzschild radius, r = 2 × G × M ÷ c^2, is 3.39e-22 meters[3b], or about 0.4 millionths of a proton radius[3c]. That one-second-duration black hole, despite being tiny, is vastly larger than the ones that might be created by LHC -- 10^28 times larger by mass, in fact[3d]. (FWIW, the Schwarzschild radius calculation relies only on GR, with no quantum stuff, while the time-to-evaporate calculation depends on some basic QM as well. String Theory and the Standard Model both leave that particular bit of QM untouched.)

[3a] Google Calculator: "(((1 s) h c^4) / (2pi 5120pi G^2)) ^ (1/3) in grams"

[3b] Google Calculator: "2 G 2.28e8 grams / c^2 in meters"

[3c] Google Calculator: "3.3856695e-22 m / 0.8768 femtometers", where 0.8768 femtometers is the experimentally accepted charge radius of a proton

[3d] Google Calculator: "(2.28e8 g * c^2) / 14 TeV", where 14 TeV is the LHC's maximum energy (7 TeV per beam in a head-on proton-proton collision)
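For anyone who wants to check the footnote arithmetic without Google Calculator, the same computations in a few lines of Python (SI constants, rounded):

```python
import math

# Physical constants (SI, rounded)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34  # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

# [3a] Mass whose Hawking evaporation time t = 5120*pi*G^2*M^3 / (hbar*c^4)
#      equals one second, solved for M:
t = 1.0
M = (hbar * c**4 * t / (5120 * math.pi * G**2)) ** (1 / 3)
print(M)                    # ~2.28e5 kg, i.e. ~2.28e8 grams ("250 tons")

# [3b] Schwarzschild radius of that mass: r = 2*G*M / c^2
r = 2 * G * M / c**2
print(r)                    # ~3.39e-22 m

# [3c] ...compared to the proton charge radius (0.8768 fm)
print(r / 0.8768e-15)       # ~3.9e-7, i.e. ~0.4 millionths of a proton radius

# [3d] Mass ratio of that black hole to the LHC's 14 TeV collision energy
m_lhc = 14e12 * 1.602e-19 / c**2   # kg-equivalent of 14 TeV
print(M / m_lhc)            # ~9e27, i.e. on the order of 10^28
```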

Comment by chronos on Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems · 2010-05-19T23:58:56.610Z · score: 1 (1 votes) · LW · GW

I'm afraid I can't say much beyond what I've already said, except that Google places a fairly high value on detecting fraudulent activity.

I'd be surprised if I discovered that no bad guys have ever tried to simulate the search behavior of unique users. But (a) assuming those bad guys are a problem, I strongly suspect that the folks worried about search result quality are already on to them; and (b) I suspect bad guys who try such techniques give up in favor of the low hanging fruit of more traditional bad-guy SEO techniques.

Comment by chronos on Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems · 2010-05-15T22:33:08.031Z · score: 10 (10 votes) · LW · GW

I think it's interesting to note that this is the precise reason why Google is so insistent on defending its retention of user activity logs. The logs contain proxies under control of the end user, rather than the content producer, and thus allow a clean estimate of (the end user's opinion of) search result quality. This lets Google spot manipulation after-the-fact, and thus experiment with new algorithm tweaks that would have counterfactually improved the quality of results.

(Disclaimer: I currently work at Google, but not on search or anything like it, and this is a pretty straightforward interpretation starting from Google's public statements about logging and data retention.)

Comment by chronos on Hayekian Prediction Markets? · 2010-02-18T03:49:07.822Z · score: 2 (2 votes) · LW · GW

"And while some of their costs are borne by others, a lot of their taxes going to roads are also wasted."

This doesn't make sense, because dollars are fungible. If WM reaps a greater monetary value from the highway system than it spends on the highway system via taxes, WM comes out ahead.

"So I don't see how this is an indictment of WM -- the harm lies in the shift of the structure of production to a less efficient one, not in a transfer of wealth to the Waltons."

Then we're in violent agreement. I didn't intend the highway bit to be an indictment of WM, but a rebuttal of taw's comment:

"And yet, in spite of the genuine diseconomies of scale which you mention, economies of scale for Wall-Mart seem ever larger, as it successfully competes in open market"

I was attempting to convey the idea that Wal-mart's current (but quite likely ephemeral) success is due to political accidents more so than "economies of scale". The only "economy of scale" operating at Wal-mart is logistics and trucking, which doesn't scale very much: the planning scales somewhat, the trucking has already scaled as far as it can, and the trucking is on more precarious footing than it looks.

Labor doesn't scale: making a Wal-mart store twice as big requires twice as many workers to keep the shelves full.

Sales don't scale: selling twice as many goods provides economies of scale to the manufacturers, not to Wal-mart itself. If manufacturing economies of scale were at play, all retail prices would fall to equal those of Wal-mart: with their new infrastructure paid for, the manufacturers can turn around and sell their cheaper products to Wal-mart's competitors just as easily as they can sell to Wal-mart.

The oligopsony price bullying (i.e. the Vlasic example) is not a proper "economy of scale" in this sense. If Wal-mart had a competitor of equal size, but Wal-mart's size remained unchanged, Wal-mart's economies of scale would be unchanged but its power to bully costs down would weaken. An economy of scale depends on size, not on market power.

Comment by chronos on Hayekian Prediction Markets? · 2010-02-17T07:41:11.168Z · score: 1 (1 votes) · LW · GW

FWIW, I agree with wnoise: public funding of a library is a subsidy for the users of the library. If publicly funded libraries didn't exist, privately funded ones would, and those privately funded libraries would charge people money just as surely as a privately funded museum charges admission. (And they'd probably have a "Second Tuesday of the month is free" special, much like a museum.)

Note: when I say something is a "subsidy" I am attempting to state a fact, not attempting to make a moral judgment. In the specific case of a public library, I think they're overdone and a bit of an applause light but ultimately a good use of community tax dollars. But if something costs tax dollars, and it does not benefit the people taxed in proportion to the amount of tax taken from them, then this is the thing that I am referring to when I use the label "subsidy". (The matter is, of course, complicated because "benefit" is much more nebulous than "direct benefit".)

Comment by chronos on Hayekian Prediction Markets? · 2010-02-17T07:18:08.245Z · score: 2 (2 votes) · LW · GW

"I've never understood the 'IHS subsidizes Wal-Mart' argument. It would only be a subsidy if WM got access to it on preferential terms to the rest of us. But they don't. Whatever use of the IHS they make, everyone else had the same opportunity. It's not like WM stupidly built up their whole infrastructure and then one day said, 'Oh crap! This will be an utter failure unless there's a free interstate highway system! Quick! Government! Build it with other people's money!'"

Of course. The subsidy is implicit in the system, rather than explicit. It'd be a rare Wal-mart executive who would even have the conscious thought flit across his mind. But a subsidy doesn't cease to be a subsidy merely because no one is lobbying (either for it or against it). While lobbying and subsidy correlate, neither is the exclusive cause of the other.

But the fact remains that Wal-mart's business model relies on the fact that it can consume the highway system as a good, and do so in vast disproportion to the actual price paid for that good. If they had to pay in proportion to their actual consumption, they would not be profitable under their current model. (There may well be another model where they would be profitable, in a counterfactual world where highway use were metered. But, if counterfactual bets made coherent sense, I would bet money that Wal-mart's model in that world would include much greater use of rail.)

It is immaterial whether or not Wal-mart's executives consciously recognize the premises underlying their model: namely, that shipping via truck excludes the cost of the highway. It is immaterial whether or not Congressional representatives consciously recognized that funding the Interstate system without metering would invent the trucking industry. The fact is, Congress did fund the Interstate system, they did invent the trucking industry, and Wal-mart does rely on the trucking industry axiomatically.

This is one of those situations where evolutionary interdependencies and stare decisis (rather, the legislative counterpart thereof) conspire to create a lose-lose situation. Horn one: start charging for the highway system and thus destroy one industry, harm a bunch of others, and cause prices to spike for a decade or more. But maybe, twenty years from now, the infrastructure will be in place such that the economy is more efficient than it would have been otherwise. Horn two: continue paying for the highway system with federal taxes and thus penalize individuals for the benefit of a handful of large corporations, encourage people to own cars and avoid public transit, and destroy the viability of long-distance passenger rail even though it's far more cost- and energy-efficient in the long run. But at least nobody loses their job in the meantime.

Comment by chronos on Hayekian Prediction Markets? · 2010-02-16T05:29:44.112Z · score: 3 (5 votes) · LW · GW

I suppose I should qualify that, as it's a bit unfair to Buffett.

Yes, Buffett is a professional investor and more expert than me at it, which counts for quite a bit. But he's also human, and humans don't do a very good job of anticipating economic activity beyond a horizon of a few years. Importantly, most humans have a laughably brief idea of what constitutes a "long term".

I'd estimate that Buffett's bet constitutes quite a few bits of evidence toward the profitability of Wal-mart over, say, a 2 year time horizon. But I was already leaning in that direction, so it doesn't move my posterior probability by very much. In contrast, I'd estimate that it provides a much smaller number of bits over a 10-year horizon: if I had to name a number, I'd say 2 bits. That's a nudge in Buffett's direction, but not a very big one.

Now, Wal-mart is not so foolish as to have played the derivatives shell games that exploded in the financial industry, nor do they have any substantial debt exposure. But I think a big source of risk, unconsidered in the standard analysis and probably unconsidered by Buffett, is their interdependence on China.

Sidebar:

Inflation triggers human biases: it causes people to miscalculate and believe they have more utilons merely because they have more money. (This is the essence of Keynesian stimulus: trick people into diverting their money from savings into spending. Regardless of whether you hold this is good or bad, it is what stimulus does.) Spending within an inflated economy is a complicated matter that I won't delve into, but international trade is where it gets interesting.

Imagine two countries, A and B, which are trade partners. A injects a stimulus. People in A start buying more goods, including imported goods from B, with their freshly-printed money. This creates a trade imbalance between A and B. When this happens the buyer (implicitly or explicitly) exchanges A's currency for B's. On the currency exchange markets, B's currency goes up (demanded) while A's goes down (supplied). Thus, in the absence of further intervention, the exchange rate will cause the price of B's goods to rise in A's currency until A can no longer afford them, putting the brakes on the trade imbalance.

However, the end of the trade imbalance can cause adjustment problems: when people made plans, they baked in assumptions that simply weren't true. People in A used to cheap $GOOD are suddenly faced with rising prices. Manufacturers in B were used to steady output but now face a significant slowdown, perhaps turning that new factory from a brilliant investment into a frustrating white elephant.

Magic wand: more stimulus! Now B gets in on the act: B injects stimulus, tricking the people of B into spending instead of saving and filling the factories with busywork. Thanks to imports and foreign investments, money starts to flow out of their country, causing their currency to come back down from the stratosphere. And the cheap currency exchange rates make A look like a good investment now...

But in the end, what has this circle accomplished? Both A and B have severely devalued their currencies in relation to any third-party country C, both have depleted their citizens' savings accounts, and both have huge government debts due to their respective stimuli. Oh, and each has lots of manufacturing capacity that goes to waste unless the other is actively digging a money pit.

End sidebar.

Note that what the U.S. and China have is not quite what I described above. China is inflating, but the U.S. is inflating faster, and the dollar-yuan currency peg means the exchange rate isn't closing the trade valve. Therefore the trade balance persists, with China the continuous exporter. This creates a huge pileup of U.S. dollars that no one is sure what to do with, and it also means there's little incentive for people in China to import from U.S. manufacturers. (The Chinese government owns most of the dollars: it printed yuan to buy them and thus fix the price. Therefore the dollars are not in private hands, therefore there is little investment flowing into the U.S. from China.)

China is painfully exposed: the situation is clearly unsustainable, it took herculean effort to keep it from exploding this time around, and it's going to explode in the not-so-distant future. In desperation to keep the Keynesian pump primed, the Chinese government has plowed enormous amounts of stimulus into their domestic economy: the government funded the construction of an entire city, Ordos, merely to boost GDP. (Spoiler: no one lives there, but prices are sky-high: real estate "always goes up" in China.) The next major economic crisis will probably (0.80) start with China, and will almost certainly (0.98+) bring about a crisis severe enough that it puts China into a recession.

From Wal-mart's perspective, stimulus in China is a mixed blessing: it provides a tiny relief valve through which piled-up U.S. dollars can leave the country, and it also subsidizes Chinese manufacturers to lower prices, but it also creates inflationary pressure within China and thus causes labor and manufacturing prices (measured in yuan, not utilons) to rise dramatically. The whole thing is a chaotic powder keg, and the blast is not directionally pointed away from Wal-mart.

In short, expect China starting today to follow a similar 30-year trajectory as the one laid out by Japan starting in 1980, complete with one or more "lost decades". (The situation is not exactly analogous, but strongly suggestive.)

Comment by chronos on Hayekian Prediction Markets? · 2010-02-16T03:25:24.626Z · score: 8 (8 votes) · LW · GW

Ah, I managed to come up with a more concrete example of where Wal-mart is leaving local information on the table.

Wal-mart has large displays of featured items, internally known as COMAC. (No, I don't know what it stands for, either.) These items come in as a bulk shipment, go on the shelf for two weeks, then come down: anything left over goes on the shelf or into the backstock bins. (A little birdie told me that they've eliminated the backstock bins for almost all departments now, so I'm not sure what they do with the leftovers now.) They form the big islands in the middle of the wider aisles ("action alleys"), as well as the endcaps of each regular aisle.

Once upon a time, department managers were encouraged to choose their COMAC. The company would send out an internal memo of what the recommendations were, but there would be several slots available for local discretion. Also, several of the slots would be decided at the regional or even district level. I seem to vaguely recall that, in the distant past, COMAC didn't necessarily arrive automatically, and department managers could refuse to run a Bentonville-requested product in favor of something else.

This resulted in much greater sales:

• Wal-mart could respond to a local competitor in the same city or even neighborhood. (My Wal-mart sold bananas for tens of cents per pound on Tuesdays for this reason.)
• Wal-mart could sell products that complemented the specials of another local business.
• Wal-mart could sell products that appealed to the clientele brought in by specific neighboring businesses. A Wal-mart next door to a PetCo is very different from a Wal-mart next door to a Lowe's.
• Wal-mart could sell seasonal products much more effectively: specials on juice drinks and popsicles timed precisely for the yearly local heatwave, or specials on road salt and windshield scrapers at just the right time of the year for the annual ice storm.

Then, The Party^W^WHome Office started taking more and more control away from the individual stores. First, centrally-planned COMAC was made mandatory. Then, the internal competition among department managers for the highest-profit COMAC item was removed. Later, local options were taken away entirely. Finally, the department manager position itself was abolished: managers were demoted to hourly employees, and no human was left in charge of analyzing the supply/demand logistics of the individual departments.

I'm sure that each of these individual decisions seemed rational to The Party^W^WHome Office. In fact, the decision to abolish COMAC choice probably contributed directly to slightly lower prices: by guaranteeing a specific size of bulk order to the manufacturer, the manufacturer would be willing to reduce the price a bit more. But most of this supply/demand data never made it to Bentonville: it existed only in the department managers' heads, and to a lesser extent the Support and Assistant Managers above them.

Worst of all, the data looked at in Bentonville to make decisions did not include a breakdown on profitability per COMAC item per store. It was aggregated at the level of profitability per COMAC item, and profitability per store, but these were separate considerations looked at by separate corporate bureaucrats: the former chose COMAC items nationally, working with the buyers to find out what surpluses the suppliers wanted to get rid of, while the latter scolded stores for not meeting yearly sales and profit targets.

The logistics software, which examines per-item sellthrough rates on a per-store and per-district basis, could have spotted this... if a human were looking at it, and if it weren't explicitly and intentionally disabled when an item goes on COMAC display. But the logistics software only computes running averages: it's quite stupid, not even close to Bayesian, and it generates no theories on geography, seasons, or holidays. (I understand that day-of-week correlations are explicitly programmed in as a belief, but no more than that.)
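For concreteness, a running-average forecaster with one hard-coded day-of-week belief might look something like this (a hypothetical sketch of the kind of software described, not Wal-mart's actual code):

```python
class SellThroughEstimator:
    """Exponentially weighted running average of daily units sold, with a
    fixed per-day-of-week multiplier as the one programmed-in belief.
    It generates no theories about geography, seasons, or holidays."""

    def __init__(self, alpha=0.1, dow_factors=None):
        self.alpha = alpha            # weight given to the newest observation
        self.avg = 0.0                # running average of de-seasoned sales
        self.dow_factors = dow_factors or [1.0] * 7

    def update(self, units_sold, day_of_week):
        # Remove the day-of-week effect, then fold into the running average.
        deseasoned = units_sold / self.dow_factors[day_of_week]
        self.avg = (1 - self.alpha) * self.avg + self.alpha * deseasoned

    def forecast(self, day_of_week):
        # Re-apply the day-of-week effect to the running average.
        return self.avg * self.dow_factors[day_of_week]
```

A model like this tracks a slowly moving mean and nothing else: a COMAC display, a local heatwave, or a neighboring store's sale all just look like unexplained noise to it.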

Comment by chronos on Hayekian Prediction Markets? · 2010-02-16T02:51:58.722Z · score: 0 (0 votes) · LW · GW

And on the timescale of 5 or even 10 years, he may even be right. Yay for him.

Comment by chronos on Hayekian Prediction Markets? · 2010-02-16T02:50:52.934Z · score: 0 (2 votes) · LW · GW

Re: "telling stories"... When it comes to refusal to calculate, the Austrians seem closely akin to the people who claim that morality is "mysterious". They're looking at the mistakes of others (principally Keynes) and trying to reverse stupidity.

Which is a shame, because they do have a few insights here and there that strike me as being so correct they're painfully obvious in hindsight.

Comment by chronos on Hayekian Prediction Markets? · 2010-02-16T02:44:48.153Z · score: 14 (14 votes) · LW · GW

As a separate sidebar regarding logistics, it's interesting to note that Wal-mart's shipping component is effectively being subsidized by the federal government, by way of the U.S. Interstate system.

While I'm not so much of a libertarian that I think the Interstate system was a bad idea, it is important to note that the Interstate system created an entire category of business (shipping via truck) that directly harmed two existing industries (shipping via boat, shipping via train) and stunted the growth of a third (shipping via plane). This would be all fine and dandy if shipping via truck were more efficient after considering all externalities. But firstly you have the environmental cost of burning gasoline/diesel, including a not-insubstantial impact on the global climate. And secondly you have the more direct economic cost of road wear.

Road wear is a funny thing. The rule of thumb is that road damage accumulates with roughly the fourth power of the weight per axle. A single car with passengers has perhaps 2,000 pounds spread evenly over two axles, or 1,000 pounds per axle, for a road wear of O(10^12) times a tiny constant per mile driven. A large truck, of the kind used by Wal-mart, has perhaps 50,000 pounds spread over five axles (an 18-wheeler has 18 wheels, but most of them are paired, so it rides on only five axles), or 10,000 pounds per axle. That's a road wear of O(10^16) times constant per mile driven, or roughly 10,000 times greater than a passenger car.
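As a sanity check on the fourth-power rule, here's the arithmetic spelled out. The figures are assumptions for illustration: a 2,000 lb car on two axles versus a 50,000 lb truck on five axles (a standard tractor-trailer rides on five axles, since most of its 18 wheels are paired), which puts the wear ratio around 10,000:

```python
# Fourth-power rule of thumb: road damage scales as (load per axle)^4.
# Assumed figures: 2,000 lb car on 2 axles vs. 50,000 lb truck on 5 axles
# (the standard tractor-trailer configuration).

def relative_road_wear(gross_weight_lbs: float, axles: int) -> float:
    """Relative wear per mile, in arbitrary units: (load per axle)^4."""
    return (gross_weight_lbs / axles) ** 4

car = relative_road_wear(2_000, 2)      # 1,000 lb/axle -> 1e12
truck = relative_road_wear(50_000, 5)   # 10,000 lb/axle -> 1e16

print(f"car:   {car:.3g}")
print(f"truck: {truck:.3g}")
print(f"truck/car ratio: {truck / car:,.0f}")
```

Even if you quibble with the exact weights, the fourth power means any plausible numbers put a loaded truck several orders of magnitude above a car.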

On rural interstates, trucks form between 10% and 50% of traffic.

Thus almost all highway repair dollars are artificially propping up the trucking industry, creating phony profits for the trucking companies by siphoning tax dollars from citizens.

Comment by chronos on Hayekian Prediction Markets? · 2010-02-16T02:29:05.255Z · score: 10 (10 votes) · LW · GW

I'm not fully read up on Hayek specifically, but the Austrian point in general is that regulations create barriers that shift the average size of a corporation, and the shift is almost exclusively upward because it takes a larger company to hire lawyers to figure out what the regulations mean. This creates a selective pressure for larger corporations, due to an artificially imposed economy of scale.

Specifically, what is it about Wal-mart that is so economically scalable? Wal-mart is not like Intel: they don't make a ten billion dollar investment, then earn profit at zero marginal cost. They don't manufacture anything, therefore they don't benefit from manufacturing economies of scale. What is it about Wal-mart in particular that does have marginal cost approaching zero?

There are two components to that.

The first answer is: Wal-mart profits from the logistics of shipping via truck across the continental United States. Wal-mart has very effectively parlayed that core business competency into the specific niche application of big-box Wal-mart stores. If Wal-mart were to voluntarily cleave itself into two pieces along the logistics line, Wal-mart Shipping could take on other shipping traffic besides just what Wal-mart Retail is selling. Thus, the business would scale to an even greater size and the marginal cost would fall closer to the pure gasoline cost. Logically, Wal-mart should spin off Wal-mart Shipping as a separate company to reap more profits. In practice, they do not, and they have good reasons why they do not.

The second answer is: Wal-mart profits from strong-arming its suppliers into selling at monopsony prices. Wal-mart's Home Office almost entirely consists of "buyers", a role that's half corporate bureaucrat and half used-car salesman. The buyers go to companies and ask them for deals. Larger companies, e.g. ConAgra or any of the other food oligopolies, might tell Wal-mart to piss off. But smaller players receive offers they can't refuse.

Google for "Wal-mart Vlasic" for a classic example. Wal-mart wanted a "statement item", something they could show off for marketing purposes as an iconic example of Wal-mart's cheap prices. They decided that they wanted to sell a gallon jar of pickles for \$3. In most households, a gallon jar of pickles is something that cannot be used up before it goes bad, but that's beside the point: if it's only \$3, that's the same price as a jar one quarter the size, and you'd have to be a fool to pay \$3 and "only" get a quart of pickles.

So, Wal-mart went to Vlasic and said, "We want to sell a gallon jar of pickles for \$3". Vlasic said, "Are you crazy? That's not even close to break-even!". Wal-mart said, "Oh, well if you're not interested, that's fine. But it would be a shame if we were to, you know, accidentally forget to order any pickles at all from you, even the profitable sizes." Vlasic said, "... you bastards." and conceded.

Thus, Wal-mart sold gallon jars of Vlasic pickles for \$3 for one summer, undoing Vlasic's previous positioning as a "premium" pickle brand that was worth a slightly higher cost in exchange for greater quality.

If Vlasic had been part of a bigger corporate conglomerate, i.e. not puny little Pinnacle Foods that owns a suite of also-ran brands, they would've had the power to say no. If Wal-mart had refused to carry one brand, Vlasic's hypothesized large parent company could've played Mutually Assured Destruction against them by refusing to sell their more popular brands at Wal-mart. But Pinnacle didn't have enough big brands in their brand portfolio, and thus was cowed into submission. (Note the creepy parallels to software patent law.)

Comment by chronos on Hayekian Prediction Markets? · 2010-02-16T01:45:10.336Z · score: 0 (2 votes) · LW · GW

Actually, I'm not by any stretch of the imagination convinced that Wal-mart is a highly profitable corporation by any long-term measure: that is, I'm quite convinced (probability greater than 0.99) that Wal-mart is sacrificing long-term growth and sustainability in favor of superficial short-term gains. Upper management is desperate to do anything to make the stock price budge, long term be damned. Eventually, this superficiality will expose itself as the house of cards it truly is.

The recent news that Sam's Club is firing over 10,000 employees is salt in the wound here. You don't cut employees if you plan to proactively expand your business, you cut employees to entrench yourself and react defensively to the market moving around you. You especially don't cut your marketers and salespeople (which is what those people giving out free samples are... err, were... functioning as). You might shift a slice of your labor budget from a less effective strategy to a more effective one, but you don't simply drop the slice entirely and pocket the change. That will destroy the business, even if it pads the golden parachutes on the way out.

And it's not like the job cuts are the only piece of evidence. Apparently, in the few years it's been since I worked there, they've stopped hiring full-time employees entirely: now, all hires are part-time. Thus: second-rate health insurance (not that health insurance ought to come from employers in the first place), no retirement plan (not even the crappy Wal-mart stock they pawned off on full-time employees like me), and workers who are easier to fire on a whim. But this has the hidden cost of needing to re-train people from scratch as the n00bs enter the revolving door, and it also sabotages the quality of labor by destroying any sense of loyalty to the company or enjoyment of the work environment. They then respond to falling labor quality by trying to wring even more out of the employees they have, creating a vicious downward spiral as the best employees walk out the door for greener pastures.

If I had any substantial amount of money invested in Wal-mart (i.e. beyond the pittance of stock accumulated in my 5 year employment, almost too trivial to bother with), I would be pulling it out now for saner investments.

Comment by chronos on Hayekian Prediction Markets? · 2010-02-16T00:59:07.718Z · score: 10 (12 votes) · LW · GW

Have you ever worked at Wal-mart? I have: I worked overnights as a shelf stocker for almost 5 years. The Soviet Union analogy is quite apt, although I'd peg it as closer to being a less gruesome version of the Great Leap Forward.

• We'd joke to new hires about the Sam Walton statue in the basement. (The humor came from the non-existence of the basement, and the unease underlying the humor came from the fact we had posters instead of statues only because Bentonville was too cheap to spend more than \$1.99 decorating the breakroom. In hushed tones, cracks about Chairman Mao were common between the better-read employees.)
• Bentonville issued ridiculous edicts that completely ignored the situation on the ground in individual stores.
• Edicts were replete with unrealistic quotas. For example, all employees were expected to stock 70 cases per hour, regardless of department: boxes full of tiny cosmetics bottles are treated identically to cardboard trays holding large blocks of Velveeta cheese (where the tray doubles as the customer display).
• Edicts were inconsistently enforced. One week, the edict is to run backstock. A week later, the edict is to spend more time "zoning" (arranging the product on the shelf for aesthetics). The week after that, the edict is zero overtime. And a week after that, the edict is case counts for everyone (timed speed runs). Then it loops back to a previous edict, and everyone is scolded for not following that edict all along.
• It was physically impossible to perform the job while fulfilling all edicts.
• Sometimes, the more sympathetic managers would commiserate with us about being ordered to enforce the edicts. These were usually the managers who quit, got fired, or voluntarily stepped down because of the stress. One manager disappeared for six months, rumor has it due to a stress breakdown.
• The less compassionate managers attempted to groom themselves for a position within The Party^W^WHome Office. Looking good to the higher bureaucrats was the only concern. They were the only ones who got promoted to Store Co-Manager and above.
• If a plan failed, it was because the store (managers, employees, or both) had failed to execute it.
• If blame for failure could be pinned on a specific person or team, they would be drummed out.
• "Drumming out" would consist of enforcing all standing edicts to the letter, then punishing them for insubordination when an edict was broken: "verbal" coaching, written coaching, decision-day, fired.
• A verbal coaching still involves written documentation, because Bentonville does not permit managers leeway, interpretation, or anything that can be swayed by compassion.
• A "d-day" would send you home for a paid day: you were required to write an essay explaining why you deserved to keep your job.

EDIT: Oh, and how could I forget: this was replete with visits from Party Officials^W^WRegional Managers. The visits were officially "secret", but of course the Store Manager would be tipped off by someone in the Regional Office. Thus, the next 24 hours would be spent artificially polishing the store (zoning, filling holes on the shelves with products that don't belong there) at the cost of doing the real work.

Comment by chronos on New Year's Predictions Thread · 2010-01-11T04:53:30.653Z · score: 1 (1 votes) · LW · GW

I wasn't even considering the possibility of static images in video games, because static images aren't generally considered to count in modern video games. The world doesn't want another Myst game, and I can only imagine one other instance in a game where photorealistic, non-uncanny static images constitute the bulk of the gameplay: some sort of a dialog tree / disguised puzzle game where one or more still characters' faces changed in reaction to your dialog choices (i.e. something along the lines of a Japanese-style dating sim).

Comment by chronos on New Year's Predictions Thread · 2010-01-01T21:33:32.777Z · score: 0 (0 votes) · LW · GW

The obvious answer would be "offline rendering".

Even if the non-interactivity of pre-rendered video weren't an issue, games as a category can't afford to pre-render more than the occasional cutscene here or there: a typical modern game is much longer than a typical modern movie -- typically by at least one order of magnitude, i.e. 15 to 20 hours of gameplay, and the storyline often branches as well. In terms of dollars grossed per hours rendered, games simply can't afford to keep up. Thus, the rise of real-time hardware 3D rendering in both PC gaming and console gaming.

Comment by chronos on The Moral Status of Independent Identical Copies · 2009-12-02T06:05:35.936Z · score: 0 (0 votes) · LW · GW

And, since I can't let that stand without tangling myself up in Yudkowsky's "Outlawing Anthropics" post, I'll present my conclusion on that as well:

To recapitulate the scenario: suppose 20 copies of me are created and go to sleep, and a fair coin is tossed. If heads, 18 go to green rooms and 2 go to red rooms; if tails, vice versa. Upon waking, each copy in a green room is asked to approve the proposal "Give \$1 to each copy in a green room, while taking \$3 from each copy in a red room." (All must agree, or something sufficiently horrible happens.)

The correct answer is "no". Because I have copies and I am interacting with them, it is not proper for me to infer from my green room that I live in heads-world with 90% probability. Rather, there is 100% certainty that at least 2 of me are living in a green room, and if I am one of them, then the odds are 50-50 whether I have 1 companion or 17. I must not change my answer if I value my 18 potential copies in red rooms.

However, suppose there were only one of me instead. There is still a coin flip, and there are still 20 rooms (18 green/red and 2 red/green, depending on the flip), but I am placed into one of the rooms at random. Now, I wake in a green room, and I am asked a slightly different question: "Would you bet the coin was heads? Win +\$1, or lose -\$3". My answer is now "yes": I am no longer interacting with copies, the expected utility is +\$0.60, so I take the bet.
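The arithmetic behind both answers can be checked directly; this is just a sketch of the two expected-value calculations implied above:

```python
# Setup: 20 copies; heads -> 18 green / 2 red rooms, tails -> 2 green / 18 red.

# Collective bet: +$1 to each green-room copy, -$3 from each red-room copy.
heads_total = 18 * 1 - 2 * 3    # +$12 across all copies if heads
tails_total = 2 * 1 - 18 * 3    # -$52 across all copies if tails
collective_ev = 0.5 * heads_total + 0.5 * tails_total
print(collective_ev)            # negative, so the correct answer is "no"

# Single-person variant: one person placed in one of the 20 rooms at random.
# Bayes: P(heads | green room) = (0.5 * 18/20) / (0.5 * 18/20 + 0.5 * 2/20)
p_heads_given_green = (0.5 * 18 / 20) / (0.5 * 18 / 20 + 0.5 * 2 / 20)
single_ev = p_heads_given_green * 1 + (1 - p_heads_given_green) * (-3)
print(round(single_ev, 2))      # positive, so taking the bet is correct
```

The collective bet loses \$20 in expectation across all copies, while the single-person bet gains \$0.60, which is why the two situations call for different answers.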

The stuff about Boltzmann brains is a false dilemma. There's no point in valuing the Boltzmann brain scenario over any of the other "trapped in the Matrix" / "brain in a jar" scenarios, of which there is a limitless supply. See, for instance, this lecture from Lawrence Krauss -- the relevant bits are from 0:24:00 to 0:41:00 -- which gives a much simpler explanation for why the universe began with low entropy, and doesn't tie itself into loops by supposing Boltzmann pocket universes embedded in a high-entropy background universe.

Comment by chronos on The Moral Status of Independent Identical Copies · 2009-12-02T04:51:39.696Z · score: 0 (0 votes) · LW · GW

Ruminating further, I think I've narrowed down the region where the fallacious step occurs.

Suppose there are 100 simulacra, and suppose for each simulacrum you flip a coin biased 9:1 in favor of heads. You choose one of two actions for each simulacrum, depending on whether the coin shows heads or tails, but the two actions have equal net utility for the simulacra so there are no moral conundra. Now, even though the combination of 90 heads and 10 tails is the most common, the permutations comprising it are nonetheless vastly outnumbered by all the remaining permutations. Suppose that after flipping 100 biased coins, the actual result is 85 heads and 15 tails.

What is the subjective probability? The coin flips are independent events, so the subjective probability of each coin flip must be 9:1 favoring heads. The fact that only 85 simulacra actually experienced heads is completely irrelevant.

Subjective probability arises from knowledge, so in practice none of the simulacra experience a subjective probability after a single coin flip. If the coin flip is repeated multiple times for all simulacra, then as each simulacrum experiences more coin flips while iterating through its state function, it will gradually converge on the objective probability of 90%. The first coin flip merely biases the experience of each simulacrum, determining the direction from which each will converge on the limit.
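The convergence claim above can be illustrated with a minimal simulation: two simulacra, one whose first flip was tails and one whose first flip was heads, both converge on the objective 90% frequency, with the first flip only determining the direction of approach. (The simulation parameters here are arbitrary choices for illustration.)

```python
import random

random.seed(0)
P_HEADS = 0.9    # coin biased 9:1 in favor of heads
N_FLIPS = 10_000

freqs = []
# Force the first flip, then let the remaining flips fall at the true bias.
for first_flip in (0, 1):
    flips = [first_flip] + [1 if random.random() < P_HEADS else 0
                            for _ in range(N_FLIPS - 1)]
    freq = sum(flips) / len(flips)
    freqs.append(freq)
    print(f"first flip {'heads' if first_flip else 'tails'}: "
          f"running frequency of heads = {freq:.3f}")
```

After enough flips, both running frequencies sit near 0.9 regardless of the forced first experience.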

That said, take what I say with a grain of salt, because I seriously doubt this can be extended from the classical realm to cover quantum simulacra and the Born rule.

Comment by chronos on The Moral Status of Independent Identical Copies · 2009-12-01T06:54:29.994Z · score: 2 (2 votes) · LW · GW

Reading the post you linked to, it feels like some sort of fallacy is at work in the thought experiment as the results are tallied up.

Specifically: suppose we live in copies-matter world, and furthermore suppose we create a multiverse of 100 copies, 90 of which get the good outcome and 10 of which get the bad outcome (using the aforementioned biased quantum coin, which through sheer luck gives us an exact 90:10 split across 100 uncorrelated flips). Since copies matter, we can conclude it's a moral good to post hoc shut down 9 of the 10 bad-outcome copies and replace those simulacra with 9 duplicates of existing good-outcome copies. While we've done a moral wrong by discontinuing 9 bad-outcome copies, we do a greater moral right by creating 9 new good-outcome copies, and thus we paperclip-maximize our way toward greater net utility.

Moreover, still living in copies-matter world, it's a net win to shut down the final bad-outcome copy (i.e. "murder", for lack of a better term, the last of the bad-outcome copies) and replace that final copy with one more good-outcome copy, thus guaranteeing that the outcome for all copies is good with 100% odds. Even supposing the delta between the good outcome and the bad outcome was merely one speck of dust in the eye, and furthermore supposing that the final bad-outcome copy was content with the bad outcome and would have preferred to continue existing.

At this point, the overall multiverse outcome is identical to the quantum coin having double heads, so we might as well have not involved quantum pocket change in the first place. Instead, knowing that one outcome was better than the other, we should have just forced the known-good outcome on all copies in the first place. With that, copies-matter world and copies-don't-matter world are now reunified.

Returning to copies-don't-matter world (and our intuition that that's where we live), it feels like there's an almost-but-not-quite-obvious analogy with Shannon entropy and/or Kolmogorov-Chaitin complexity lurking just under the surface.

Comment by chronos on The continued misuse of the Prisoner's Dilemma · 2009-10-25T03:22:58.929Z · score: 6 (6 votes) · LW · GW

I'm reminded of a real-world similar example: World of Warcraft loot ninjas.

Background: when a good item drops in a dungeon, each group member is presented with two buttons, a die icon ("need") and a pile-of-gold icon ("greed"). If one or more people click "need", the server rolls a random 100-sided die for each player who clicked "need", and the player with the highest roll wins the item. If no one in the group clicked "need", then the server rolls dice for everyone in the group. Usually players enter dungeons in the hopes of obtaining items that directly improve their combat effectiveness, but many items can also be sold at the in-game auction house, sometimes for a substantial amount of gold, so that a character can still benefit indirectly even if the item itself has no immediate worth.
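The roll mechanics described above are easy to sketch. This is a simplified model of the rules as described, not Blizzard's actual implementation (ties, item bindings, and roll eligibility restrictions are all ignored, and the names are made up):

```python
import random

def roll_loot(group, need_clickers):
    """Resolve one item drop under need-before-greed rules.

    If anyone clicked 'need', only those players roll a d100; otherwise
    everyone in the group rolls. Highest roll wins (ties broken arbitrarily).
    """
    eligible = list(need_clickers) if need_clickers else list(group)
    rolls = {player: random.randint(1, 100) for player in eligible}
    winner = max(rolls, key=rolls.get)
    return winner, rolls

group = ["you", "tank", "healer", "mage", "ninja"]

# Everyone clicks 'greed': all five players roll, any of them can win.
winner, rolls = roll_loot(group, [])
print(winner, rolls)

# A lone loot ninja clicks 'need': the ninja rolls unopposed and always wins.
winner, _ = roll_loot(group, ["ninja"])
print(winner)
```

The sketch makes the ninja's incentive obvious: clicking "need" shrinks the eligible pool, and a lone "need" click wins with certainty.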

As you can imagine, "pick-up groups" (i.e. four random strangers you might never party with again) often suffer from loot ninjas: people who intentionally click on the "need" button to vastly improve their odds of obtaining items, even when the item holds no direct value for themselves but does hold direct value for another party member.

And, indeed, a common loot ninja strategy is to feign ignorance of the "need versus greed" loot roll system (which, to be fair, has legitimately confusing icons) and to use every other possible trick to elicit sympathy, such as feigning bad spelling and grammar, for as long as possible before being booted from the party and forcibly expelled from the dungeon.

Comment by chronos on Intuitive differences: when to agree to disagree · 2009-09-30T03:49:17.770Z · score: 0 (0 votes) · LW · GW

It's not a question of having different evidence: theoretically, you might both even have exactly the same evidence, but gathered in a different order. The question is one of differing interpretations, not raw data as such.

Disappointing, but true. If humans were perfect Bayesians the order of presentation wouldn't matter, but instead our biases kick in and skew the evidence as it arrives.

Edit: Ah, I see you already mentioned confirmation bias versus competent updating.