Comments

Comment by wnewman on Learned Blankness · 2011-04-19T15:54:40.691Z · LW · GW

I would add that it seems common for task difficulty distributions to be skewed in various idiosyncratic ways --- sufficiently common and sufficiently skewed that any uninformed generic intuition about the "noise" distribution is likely to be seriously wrong. E.g., in some fields there's important low-hanging fruit: the first few hours of training and practice might get you 10-30% of the practical benefit of the hundreds of hours of training and practice that would be required to have a comprehensive understanding. In other fields there are large clusters of skills that become easy to learn once you learn some skill that is a shared prerequisite for the entire cluster.

Comment by wnewman on Philosophy: A Diseased Discipline · 2011-03-29T14:33:13.559Z · LW · GW

lukeprog wrote "philosophers are 'spectacularly bad' at understanding that their intuitions are generated by cognitive algorithms." I am pretty confident that minds are physical/chemical systems, and that intuitions are generated by cognitive algorithms. (Furthermore, many of the alternatives I know of are so bizarre that given that such an alternative is the true reality of my universe, the conditional probability that rationality or philosophy is going to do me any good seems to be low.) But philosophy as often practiced values questioning everything, and so I don't think it's quite fair to expect philosophers to "understand" this (which I read in this context as synonymous with "take this for granted"). I'd prefer a formulation like "spectacularly bad at seriously addressing [or, perhaps, even properly understanding] the obvious hypothesis that their intuitions are generated by cognitive algorithms." It seems to me that the criticism rewritten in this form remains severe.

Comment by wnewman on How to Be Happy · 2011-03-18T16:57:59.470Z · LW · GW

I'd prefer that their answers about equal responsibility for parenting be consistent with their answers for equal right to be awarded disputed child custody. Holding either consistent position (mothers' parenting presence is essentially special in very important ways that can't generally be replaced by fathers, or mothers and fathers should be treated equally) seems less wrong than opportunistically switching between one position to justify extra parental rights and roles in divorce and the other position to justify equal parental responsibilities and roles in marriage.

(Of course, my simple dichotomy shatters into more possibilities if marriage is considered a custom contract defined by negotiation between the spouses. But marriage and family law in general seem very nearly a one-size-fits-all status defined by government, with only a small admixture of contract (pre-nups and such) having AFAIK almost no legal force regarding child care and custody. Thus I don't think the dichotomy is a gross distortion.)

Comment by wnewman on Science: Do It Yourself · 2011-02-17T19:45:09.076Z · LW · GW

Of course there could well be some exaggeration for dramatic effect there --- as David Friedman likes to say, one should be skeptical of any account which might survive on its literary or entertainment value alone. But it's not any sort of logical impossibility. In Dallas near UTD (which had a strong well-funded chess team which contributed some of the strong coffeehouse players) ca. 2002 I was able to play dozens of coffeehouse games against strangers and casual acquaintances. One can also play in tournaments and in open-to-all clubs. Perhaps one could even play grudge matches against people one dislikes. Also today one can play an enormous number of strangers online, and even in the 1970s people played postal chess.

Comment by wnewman on Science: Do It Yourself · 2011-02-17T18:56:16.848Z · LW · GW

I don't have enough data to compare such gaming outcomes very well, but I'll pass on something that I thought was funny and perhaps containing enough truth to be thought-provoking (from Aaron Brown's The Poker Face of Wall Street): "National bridge champion and hedge fund manager Josh Parker explained the nuances of serious high school games players to me. The chess player did well in school, had no friends, got 800s on his SATs, and did well at a top college. The poker and backgammon set (one crowd in the 1970s) did badly in school, had tons of friends, aced their SATs, and were stars at good colleges. The bridge players flunked out of high school, had no friends, aced their SATs, and went on to drop out of top colleges. In the 1980s, we all ended up trading options together."

Also, FWIW, Bill Gates and Warren Buffett are apparently in the bridge camp, though I dunno whether they played in high school.

Comment by wnewman on Science: Do It Yourself · 2011-02-17T13:56:07.429Z · LW · GW

Wei_Dai writes "I wonder if I'm missing something important by not playing chess."

I am a somewhat decent chess player[*] and a reasonable Go player (3 dan, barely, at my last rated tournament a few years ago). If you're inclined to think about cognition itself, and about questions like the value of heuristics and approximations that only work sometimes, such games are great sources of examples. In some cases, the strong players have already been thinking along those lines for more than a century, though using a different vocabulary. E.g., Go concepts like aji and thickness seem strongly related to Less Wrong discussions of the relative value of conjunctive and disjunctive plans.

There might also be some rationalist value in learning at the gut level that we're not on the primordial savannah any more by putting a thousand or so hours or more into at least one discipline where you can be utterly unmistakably crushed by someone who scores a zero on the usual societal/hindbrain tags for seriousness (like a bored 9-year-old Ukrainian who obliterates you in the first round of the tournament on the way to finishing it undefeated, with a rating provisionally revised to 5 dan :-).

That said, I think you will probably get much more bang for your rationalist-wins-the-world buck from studying other things. In particular, I'd nominate (1) math along the usual engineering-ish main sequence (calculus, linear algebra, Fourier analysis, probability, statistics) and (2) computer programming. History and microeconomics-writ-large are also strong candidates. So it's not particularly worth studying chess or go beyond the point where you just find it fun for its own sake.

[*] high-water mark, approximately 60 seconds before I abandoned my experiment with playing chess somewhat seriously: forced threefold repetition against a 2050ish-rated player who happened to be tournament director of the local chess club minitournament, who had told me earlier that I could stop recording when my clock fell below 5 minutes, and who ruled upon my threefold repetition that it didn't count as a draw because the game was not being recorded.

Comment by wnewman on Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields · 2011-02-16T20:54:15.296Z · LW · GW

I wasn't trying to be hard on that kind of collecting, though I was making a distinction. To me, choosing stamps (as opposed to, e.g., butterflies or historical artifacts) as a type specimen suggests that the collecting is largely driven by fashion or sentiment or some other inner or social motive, not because the objects are of interest for piecing together a vast disorderly puzzle found in the outer physical world. Inner and social motives are fine with me, though my motivation in such things tends toward things other than collecting. (E.g., music, Go, and chess.)

Comment by wnewman on Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields · 2011-02-16T19:26:24.854Z · LW · GW

You wrote "what I chose to do to resolve the matter was to deep dive into three often-raised skeptic arguments using my knowledge of physics as a starting point" and "deliberate misinformation campaigns in the grand tradition of tobacco [etc.]".

Less Wrong is not the place for a comprehensive argument about catastrophic AGW, but I'd like to make a general Less-Wrong-ish point about your analysis here. It is perceptive to notice that millions of dollars are spent on a shoddy PR effort by the other side. It is also perceptive to notice that many of the other side's most popular arguments aren't technically very strong. It's even actively helpful to debunk unreasonable popular arguments even if you only do it for those which are popular on the other side. However, remember that it's sadly common that regardless of their technical merits, big politicized controversies tend to grow big shoddy PR efforts associated with all factions. And even medium-sized controversies tend to attract some loud clueless supporters on both sides. Thus, it's not a very useful heuristic to consider significant PR spending, or the popularity of flaky arguments, as particularly useful evidence against the underlying factual position.

It may be "too much information [about AGW]" for Less Wrong, but I feel I should support my point in this particular controversy at least a little, so... E.g., look at the behavior of Pachauri himself in the "Glaciergate" glaciers-melting-by-2035 case. I can't read the guy's mind, and indeed find some of his behavior quite odd, so for all I know it is not "deliberate." But accidental or not, it looks rather like an episode in a misinformation campaign in the sorry tradition of big-money innumerate scare-environmentalism. Also, Judith Curry just wrote a blog post which mentions, among other things, the amount of money sloshing around in various AGW-PR-related organizations associated with anti-IPCC positions. For comparison, a rather angry critic I don't know much about (but one who should, at a minimum, be constrained by British libel law) ties the Glaciergate factoid to grants of $500K and $3M, and Greenpeace USA seems to have an annual budget of around $30M.

Comment by wnewman on Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields · 2011-02-16T18:15:53.408Z · LW · GW

It has a germ of truth, but I think it's deeply misleading. In particular, it needs some kind of nod to the importance of relevance to everyday life. E.g., it would be more serious to claim "all science is either physics, or the systematizing side of some useful discipline like engineering, or stamp collecting." Pure stamp collecting endeavors have nothing to stop them from veering into the behavior stereotypically associated with modern art or the Sokal hoax. Fields like paleobotany or astronomy (or, indeed, physics itself in near-unobservable limits) can become arbitrarily pure stamp collecting when the in-group controls funding. More applied fields like genetics or immunology or synthetic chemistry or geology are messy and disordered compared to pure physics, and do resemble stamp collecting in that messiness. But true stamp collecting is not merely messy, but also arbitrarily driven by fashion. To the extent that a significant amount of the interest (and money) associated with an academic field flows from applications like agriculture and medicine and resource extraction, it tends not to dive so deeply into true free-floating arbitrariness of pure stamp collecting.

Comment by wnewman on Humans are not automatically strategic · 2010-09-09T15:46:38.382Z · LW · GW

It seems to me that once our ancestors' tools got good enough that their reproductive fitness was qualitatively affected by their toolmaking/toolusing capabilities (defining "tools" broadly enough to include things like weapons, fire, and clothing), they were on a steep slippery slope to the present day, so that it would take a dinosaur-killer level of contingent event to get them off it. (Language and such helps a lot too, but as they say, language and a gun will get you more than language alone. :-) Starting to slide down that slope is one kind of turning point, but it might be hard to define that "point" with a standard deviation smaller than one hundred thousand years.

The takeoff to modern science and the industrial revolution is another turning point. Among other things related to this thread, it seems to me that this takeoff is when the heuristic of not thinking about grand strategy at all seriously and instead just doing what everyone has "always" done loses some of its value, because things start changing fast enough that most people's strategies can be expected to be seriously out of date. That turning point seems to me to have been driven by arrival at some combination of sufficient individual human capabilities, sufficient population density, and sufficient communications techniques (esp. paper and printing) which serve as force multipliers for population density. Again it's hard to define precisely, both in terms of exact date of reaching sufficiency and in terms of quite how much is sufficient; the Chinese ca. 1200 AD and the societies around the Mediterranean ca. 1 AD seem like they had enough that you wouldn't've needed enormous differences in contingent factors to've given the takeoff to them instead of to the Atlantic trading community ca. 1700.

Comment by wnewman on Humans are not automatically strategic · 2010-09-09T15:11:21.129Z · LW · GW

You write "Eliezer made a very interesting claim-- that current hardware is sufficient for AI. Details?"

I don't know what argument Eliezer would've been using to reach that conclusion, but it's the kind of conclusion people typically reach if they do a Fermi estimate. E.g., take some bit of nervous tissue whose function seems to be pretty well understood, like the early visual preprocessing (edge detection, motion detection...) in the retina. Now estimate how much it would cost to build conventional silicon computer hardware performing the same operations; then scale that cost up by the ratio of the whole brain's volume of nervous tissue to the retina's to get an estimated hardware cost for the whole brain.
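As a toy illustration of the arithmetic (the specific numbers below are placeholder assumptions of roughly the magnitudes that show up in Moravec-style analyses, not figures taken from any particular source), the scaling step might look like:

    # Toy Fermi estimate: scale from a well-studied piece of nervous tissue
    # (the retina) up to the whole brain, then price the equivalent compute.
    # Every number here is a rough placeholder assumption for illustration.

    retina_equiv_ops = 1e9        # assumed ops/sec to match retinal preprocessing
    brain_to_retina_volume = 1e5  # assumed volume ratio, whole brain vs. retina

    brain_equiv_ops = retina_equiv_ops * brain_to_retina_volume   # ~1e14 ops/sec

    ops_per_dollar = 1e9          # assumed price/performance of commodity hardware
    cost_per_ai = brain_equiv_ops / ops_per_dollar

    print(f"estimated hardware cost per human-equivalent AI: ~${cost_per_ai:,.0f}")
    # With these placeholders the answer is ~$100,000; shift any assumption by
    # an order of magnitude and the conclusion shifts with it.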

See http://boingboing.net/2009/02/10/hans-moravecs-slide.html for the conclusion of one popular version of this kind of analysis. I'm pretty sure that the analysis behind that slide is in at least one of Moravec's books (where the slide, or something similar to it, appears as an illustration), but I don't know offhand which book.

The analysis could be grossly wrong if the foundations are wrong, perhaps because key neurons are doing much more than we think. E.g., if some kind of neuron is storing a huge number of memory bits per neuron (which I doubt: admittedly there is no fundamental reason I know of that this couldn't be true, but there's also no evidence for it that I know of) or if neurons are doing quantum calculation (which seems exceedingly unlikely to me; and it is also unclear that quantum calculation can even help much with general intelligence, as opposed to helping with a few special classes of problems related to number theory). I don't know any particularly likely way for the foundations to be grossly wrong, though, so the conclusions seem pretty reasonable to me.

Note also that suitably specialized computer hardware tends to have something like an order of magnitude better price/performance than the general-purpose computer systems which appear on the graph. (E.g., it is much more cost-effective to render computer graphics using a specialized graphics board, rather than using software running on a general-purpose computer board.)

I find this line of argument pretty convincing, so I think it's a pretty good bet that given the software, current technology could build human-comparable AI hardware in quantity 100 for less than a million dollars per AI; and that if the figure isn't yet as low as one hundred thousand dollars per AI, it will be that low very soon.

Comment by wnewman on A Taxonomy of Bias: The Cognitive Miser · 2010-07-03T21:04:01.846Z · LW · GW

You write "We haven't evolved a tendency to use Type 2 because we mostly suck at it."

Maybe "type 2" is generally expensive, as opposed to specifically expensive for humans because humans happen to mostly suck at it. It seems pretty common in successful search-based solutions to AI problems (like planning, pathing, or adversarial games) to use something analogous to the "type 1" vs. "type 2" split, moving a considerable amount of logic into "type 1"-like endpoint evaluation and/or heuristic hints for the search, then layering "type 2"-like search over that and treating the search as expensive. Even in problems that have been analyzed to death by hundreds or thousands of independent programmers (playing chess, e.g.) that design theme persists and enjoys competitive success. The competitive success of this theme across hundreds of independent designs doesn't eliminate the possibility that this is just a reflection of a blind spot in humans' design ability, but it seems to me that the success at least casts considerable doubt on the blind spot explanation. Perhaps we should take seriously the idea that the two-layer theme and/or the relative expense of the two layers are properties of good solutions to this kind of problem, not merely idiosyncrasies of the human brain.
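Here's a minimal sketch of that two-layer theme (a generic depth-limited negamax over a hand-written evaluation, played on a toy subtraction game; it's not any particular program's actual architecture): the cheap "type 1"-like layer is the leaf evaluation, and the expensive "type 2"-like layer is the explicit search stacked on top of it.

    # Toy two-layer design: a cheap heuristic evaluation ("type 1"-like) at the
    # leaves of an explicit, expensive depth-limited search ("type 2"-like).
    # The game: players alternately remove 1-3 stones; taking the last stone wins.

    def evaluate(stones):
        # "Type 1": a cheap rule of thumb about who is winning. (In this toy game
        # the rule happens to be exact; a real program's evaluation is only a guess.)
        return 1.0 if stones % 4 != 0 else -1.0

    def negamax(stones, depth):
        # "Type 2": explicit lookahead, exponentially costly in depth, so it is
        # cut off early and the cheap evaluation fills in at the search frontier.
        if stones == 0:
            return -1.0        # the previous player took the last stone and won
        if depth == 0:
            return evaluate(stones)
        return max(-negamax(stones - take, depth - 1)
                   for take in (1, 2, 3) if take <= stones)

    # A shallow search over the cheap evaluation already agrees with a deeper one,
    # which is the point: most of the knowledge lives in the fast layer.
    print(negamax(17, depth=2), negamax(17, depth=8))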

Comment by wnewman on Applying Behavioral Psychology on Myself · 2010-06-23T16:31:41.687Z · LW · GW

(To summarize my upcoming point in tl;dr form: if you don't find yourself rationalizing "maybe I'm onto the pattern" while your stomach rumbles as you contemplate the upside of getting gummi bears marginally more often, you might be tickling a different variety-seeking mechanism than you think. Nothing wrong with that, but if you want to get really good at optimizing that tickle, detailed knowledge about which mechanism it is might be helpful.)

From time to time when reading technical articles related to effective strategies for artificial agents faced with "n-armed bandit" problems, I am reminded of observed animal behavior patterns like PREE, and wonder how close the correspondence might be. N-armed bandits have been studied for a long time, and it seems like an obvious conjecture, but I have never seen much analysis of this. I never encounter such analysis spontaneously when people talk about a particular psych observation, even at ML-friendly sites like LW. And hunting for it with e.g. Google "partial reinforcement n-armed bandit" suggests that it must be a pretty obscure topic, because in the articles I find, the analysis I am looking for is swamped by different topics like reinforcement learning, and obscure topics like how a web designer trying to optimize humans' response to the website can usefully think of his website A/B testing as an n-armed bandit problem.

Can anyone recommend systematic attempts to explore how close this correspondence might be?

Of the usual pop psychology examples of overresponse to partial reinforcement, it looks to me as though gambling truly is narrowly tuned to the PREE phenomenon, and is working essentially by fooling an agent designed to solve a bandit problem. Other examples, however, tend to be sufficiently ambiguous or contradictory in various ways that I think something unrelated could be going on. Humans can respond to variety in all sorts of positive ways. E.g., (dammit, I'm going blank on the name of) the classic confounding effect in industrial productivity studies where change itself, in either direction, can easily cause a positive effect independent of whether the new situation is objectively better in any useful sense.

Notice that successful gambling operations are contrived so that if you ever did discover even a small pattern (e.g., 53% success instead of 48%) it follows by perfectly correct analysis that the discovery would be enormously valuable. Under such extreme conditions, even a small nudge from a simple n-armed bandit heuristic (like a nagging intuition corresponding to a high prior probability that high variance implies a significant probability of discovering something that improves performance by a mere 10%) can get amplified to dramatically wrong behavior. Also notice that there is a strong observed pattern of compulsive gamblers fooling themselves into thinking they are finding small patterns. If gambling were a case of partial reinforcement directly tickling purely subconscious deep structures unrelated to n-armed banditry, then "I'm onto the pattern" might still sometimes be used to rationalize the irrational behavior, but it's not clear why it would be a strongly favored rationalization (compared to, e.g., "risk taking makes me glamorous").
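As a toy sketch of that kind of amplification (a generic UCB1 bandit agent facing a made-up two-armed problem, not a model of any particular psych experiment): give the agent one arm that always pays zero and one high-variance "casino" arm with a slightly negative mean, and the agent keeps paying to rule out a small edge it can't cheaply distinguish from zero.

    # Toy sketch: UCB1 (a standard n-armed bandit algorithm) versus a casino.
    # Arm 0 ("walk away") always pays 0. Arm 1 ("gamble") pays +1 with
    # probability 0.48 and -1 otherwise: slightly negative mean, high variance.
    # Because a small positive edge would be valuable and can't be cheaply ruled
    # out, the algorithm keeps returning to the losing arm over a long horizon.

    import math
    import random

    def pull(arm, rng):
        if arm == 0:
            return 0.0
        return 1.0 if rng.random() < 0.48 else -1.0

    def run(horizon=10_000, seed=0):
        rng = random.Random(seed)
        counts, means = [0, 0], [0.0, 0.0]
        for arm in (0, 1):                       # initialization: try each arm once
            counts[arm], means[arm] = 1, pull(arm, rng)
        for t in range(3, horizon + 1):
            bonus = [math.sqrt(2 * math.log(t) / counts[i]) for i in (0, 1)]
            arm = 0 if means[0] + bonus[0] >= means[1] + bonus[1] else 1
            r = pull(arm, rng)
            counts[arm] += 1
            means[arm] += (r - means[arm]) / counts[arm]
        return counts[1] / horizon

    print(f"fraction of pulls spent on the losing arm: {run():.2f}")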

Compare this to behavior patterns that aren't observed. E.g., I've never heard of anyone making cigarettes qualitatively more addictive by making them unpredictable, e.g., by selling mixed packs of placebo and nicotine cigarettes. Could this be because there's no way for the robot to get rationally excited about the enormous upside of spotting a small pattern in such randomness? (And anything close to this which does succeed, e.g. toy prizes in cereal boxes, tends to be successful for only about as long as the robot's inputs from the world model let it be strongly uncertain about the upside.)

People do claim to spot partial-reinforcement-related phenomena in other behavior patterns which can't easily be explained as a bandit problem heuristic being tricked. E.g., people often accuse World of Warcraft and similar games of manipulating the intermittent reward mechanism to cause addictive behavior. WoW treasures are indeed randomized, and people do indeed become fascinated by the game, and I don't see how the robot could be getting excited about a huge upside of spotting the pattern. But WoW is in the entertainment industry. WoW developers could have saved a substantial amount of money by hiring far fewer artists to create far fewer kinds of trees and other decorations, but that would have been a bad idea. Hollywood could save even more money by aggressively reusing sets and actors and props and scripts between movies. Even in extremes like soap operas where many customers are looking for repetitive essentially-predictable escape, a successful entertainment product benefits from many kinds of variety. It seems to me that the positive importance of randomizing treasures needn't be explained by partial reinforcement any more than the positive importance, in a soap opera in which villains walk onto the stage in hundreds of different episodes, of avoiding a clear pattern of villains entering stage right every single time.

Comment by wnewman on Defeating Ugh Fields In Practice · 2010-06-21T13:03:19.492Z · LW · GW

In my experience, the rational actor model is generally more like a "model" or an "approximation" or sometimes an "emergent behavior" than an "assumption," and people who want us to criticize it as an "assumption" or "dogma" or "faith" or some such thing are seldom being objective.

(If you think this criticism is merely uninformed or based on a deep misunderstanding, then perhaps it would be rational to turn the phrase "the rationality assumption of neoclassical economics" in your opening paragraph into a hyperlink to some neoclassical authority you are engaging.)

There are various individual cases where it is quite justifiable to beat up neoclassical economists for trying to push rationality too far, either against the clear evidence in simple situations or beyond the messy evidence in complicated situations. As an example of the latter, my casual impression is that the running argument at Falkenblog against the Capital Asset Pricing Model could well be a valid and strong empirical critique. But there are also various individual cases where neoclassical economists can justifiably fire back with "[obvious rational reactions to] incentives matter [and are too often underappreciated]!" E.g., simple clean natural experiments like surprisingly large perverse consequences of a few-thousand-dollar tax incentive for babies born after a sharp cutoff date, or strong suggestive evidence in large messy cases like responses to price controls, high marginal tax rates, or various EU-style labor market policies.

And it seems to me that w.r.t. our rationality when we hold a discussion here about rationality in the real psychohistorical world, the elephant in the room is how commonly people's lively intellectual interest in pursuing the perverse consequences of some shiny new behavioral phenomenon in the real world turns out to be in fact an enthusiasm for privileging their preference for governmental institutions, by judging market institutions (and evidence for them, and theoretical arguments for them, and observed utility of outcomes from them) by a qualitatively harsher standard. The real world is dominated by mixed economies, so the implications of individual irrationality for existing governmental institutions (like democracy and hierarchical technocratic regulatory agencies) have at least as much practical importance as the implications for some idealized model of pure free markets. And neoclassical economists have some fairly good reasons (both theoretical and empirical) to expect key actors in market institutions to display more of some relevant kinds of rationality than (e.g.) random undergrads display in psych experiments, while AFAICS political scientists seldom have comparably good reasons to expect it in institutions in general.

I commend this post for picking a telling example of behavioral anomalies which show a strong impact in the real world (as opposed to, e.g., in bored undergraduates working off a psych class requirement by being lab rats). But I see nothing essentially market-specific about this anomaly. Thus, it is obvious why it is interesting regarding self-help w.r.t. ugh fields, and it is not obvious why when considering its application to the broader world, we should focus on its importance for economics-writ-very-small as opposed to its importance for existing mixed economies. And as above, unless you link to someone significant who actually makes your "rationality assumption" so broadly that this experiment would falsify it, I don't think you've actually engaged your enemy, merely a weak caricature.

Comment by wnewman on Open Thread June 2010, Part 3 · 2010-06-16T21:41:09.067Z · LW · GW

See also the conversational thread which runs through http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/kb3 http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/kb8 http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/kba

Comment by wnewman on Open Thread June 2010, Part 3 · 2010-06-16T21:07:09.523Z · LW · GW

Perhaps the root of our disagreement is that you think (?) that the GR field equations constrain their solutions to conform to Mach's principle, while I think they admit many solutions which don't conform to Mach's principle, and that furthermore that Vladimir_M is probably correct in his sketch of a family of non-Mach-principle solutions.

EY's article seems pretty clear about claiming not that Mach's principle follows deductively from the equations of GR, but that there's a sufficiently natural fit that we might make an inductive leap from observed regularity in simple cases to an expected identical regularity in all cases. In particular EY writes "I do not think this has been verified exactly, in terms of how much matter is out there, what kind of gravitational wave it would generate by rotating around us, et cetera. Einstein did verify that a shell of matter, spinning around a central point, ought to generate a gravitational equivalent of the Coriolis force that would e.g. cause a pendulum to precess." I think EY is probably correct that this hasn't been verified exactly --- more on that below. I also note that from the numbers given in Gravitation, if you hope to fake up a reasonably fast rotating frame by surrounding the experimenter with a rotating shell too distant to notice, you may need a very steep quantity discount at your nonlocal Black-Holes-R-Us (Free Installation At Any Velocity), and more generally that solutions which locally hide GR's preferred rotational frame apparently tend to be associated with very extreme boundary conditions.

You write "under Mach's principle (the version that says only relative motion is meaningful, and which GR agrees with), these consequences of acceleration you describe only exist because of the frame against which to describe the acceleration, which is formed by the (relatively!) non-accelerating the rest of the universe." I think it would be more precise to say not "which GR agrees with" but "which some solutions to the GR field equations agree with." Similarly, if I were pushing a Newman principle which requires that the number of particles in the universe be divisible by 2, I would not say "which GR agrees with" if there were any chance that this might be interpreted as a claim that "the equations of GR require an even number of particles." Solutions to the GR field equations can be consistent with Mach's principle, but I'm pretty sure that they don't need to be consistent with it. The old Misner et al. Gravitation text remarks on how a point of agreement with Mach's principle "is a characteristic feature of the Friedman model and other simple models of a closed universe." So it seems pretty clear that as of 1971, there was no known requirement that every possible solution must be consistent with Mach's principle. And (Bayes FTW!) if no such requirement was known in 1971, but such a requirement was rigorously proved later, then it's very strange that no one has brought up in this discussion the name of the mathematical physicist(s) who is justly famous for the proof.
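To make the equations-versus-solutions distinction concrete, the field equations

\[ G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu} \]

are local differential equations, and like other differential equations they single out a particular spacetime only together with boundary or global conditions; whether the result looks Machian seems to live in those conditions, not in the equations themselves.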

(I'm unlikely to look at The End of Time 'til the next time I'm at UTDallas library, i.e., a week or so.)

Comment by wnewman on Open Thread June 2010, Part 3 · 2010-06-16T15:52:14.578Z · LW · GW

You write "In GR, the very question is nonsense. [0] The universe does not have a position, just relative positions of objects. [1] The universe does not have a velocity, just relative velocities of various objects. [2] The universe does not have an acceleration, just relative accelerations of various objects." This passage incorrectly appeals to GR to lump together three statements that GR doesn't lump together.

See http://en.wikipedia.org/wiki/Inertial_frames_of_reference and note the distinction there between "constant, uniform motion" and various aspects of acceleratedness. Your [0] and [1] describe changes within an inertial frame of reference, while [2] gets you to a non-inertial frame. Not coincidentally, your [0] and [1] are predicted by GR and are consistent with centuries of careful experiment, while [2] is not predicted by GR and is inconsistent with everyday observation with Mark I eyeballs. (With modern vehicles it's common to experience enough acceleration in the vicinity of some low-friction system to notice that acceleration makes conservation of momentum appear to break down, in the vehicle's frame, in ways that a constant displacement and/or uniform motion doesn't.)
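The Newtonian-limit version of that everyday observation can be written down directly: in a frame that accelerates at a_0 and rotates at angular velocity omega relative to an inertial frame, Newton's second law picks up extra terms,

\[ m\ddot{\vec r} = \vec F - m\vec a_0 - 2m\,\vec\omega\times\dot{\vec r} - m\,\vec\omega\times(\vec\omega\times\vec r) - m\,\dot{\vec\omega}\times\vec r, \]

and those extra terms are locally measurable (Foucault pendulums, the lurch in a braking car) without reference to any distant object, whereas a constant offset or a uniform velocity contributes no terms at all.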

Comment by wnewman on Open Thread June 2010, Part 3 · 2010-06-16T14:14:10.668Z · LW · GW

"But the good news is, there is no need! All we need to do to check which is faster, is throw some sample inputs at each and run tests."

"no need"? Sadly, it's hard to use such simple methods as anything like a complete replacement for proofs. As an example which is simultaneously extreme and simple to state, naive quicksort has good expected asymptotic performance, but its (very unlikely) worst-case performance falls back to bubble sort. Thus, if you use quicksort naively (without, e.g., randomizing the input in some way) somewhere where an adversary has strong influence over the input seen by quicksort, you can create a vulnerability to a denial-of-service attack. This is easy to understand with proofs, not so easy either to detect or to quantify with random sampling. Also, the pathological input has low Kolmogorov complexity, so the universe might well happen give it to your system accidentally even in situations where your aren't faced by an actual malicious intelligent "adversary."

Also sadly, we don't seem to have very good standard technology for performance proofs. Some years ago I made a horrendous mistake in an algorithm preprint, and later came up with a revised algorithm. I also spent more than a full-time week studying and implementing a published class of algorithms and coming to the conclusion that I had wasted my time because the published claimed performance is provably incorrect. Off and on since then I've put some time into looking at automated proof systems and the practicalities of proving asymptotic performance bounds. The original poster mentioned ACL2; I've looked mostly at HOL Light (for ordinary math proofs) and to a lesser extent Coq (for program/algorithm proofs). The state of the art for program/algorithm proofs doesn't seem terribly encouraging. Maybe someday it will be a routine master's thesis to, e.g., gloss Okasaki's Purely Functional Data Structures with corresponding performance proofs, but we don't seem to be quite there yet.

Comment by wnewman on Open Thread June 2010, Part 3 · 2010-06-15T13:37:36.858Z · LW · GW

(two points, one about your invocation of frame-dragging upstream, one elaborating on prase's question...)

point 1: I've never studied the kinds of tensor math that I'd need to use the usual relativistic equations; I only know the special relativistic equations and the symmetry considerations which constrain the general relativistic equations. But it seems to me that special relativity plus symmetry suffice to justify my claim that any reasonable mechanical apparatus you can build for reasonable-sized planets in your example will be practically indistinguishable from Newtonian predictions.

It also seems to me that your cited reference to wikipedia "frame-dragging" supports my claim. E.g., I quote: "Lense and Thirring predicted that the rotation of an object would alter space and time, dragging a nearby object out of position compared with the predictions of Newtonian physics. The predicted effect is small --- about one part in a few trillion. To detect it, it is necessary to examine a very massive object, or build an instrument that is very sensitive."

You seem to be invoking the authority of standard GR to justify an informal paraphrase of a version of Mach's principle (which has its own wikipedia article). I don't know GR well enough to be absolutely sure, but I'm about 90% sure that by doing so you misrepresent GR as badly as one misrepresents thermodynamics by invoking its authority to justify the informal entropy/order/whatever paraphrases in Rifkin's Entropy or in various creationists' arguments of the form "evolution is impossible because the second law of thermo prevents order from increasing spontaneously."

point 2: I'll elaborate on prase's "What do you expect as a non-negligible difference made by (non-)existence of distant objects?" IIRC there was an old (monastic?) thought experiment critique of the Aristotelian "heavy bodies fall faster": what happens when you attach an exceedingly thin thread between two cannonballs before dropping them? Similarly, what happens to the rotational physics of two bodies alone in the universe when you add a single neutrino very far away? Does the tiny perturbation cause the two cannonballs discontinuously to have doubly-heavy-object falling dynamics, or the rotation of the system to discontinuously become detectable?

Comment by wnewman on Open Thread June 2010, Part 3 · 2010-06-14T15:02:59.533Z · LW · GW

Relativity says that as motion becomes very much slower than the speed of light, behavior becomes very similar to Newton's laws. Everyday materials (and planetary systems) and energies give rise to motions very very much slower than the speed of light, so it tends to be very very difficult to tell the difference. For a mechanical experimental design that can be accurately described in a nontechnical blog post and that you could reasonably imagine building for yourself (e.g., a Foucault-style pendulum), the relativistic predictions are very likely to be indistinguishable from Newton's predictions.
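A quick way to see how small the corrections are: the relativistic factor

\[ \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \approx 1 + \frac{v^2}{2c^2}, \]

and even at Earth's orbital speed of roughly 30 km/s, v^2/c^2 is about 10^-8, so the mechanical predictions differ from Newton's only at the level of parts in a hundred million.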

(This is very much like the "Bohr correspondence principle" in QM, but AFAIK this relativistic correspondence principle doesn't have a special name. It's just obvious from Einstein's equations, and those equations have been known for as long as ordinary scientists have been thinking about (speed-of-light, as opposed to Galilean) relativity.)

Examples of "see, relativity isn't purely academic" tend to involve motion near the speed of light (e.g., in particle accelerators, cosmic rays, or inner-sphere electrons in heavy atoms), superextreme conditions plus sensitive instruments (e.g., timing neutron stars or black holes in close orbit around each other), or extreme conditions plus supersensitive instruments (e.g., timing GPS satellites, or measuring subtle splittings in atomic spectroscopy).

Comment by wnewman on Less Wrong Book Club and Study Group · 2010-06-13T16:04:13.795Z · LW · GW

I live in Plano (i.e., for y'all far away, a bit north of Dallas). I might be interested in participating in a meatspace study group arrangement of some sort. I've never done something like this outside of university classes, dunno how it'd work out, except to guess that it probably depends strongly on individual personalities and schedules and such.

I've studied parts of the Jaynes book in the past. Recently I've been studying more specialized machine learning techniques, like support vector machines, but it seems clear that more time spent studying the more general and fundamental stuff would be time well spent in understanding specialized techniques, and the Jaynes book looks like a good candidate for such study.