Posts

LINK: Study demonstrates politically motivated innumeracy 2013-09-04T18:14:21.891Z
LINK: Infinity, probability and disagreement 2013-03-05T04:36:10.320Z
[Link] Offense 101 2012-10-24T21:28:47.231Z
Rationality Quotes August 2012 2012-08-03T15:33:53.905Z

Comments

Comment by Alejandro1 on Stupid Questions December 2016 · 2016-12-28T17:07:37.934Z · LW · GW

The question is analogous to the Grim Reaper Paradox, described by Chalmers here:

A slightly better example of prima facie without ideal positive conceivability may be the Grim Reaper paradox (Benardete 1964; Hawthorne 2000). There are countably many grim reapers, one for every positive integer. Grim reaper 1 is disposed to kill you with a scythe at 1pm, if and only if you are still alive then (otherwise his scythe remains immobile throughout), taking 30 minutes about it. Grim reaper 2 is disposed to kill you with a scythe at 12:30 pm, if and only if you are still alive then, taking 15 minutes about it. Grim reaper 3 is disposed to kill you with a scythe at 12:15 pm, and so on. You are still alive just before 12pm, you can only die through the motion of a grim reaper's scythe, and once dead you stay dead. On the face of it, this situation seems conceivable — each reaper seems conceivable individually and intrinsically, and it seems reasonable to combine distinct individuals with distinct intrinsic properties into one situation. But a little reflection reveals that the situation as described is contradictory. I cannot survive to any moment past 12pm (a grim reaper would get me first), but I cannot be killed (for grim reaper n to kill me, I must have survived grim reaper n+1, which is impossible). So the description D of the situation is prima facie positively conceivable but not ideally positively conceivable.
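To spell out the timing in symbols (my paraphrase, not Chalmers's notation): reaper n is set to swing at

  • t_n = 60/2^(n-1) minutes after noon (t_1 = 60, t_2 = 30, t_3 = 15, …)

If reaper n were the one to kill me, I must have been alive at t_n; but t_(n+1) < t_n, so reaper n+1 would have found me alive earlier and killed me first. So no reaper can be the first killer; yet since the t_n have no smallest member and their infimum is 0, I also cannot survive to any moment past noon.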

Comment by Alejandro1 on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-26T21:37:33.779Z · LW · GW

Lately it seems that at least 50% of the Slate Star Codex open threads are filled by Trump/Clinton discussions, so I'm willing to bet that the debate will be covered there as well.

Comment by Alejandro1 on Open thread, Nov. 02 - Nov. 08, 2015 · 2015-11-03T03:02:05.502Z · LW · GW

That is from xkcd.

Comment by Alejandro1 on There is no such thing as strength: a parody · 2015-07-08T14:41:56.455Z · LW · GW

I guess one is Eugine/Azathoth/VoiceOfRa

I suddenly had the same suspicion about VoR today, in a spontaneous way; has there been previous discussion of this conjecture that I missed?

Comment by Alejandro1 on There is no such thing as strength: a parody · 2015-07-07T09:11:13.642Z · LW · GW

It is true that normally, taking people at their word is charitable. But if someone says that a concept is meaningless (when discussing it in a theoretical fashion), and then proceeds to use it informally in ordinary conversation (as I conjectured that most people do with race and intelligence), then we cannot take them literally at their word. I think that something like my interpretation is the most charitable in this case.

Comment by Alejandro1 on There is no such thing as strength: a parody · 2015-07-06T21:12:33.038Z · LW · GW

When people say things like "intelligence doesn't exist" or "race doesn't exist", charitably, they don't mean that the folk concepts of "intelligence" or "race" are utterly meaningless. I'd bet they still use the words, or synonyms for them, in informal contexts, analogously to how we informally use "strength". (E.g. "He's very smart"; "They are an interracial couple"; "She's stronger than she looks".) What they object to is treating them as scientifically precise concepts that denote intrinsic, context-independent characteristics. I agree with gjm that your parody arguments against "strength" seem at least superficially plausible if read in the same way that the opponents of "race" and "intelligence" intend theirs.

Comment by Alejandro1 on [FINAL CHAPTER] Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 122 · 2015-03-14T18:23:44.080Z · LW · GW

Called it on the power the Dark Lord knew not.

Comment by Alejandro1 on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-05T16:45:32.528Z · LW · GW

I think that the trial and error model is implausible; in which "time" are these trials and iterations occurring? The global determination of the whole universe seems much simpler.

I don't think it necessarily conflicts with free will, when free will is understood in a compatibilist way (which is how EY and most LWers understand it). If we agree that one can have free will in a completely deterministic universe with ordinary past-to-future causal chains, then why can't one have it in a universe where some of the chains run future-to-past?

Comment by Alejandro1 on Open thread, Jan. 12 - Jan. 18, 2015 · 2015-03-02T00:07:24.804Z · LW · GW

He actually said it beforehand in LW as well. Link.

Comment by Alejandro1 on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 109 · 2015-02-23T21:50:57.766Z · LW · GW

In all details, certainly not; Dumbledore's CEV might well include reuniting with his family, which won't be a part of others' CEV.

In broad things like ethics and politics, it is hoped that different people's CEVs aren't too far apart (thanks to human values originating in our distant evolutionary history, which is shared by all present-day humans) but there is no proof, and many would dispute it. At least that is my understanding.

Comment by Alejandro1 on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 108 · 2015-02-21T02:58:38.467Z · LW · GW

"I ask my first question," Harry said. "What really happened on the night of October 31st, 1981?" Why was that night different from all other nights... "I would like the entire story, please."

Comment by Alejandro1 on Open thread, Feb. 16 - Feb. 22, 2015 · 2015-02-16T16:50:09.565Z · LW · GW

I've had an experience a couple of times that feels like being stuck in a loop of circular preferences.

It goes like this. Say I have set myself the goal of doing some work before lunch. Noon arrives, and I haven't done any work--let's say I'm reading blogs instead. I start feeling hungry. I have an impulse to close the blogs and go get some lunch. Then I think I don't want to "concede defeat", and that I'd better do at least some work before lunch, to feel better about myself. I briefly open my work, and then… close it and reopen the blogs. The cycle restarts. So Lunch > Blogs, Work > Lunch, and Blogs > Work.
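As a toy sketch (the labels are invented, and the hard-coded cyclic preferences are of course the whole problem), the rule "switch to whatever beats the current activity" never settles:

    # Cyclic pairwise preferences: lunch > blogs, work > lunch, blogs > work.
    # "Switch to whatever beats the current activity" never stabilizes.
    beats = {"blogs": "lunch", "lunch": "work", "work": "blogs"}

    activity = "blogs"
    for step in range(7):           # bounded here; the real loop never ends
        print(step, activity)
        activity = beats[activity]  # move to the preferred alternative

It just cycles blogs → lunch → work → blogs forever.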

(It usually ends with me doing some trivial amount of work--writing a few lines for a paper, sending an email, etc--and then going for lunch with an only half-guilty conscience.)

Has anybody else experienced circular-like preferences, whether procrastination-related like these or in a different context?

Comment by Alejandro1 on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-16T02:53:06.113Z · LW · GW

I understood it to be implied that the message was actually set in advance to mislead Harry into believing time travel was involved.

Comment by Alejandro1 on Stupid Questions February 2015 · 2015-02-03T16:59:21.921Z · LW · GW

I'd be curious where the factor of 2 comes from in the Newtonian approximation.

I can take a stab at explaining this. Both the Poisson equation and the Einstein equation have the general form

  • (2nd-order differential operator acting on some quantity F) = (constant) × (matter source)

In the Newtonian case, F is the gravitational potential. In the Einstein case, it is the spacetime metric. This is a quantity with a simple, natural, purely "mathematical" definition that you cannot play with by redefining constants; it measures the distance between events on a four-dimensional curved spacetime. "Matter source" in the Poisson equation stands for mass density, and in the Einstein equation it stands for a more complicated entity that reduces to exactly mass density in the limit where Newtonian physics holds. So the ratio of the constants in each equation is determined by how "spacetime metric" and "gravitational potential" are related in the Newtonian limit of GR.

In Newtonian physics, the gravitational potential is the quantity whose first derivatives give the acceleration of a test particle:

  • gradient of potential = acceleration of particles

This is a physics law combining both Newton's law of gravity and Newton's second law of motion. In GR, the spacetime metric also has the (purely mathematical) property that (in the limit where velocities are much smaller than the speed of light, and departures from flat space are small) its gradient is proportional, with a factor of 2, to the acceleration of geodesic (extremal length) trajectories in spacetime:

  • gradient of metric = 2 × (acceleration of geodesics)

So if we make the physical assumption that test particles in a gravitational field follow geodesics, then we can recover Newtonian gravity from GR. (The whole reason why this is possible is the equivalence principle, the observation that all forms of matter respond to gravity in the same way.) Since small perturbations to a flat metric have to be identified with twice the Newtonian potential, this is where the extra 2 in the Einstein equation comes from.
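Schematically, in my shorthand (c = 1, signs and index conventions glossed over):

  • Poisson equation: ∇²Φ = 4πG ρ
  • Weak-field metric: g_00 ≈ -(1 + 2Φ)
  • Einstein equation: G_μν = 8πG T_μν

In the static weak-field limit the 00 component of the Einstein tensor reduces to 2∇²Φ and T_00 reduces to ρ, so the Einstein equation becomes 2∇²Φ = 8πGρ--exactly the Poisson equation, once the 2 from the metric cancels against half of the 8π.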

Comment by Alejandro1 on Stupid Questions February 2015 · 2015-02-03T02:48:30.131Z · LW · GW

The formula is calculating the gravitational flux on the surface of a 3-dimensional sphere, and 3-dimensional spheres have a surface area 4π times the square of their radii.

Saying that this is what the formula intrinsically does amounts to saying that field lines are more fundamental/"real" than action-at-a-distance forces on point particles. But in the context of purely Newtonian gravity, both formulations are in fact completely equivalent. (And if you appeal to relativity to justify considering fields more fundamental, then why not go further and simplify Einstein's equation by including the 8π in G?)
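For concreteness, the flux computation being invoked goes (in my notation):

  • field of a point mass at distance r: |g| = GM/r²
  • flux through a sphere of radius r: |g| × (surface area) = (GM/r²) × (4πr²) = 4πGM

So the 4π is natural if you take "flux through a closed surface = 4πG × enclosed mass" (the integral form of the Poisson equation) as the fundamental statement, and an artifact if you take the force law as fundamental.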

Comment by Alejandro1 on Stupid Questions February 2015 · 2015-02-02T18:12:22.099Z · LW · GW

The current definition of the gravitational constant maximizes the simplicity of Newton's law F = Gmm'/r^2. Adding a 4π to its definition would maximize the simplicity of the Poisson equation that Metus wrote; adding an 8π instead would maximize the simplicity of Einstein's field equations. No matter what you do, some equation will look a bit more complicated.
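To make the trade-off explicit, define G' = 4πG and G'' = 8πG (my labels); the same physics then reads:

  • Newton: F = Gmm'/r²  or  F = G'mm'/(4πr²)  or  F = G''mm'/(8πr²)
  • Poisson: ∇²Φ = 4πGρ  or  ∇²Φ = G'ρ  or  ∇²Φ = (G''/2)ρ
  • Einstein: G_μν = 8πG T_μν  or  G_μν = 2G' T_μν  or  G_μν = G'' T_μν

Each column makes exactly one of the three equations maximally clean.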

Comment by Alejandro1 on Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time" · 2015-01-28T20:47:51.275Z · LW · GW

Here the question is put to Gates again, in a Reddit AMA. He answers:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

Edit: Ninja'd by Kawoomba.

Comment by Alejandro1 on What topics are appropriate for LessWrong? · 2015-01-13T07:22:33.628Z · LW · GW

My understanding of the use of "mindkilled" is that people who can be so described are incapable of discussing the relevant issue dispassionately, acquiring an us-vs-them tribal mentality and seeing arguments just as soldiers for their side. I really don't think that this applies to the topic of abortion on LW, which can be discussed dispassionately (much more so than in other places, at least). This is quite compatible with the possibility that the LW consensus is biased and wrong, which is what you are suggesting.

Comment by Alejandro1 on What topics are appropriate for LessWrong? · 2015-01-12T22:45:01.915Z · LW · GW

Abortion is a strongly mindkilling topic for society in general, but it is not one for Less Wrong. According to Yvain's survey data, on a 5-point scale the responses on abortion average 4.38 ± 1.032, which indicates a rather strong consensus accepting it. As a contrast, the results for Social Justice are 3.15 ± 1.385. This matches my intuitive sense that discussions of social justice on LW are much more mindkilling than discussions of abortion.

Comment by Alejandro1 on Open thread, Dec. 29, 2014 - Jan 04, 2015 · 2015-01-02T14:36:23.028Z · LW · GW

I answered "not at all", even though I was for some years very shy, anxious and fearful about asking girls out, because I never felt anything like the specific fears both Scotts wrote about, of being labelled a creep, sexist, gross, an objectifier, etc. It was just "ordinary" shyness and social awkwardness, not related at all to the tangled issues about feminism and gender relations that the current discussion centers on. I interpreted the question as being interested specifically in the intersection of shyness with these issues; otherwise I might have answered "sort of".

Comment by Alejandro1 on PSA: Eugine_Nier evading ban? · 2014-12-08T00:18:28.804Z · LW · GW

You are the fourth or fifth person who has reached the same suspicion, as far as I know, independently. Which of course is moderate additional Bayesian evidence for its truth (at the very least, it means you are seeing an objective pattern, even if it turns out to be coincidental, instead of being paranoid or deluded).

Comment by Alejandro1 on Integral versus differential ethics · 2014-12-02T16:31:30.201Z · LW · GW

I think that violates the spirit of the thought experiment. The point of the dust speck is that it is a fleeting, momentary discomfort with no consequences beyond itself. So if you multiply the choice by a billion, I would say that the billion dust specks should be distributed in a way that they don't pile up and "completely shred one person"--e.g., each person gets one dust speck per week. This doesn't help solve the dilemma, at least for me.

Comment by Alejandro1 on Integral versus differential ethics · 2014-12-01T20:44:59.019Z · LW · GW

The "clearly" is not at all clear to me, could you explain?

Comment by Alejandro1 on Integral versus differential ethics · 2014-12-01T18:51:18.282Z · LW · GW

Another dilemma where the same dichotomy applies is torture vs. dust specks. One might reason that torturing one person for 50 years is better than torturing 100 people infinitesimally less painfully for 50 years minus one second, and that this is better than torturing 10,000 people very slightly less painfully for 50 years minus two seconds… and at the end of this process accept the unintuitive conclusion that torturing someone for 50 years is better than having a huge number of people suffer a tiny pain for a second (differential thinking). Or one might refuse to accept the conclusion and decide that one of these apparently unproblematic differential comparisons is in fact wrong (integral thinking).
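To make the scale of the chain concrete, here is a toy version with invented numbers (nothing here is a claim about actual moral weights; it just shows the geometry of the argument):

    # Each step: marginally milder harm, inflicted on 100x as many people.
    # Pairwise, every single step can look like an improvement.
    severity = 1.0       # 1.0 ~ "50 years of torture" on an arbitrary scale
    people_log10 = 0     # head count tracked as a power of ten
    steps = 0
    while severity > 1e-9:    # stop near "dust speck" level on this scale
        severity *= 0.999     # slightly milder...
        people_log10 += 2     # ...for 100x as many people
        steps += 1
    print(steps, severity, people_log10)  # ~21000 steps, ~10^41000 people

After a few tens of thousands of individually innocuous-looking steps, the per-person harm is negligible but the head count is astronomical.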

Comment by Alejandro1 on Musk on AGI Timeframes · 2014-11-17T02:56:04.615Z · LW · GW

The general public's exposure to the concept of AI risk probably increased enormously a few days ago, when Stephen Colbert mentioned Musk's warnings and satirized them. (Unrelatedly, but also of potential interest to some LWers, Terry Tao was the guest of the evening.)

Comment by Alejandro1 on Open thread, Nov. 10 - Nov. 16, 2014 · 2014-11-10T22:39:47.760Z · LW · GW

You could have a question about the scientific consensus on whether abortion can cause breast cancer (to catch biased pro-lifers). For bias on the other side, perhaps there is some human characteristic the fetus develops earlier than the average uninformed pro-choicer would guess? There seems to be no consensus on fetal pain, but maybe there is some uncontroversial-yet-surprising fact about nervous system development? I couldn't find anything too surprising on a quick Wiki read, but maybe there is something.

Comment by Alejandro1 on 2014 Less Wrong Census/Survey · 2014-11-06T05:06:49.112Z · LW · GW

Took the survey. As usual, immense props to Yvain for the dedication and work he puts into this.

Comment by Alejandro1 on What are the most common and important trade-offs that decision makers face? · 2014-11-03T05:29:46.908Z · LW · GW

Type I vs. Type II errors?

Comment by Alejandro1 on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-11T18:07:56.364Z · LW · GW

If Alice was born in January and Bob was born in December, she will be 11 months older than him when they start going to school (and their classmates will be on average 5.5 months younger than her and 5.5 months older than him), which I hear can make a difference.

I think this way of sorting classes by calendar year of birth might also be shifted six months between hemispheres (or perhaps vary with country in more capricious ways). IIRC, in Argentina my classes had people born from one July to the following June, not from one January to the following December.

Comment by Alejandro1 on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-11T14:10:29.857Z · LW · GW

Is the "Birth Month" bonus question just to sort people arbitrarily into groups to do statistics, or to find correlations between birth month and other traits? If the latter, since the causal mechanism is almost certainly seasonal weather, the question should ask directly for seasonal weather at birth to avoid South Hemisphere confounders.

Comment by Alejandro1 on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-11T13:00:20.063Z · LW · GW

The question about "Country" should clarify whether you are asking about nationality or residence.

Comment by Alejandro1 on Open thread, Sept. 29 - Oct.5, 2014 · 2014-09-29T13:39:48.122Z · LW · GW

Philosopher Richard Chappell gives a positive review of Superintelligence.

An interesting point made by Brandon in the comments (the following quote combines two different comments):

I think there's a pretty straightforward argument for taking this kind of discussion seriously, on general grounds independent of one's particular assessment of the possibility of AI itself. The issues discussed by Bostrom tend to be limit-case versions of issues that arise in forming institutions, especially ones that serve a wide range of purposes. Most of the things Bostrom discusses, on both the risk and the prevention side, have lower-level, less efficient analogues in institution-building.

A lot of the problems -- perverse instantiation and principal agent problems, for instance -- are standard issues in law and constitutional theory, and a lot of constitutional theory is concerned with addressing them. In checks and balances, for instance, we are 'stunting' and 'tripwiring' different institutions to make them work less efficiently in matters where we foresee serious risks. Enumeration of powers is an attempt to control a government by direct specification, and political theories going back to Plato that insist on the importance of education are using domesticity and indirect normativity. (Plato's actually very interesting in this respect, because the whole point of Plato's Republic is that the constitution of the city is deliberately set up to mirror the constitution of a human person, so in a sense Plato's republic functions like a weird artificial intelligence.)

The major differences arise, I think, from two sources: (1) With almost all institutions, we are dealing with less-than-existential risks. If government fails, that's bad, but it's short of wiping out all of humanity. (2) The artificial character of an AI introduces some quirks -- e.g., there are fewer complications in setting out to hardwire AIs with various things than trying to do it with human beings and institutions. But both of these mean that a lot of Bostrom's work on this point can be seen as looking at the kind of problems and strategies involved in institutions, in a sort of pure case where usual limits don't apply.

I had never thought of it from this point of view. Might it benefit AI theorists to learn political science?

Comment by Alejandro1 on CEV-tropes · 2014-09-22T20:05:26.104Z · LW · GW

0) CEV doesn't exist even for a single individual, because human preferences are too unstable and contingent on random factors for the extrapolation process to give a definite answer.

Comment by Alejandro1 on Open thread, September 15-21, 2014 · 2014-09-18T15:45:37.875Z · LW · GW

The American Conservative is definitely socially conservative and, if not exactly fiscally liberal, at least much more sympathetic to economic redistribution than mainstream conservatism. But it is composed more of opinion pieces than of news reports, so I don't know if it works for what you want.

As others suggested, Vox could be a good choice for a left-leaning news source. It has decent summaries of "everything you need to know about X" (where X = many current news stories).

Comment by Alejandro1 on What are your contrarian views? · 2014-09-17T13:52:40.990Z · LW · GW

But "Would you pay a penny to avoid scenario X?" in no way means "Would you sacrifice a utilon to avoid scenario X?" (the latter is meaningless, since utilons are abstractions subject to arbitrary rescaling). The meaningful rephrasing of the penny question in terms of utilons is "Ceteris paribus, would you get more utilons if X happens, or if you lose a penny and X doesn't happen?" (which is just roundabout way of asking which you prefer). And this is unobjectionable as a way of testing whether you have really a preference and getting a vague handle on how strong it is.

I would prefer if people avoided the word "utilon" altogether (and also "utility" outside of formal decision theory contexts) because there is an inevitable tendency to reify these terms and start using them in meaningless ways. But again, nothing special about money here.

Comment by Alejandro1 on What are your contrarian views? · 2014-09-17T11:20:57.707Z · LW · GW

Right; assuming (falsely, of course) that humans have coherent preferences satisfying the VNM axioms, what can be measured in utilons are not "amounts of dollars" in the abstract, but "amounts of dollars obtained in such-and-such a way in such-and-such a situation". But I wouldn't call this "not being meaningfully comparable". And there is nothing special about dollars here; any other object, event or experience is subject to the same.

Comment by Alejandro1 on What are your contrarian views? · 2014-09-17T11:14:41.144Z · LW · GW

Utilons do not exist. They are abstractions defined out of idealized, coherent preferences. To the extent that they are meaningful, though, their whole point is that anything one might have a preference over can be quantified in utilons--including dollars.

Comment by Alejandro1 on What are your contrarian views? · 2014-09-15T19:41:56.685Z · LW · GW

If the rotating pie is one that, when not rotating, had the same radius as the other, then when it rotates it has a slightly larger radius (and circumference) because of centrifugal forces. This effect completely dominates any relativistic one.
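Rough numbers, with made-up pie parameters, to show how lopsided the comparison is:

    import math

    # Made-up pie parameters, just for orders of magnitude.
    R = 0.15                     # pie radius, meters
    omega = 2 * math.pi * 10     # spinning at 10 revolutions per second
    c = 3.0e8                    # speed of light, m/s

    v = omega * R                          # rim speed: ~9.4 m/s
    contraction = 0.5 * (v / c) ** 2 * R   # ~7e-17 m: far below atomic scale
    print(v, contraction)

Any elastic stretch from the centrifugal load will be at least micrometers--more than ten orders of magnitude larger than the relativistic contraction.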

Comment by Alejandro1 on What are your contrarian views? · 2014-09-15T17:40:34.740Z · LW · GW

Why is it inconsistent?

Comment by Alejandro1 on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-04T17:25:42.221Z · LW · GW

I am really torn between wanting to downvote this as having no place in LW and going against the politics-talk-taboo, and wanting to upvote it for being a clear, fair and to the point summary of ideological differences I find fascinating.

Comment by Alejandro1 on Rationality Quotes September 2014 · 2014-09-01T19:10:29.303Z · LW · GW

I’m always fascinated by the number of people who proudly build columns, tweets, blog posts or Facebook posts around the same core statement: “I don’t understand how anyone could (oppose legal abortion/support a carbon tax/sympathize with the Palestinians over the Israelis/want to privatize Social Security/insert your pet issue here)." It’s such an interesting statement, because it has three layers of meaning.

The first layer is the literal meaning of the words: I lack the knowledge and understanding to figure this out. But the second, intended meaning is the opposite: I am such a superior moral being that I cannot even imagine the cognitive errors or moral turpitude that could lead someone to such obviously wrong conclusions. And yet, the third, true meaning is actually more like the first: I lack the empathy, moral imagination or analytical skills to attempt even a basic understanding of the people who disagree with me.

In short, “I’m stupid.” Something that few people would ever post so starkly on their Facebook feeds.

--Megan McArdle

Comment by Alejandro1 on Why we should err in both directions · 2014-08-21T15:17:56.521Z · LW · GW

The example of the three locks brings to mind another possible failure of this principle: that it can be exploited by deliberately giving us additional choices. For example, perhaps in this example the cheap lock is perfectly adequate for our needs, but seeing the existence of an expensive lock makes us believe that the regular one is the one that has equal chance of erring in both directions. I believe I read (in LW? or in Marginal Revolution?) that restaurant menus and sales catalogs often include some outrageously priced items to induce customers to buy the second-tier priced items, which look reasonable in comparison, but are the ones where most profit is made. Attempts to shift the Overton Window in politics rely on the same principle.

Comment by Alejandro1 on The rational way to name rivers · 2014-08-06T16:29:54.405Z · LW · GW

On the other hand, it is kind of awesome that people with no knowledge of Esperanto but knowledge of two or three European languages can immediately understand everything you say--as I just did.

Comment by Alejandro1 on Open thread, August 4 - 10, 2014 · 2014-08-04T14:32:41.107Z · LW · GW

I doubt it is possible to find non-controversial examples of anything, and especially of things plausible enough to be believed by intelligent non-experts, outside of the hard sciences.

If this is true, the only plausible examples would be such as "an infinity cannot be larger than another infinity", "time flows uniformly regardless of the observer", "biological species have unchanging essences", and other intuitively plausible statements unquestionably contradicted by modern hard sciences.

Comment by Alejandro1 on August 2014 Media Thread · 2014-08-02T16:56:37.864Z · LW · GW

Turnabout Confusion is a Daria fanfic that portrays Lawndale High as being as full of Machiavellian plotters as HPMOR!Hogwarts. Each student is keenly aware of their place in the popularity food chain, and most are constantly scheming to advance in it. When Daria and Quinn exchange roles for a few days on a spontaneous bet, they unwittingly set off a chain reaction of plots and counterplots, leading to a massive Gambit Pileup that could completely overturn the whole social order of the school…

Part One: We All Fall Down.

Part Two: All the King's Horses.

Comment by Alejandro1 on Politics is hard mode · 2014-07-24T13:01:26.426Z · LW · GW

One thing that doesn't quite fit is this: If you are the weaker side, how is it possible that you come and bully me, and expect me to immediately give up? This doesn't seem like a typical behavior of weaker people surrounded by stronger people. (Possible explanation: This side is locally strong here, for some definition of "here", but the enemy side is stronger globally.)

Another explanation could be that the side is dominant at one form of battle (moralizing) but weak at another kind (economic power, prestige, literal battle), and wishes to play to its strengths.

See also Yvain on social vs. structural power.

Comment by Alejandro1 on Jokes Thread · 2014-07-24T10:54:02.822Z · LW · GW

More succinctly: I am rational, you are biased, they are mind-killed.

Comment by Alejandro1 on Jokes Thread · 2014-07-24T09:40:55.137Z · LW · GW

"However, yields an even better joke (due to an extra meta level) when preceded by its quotation and a comma", however, yields an even better joke (due to an extra meta level) when preceded by its quotation and a comma.

Comment by Alejandro1 on Forecasting rare events · 2014-07-12T15:05:28.670Z · LW · GW

Another type of rare event, not as important as the ones you discuss, but with a large community devoted to forecasting its odds: major upsets in sports, like the recent Germany-Brazil 7-1 blowout in the World Cup. Here is a 538 article discussing it and comparing it to other comparable upsets in other sports.

Comment by Alejandro1 on Consider giving an explanation for your deletion this time around. "Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes Colons and Ellipses, Littérateurs Go Wild" · 2014-07-08T15:31:21.605Z · LW · GW

The mention of the Sokal paper reminds me that Derrida (who is frequently associated with the po-mo authors parodied by Sokal, although IIRC he was not targeted directly) was basically a troll, making fun of conventional academic philosophy in a similar way to how Will makes fun of conventional LW thought. I wonder if Will has read Derrida…?