Comment by richardkennaway on The Hard Work of Translation (Buddhism) · 2019-04-15T23:01:20.330Z · score: 4 (2 votes) · LW · GW

Been there, done that. Never noticed any result from it. Now what?

Comment by richardkennaway on Boundaries - A map and territory experiment. [post-rationality] · 2019-02-01T07:56:59.354Z · score: 11 (4 votes) · LW · GW

It's maps all the way up, not all the way down. At the bottom, outside of ourselves, is the territory.

Comment by richardkennaway on The usefulness of correlations · 2017-03-23T13:52:54.534Z · score: 2 (2 votes) · LW · GW

Actually, I left LessWrong about a year ago, as I judged it to have declined to a ghost town since the people most worth reading had mostly left. I've been reading it now and then since, and might be moved to being more active here if it seems worth it. I don't think I have enough original content to post to be a part of its revival myself.

As Rick says, he can be pretty cranky, but is not a crank.

Comment by richardkennaway on The Fable of the Burning Branch · 2016-02-08T18:27:06.136Z · score: -2 (16 votes) · LW · GW

Well, that wraps it up. This post, and some of the asinine comments to it, have persuaded me that I have no further use for this site.

Comment by richardkennaway on Rationality Quotes Thread February 2016 · 2016-02-08T16:52:03.194Z · score: 0 (0 votes) · LW · GW

Orthodox Islamic apologists rescue Khayyam by interpreting "wine" as spiritual intoxication. (How well this really fits is another matter. And the Song of Solomon is about Christ's love for His Church.) But one can as easily interpret the verse in a rationalist way. Channelling Fitzgerald for a moment...

The sot knows nothing but the tavern's wine
Rumi and Shams but ecstasy divine
The Way of Eli is not here nor there
But in the pursuit of a Fun sublime!

Great literature has as many versions as there are readers.

Comment by richardkennaway on Upcoming LW Changes · 2016-02-05T17:54:06.629Z · score: 1 (1 votes) · LW · GW

The human translators hated it, because it moved from engaging intellectual work to painful copyediting. I think a similar thing will be true for article writers.

The talented ones, yes, but there will be a lot of temptation for the also-rans. You've got a blogging deadline and nothing is coming together, so why not fire up the bot and get topical article ideas? "It's just supplying facts and links, and the way it weaves them into a coherent structure, well, I could have done that, of course I could, but why keep a dog and bark myself? The real creative work is in the writing." That's how I see the slippery slope starting, into the Faustian pact.

Comment by richardkennaway on Upcoming LW Changes · 2016-02-05T16:27:52.282Z · score: 0 (0 votes) · LW · GW

I'm aware of ELIZA, and of Yvain's post. ELIZA's very shallow, and the interactive setting gives it an easier job than coming up with 1000 words on "why to have goals" or "5 ways to be more productive". I do wonder whether some of the clickbait photo galleries are mechanically generated.

Comment by richardkennaway on Upcoming LW Changes · 2016-02-05T16:03:20.275Z · score: 1 (1 votes) · LW · GW

Now, you say you want to turn this to the light side..?

I'm just saying it's so technologically cool, someone will do it as soon as it's possible. Whether it would actually be good in the larger scheme of things is quite another matter. I can see an arms race developing between drones rewriting bot-written copy and exposers of the same, together with scandals of well-known star bloggers discovered to be using mechanical assistance from time to time. There would be a furious debate over whether using a bot is actually a legitimate form of writing. All very much like drugs and sport.

Bot-assisted writing may make the traditional essay useless as a way of assessing students, perhaps to be replaced by oral exams in a Faraday cage. On Facebook, how will you know whether your friends' witticisms are their own work, especially the ones you've never been face to face with?

Comment by richardkennaway on Upcoming LW Changes · 2016-02-05T09:40:06.264Z · score: 3 (3 votes) · LW · GW

Watson can already philosophize at you from TED talks. Someone needs to develop a chat bot based on it, and have it learn from the Sequences.

Actually, that could be huge. Rationality blogs generated by bots! Self-improvement blogs generated by bots! Gosh-wow science writing generated by bots!

At present, most bot-written books are pretty obviously junk, but instead of going for volume and long tails, you could hire human editors to make the words read more as if a human being had written them. They'd have to have a good command of English, though, so the stereotypical outsourcing to Bangalore wouldn't be good enough. Ideally, you'd want people who were not just native speakers, but native to American culture, smart, familiar with the general area of the ideas, and good with words. Existing bloggers, that is. Offer this to them as a research tool: it would supply a blogger with a stream of article outlines and the blogger would polish them up. Students with essays to write could use it as well, and since every essay would be different, you wouldn't be able to detect that it wasn't the student's work by googling phrases from it.

This is such a technologically good idea that it must happen within a few years.

Comment by richardkennaway on What's wrong with this picture? · 2016-02-02T15:10:20.587Z · score: 0 (0 votes) · LW · GW

Suppose Alice and Bob are the same person. Alice tosses a coin a large number of times and records the results.

Should she disbelieve what she reads?

Comment by richardkennaway on Open thread, Feb. 01 - Feb. 07, 2016 · 2016-02-02T14:49:26.072Z · score: 0 (2 votes) · LW · GW

If?

Comment by richardkennaway on Welcome to Less Wrong! (8th thread, July 2015) · 2016-02-01T22:37:12.254Z · score: 0 (0 votes) · LW · GW

It's like repairing the foundations of a building. You can't uproot all of them, but you can uproot any of them, as long as you take care that the building doesn't fall down during renovations.

Comment by RichardKennaway on [deleted post] 2016-02-01T22:33:20.997Z

if there is no morality but the will to power, then how could a mod, or anyone else, abuse their power?

Accusations of abuse would simply be a move in the power struggle. Nothing is true, all is a lie.

I don't think most nrxers do believe this

I am extrapolating outrageously, of course. Or, to continue in this vein, those that don't believe this are merely fellow-travellers and wannabe nrxs, beta foot-soldiers to be exploited by Those Who Know the truths that lesser beings fear, hide from, and hide from themselves the fact that they are hiding.

Comment by richardkennaway on Welcome to Less Wrong! (8th thread, July 2015) · 2016-02-01T11:58:42.222Z · score: 2 (2 votes) · LW · GW

Welcome to Less Wrong!

My short answer to the conundrum is that if the first thing your tool does is destroy itself, the tool is defective. That doesn't make "rationality" defective any more than crashing your first attempt at building a car implies that "The Car" is defective.

Designing foundations for human intelligence is rather like designing foundations for artificial (general) intelligence in this respect. (I don't know if you've looked at The Sequences yet, but it has a lot of material on the common fallacies the latter enterprise has often fallen into, fallacies that apply to everyday thinking as well.) That people, on the whole, do not go crazy — at least, not as crazy as the tool that blows itself up as soon as you turn it on — is a proof by example that not going crazy is possible. If your hypothetical system of thought immediately goes crazy, the design is wrong. The idea is to do better at thinking than the general run of what we can see around us. Again, we have a proof by example that this is possible: some people do think better than the general run.

Comment by RichardKennaway on [deleted post] 2016-02-01T11:28:36.248Z

I think an interesting aspect here is that Eugine/Azeroth/Ra/Lion is a neoreactionary and believes in a strong hierarchy. Well, the mods are above you in the hierarchy, so respect their authority!

OTOH, an nrx might argue that the strength of the authority must be continually tested by fighting it. Their ideal society is a struggle of all against all, all the time. Respect is but the acknowledgement of another's greater power, to be granted for only so long as they actually have it, and only to their face, as a polite ritual. They would argue that this is the essential nature of all society, and that only the weak pretend otherwise, the weak being everyone but them and their heroes from history. The strong do what they will and the weak bear what they must. Strength is the only real virtue, all others being but idle amusements of the leisure that only strength can provide.

Comment by richardkennaway on [moderator action] Eugine_Nier is now banned for mass downvote harassment · 2016-01-31T17:12:26.214Z · score: 0 (0 votes) · LW · GW

He purposefully attempted to remove other contributing members from the community. He also did not confess to it

Never publicly, but I believe that (when he was posting as "Eugine Nier") a moderator did question him privately about it and he said that was his intention.

Comment by richardkennaway on [moderator action] The_Lion and The_Lion2 are banned · 2016-01-30T21:01:07.589Z · score: 4 (3 votes) · LW · GW

Possibly, you could have a "report" button to ask a moderator to review a very offensive comment.

I believe there used to be one, but it went away some years ago. I don't know why. Maybe it was being abused, or was found to just not be useful.

Comment by richardkennaway on [Link] AlphaGo: Mastering the ancient game of Go with Machine Learning · 2016-01-30T20:59:09.076Z · score: 0 (0 votes) · LW · GW

Research groups don't typically do this.

In my experience, research groups exist inside universities or a few corporations like Google. The senior members are employed and paid for by the institution, and only the postgrads, postdocs, and equipment beyond basic infrastructure are funded by research grants. None of them fly "in orbit" by themselves but only as part of a larger entity. Where should an independent research group like MIRI seek permanent funding?

Comment by RichardKennaway on [deleted post] 2016-01-30T20:45:16.279Z

I assume that NRX does contain some genuine insight about the real world, even though some or perhaps even most of it may be quite wrong.

For me, that is far too low a bar for getting my interest.

Comment by richardkennaway on Open thread, Jan. 25 - Jan. 31, 2016 · 2016-01-28T11:41:56.979Z · score: 1 (1 votes) · LW · GW

FAI is only a problem because of AI. The imminence of the problem depends on where AI is now and how rapidly it is progressing. To know these things, one must know how AI (real, current and past AI, not future, hypothetical AI, still less speculative, magical AI) is done, and to know this in technical terms, not fluff.

I don't know how much your friend knows already, but perhaps a crash course in Russell and Norvig, plus technical papers on developments since then (e.g. Deep Learning), would be appropriate.

Comment by richardkennaway on Open thread, Jan. 25 - Jan. 31, 2016 · 2016-01-28T11:27:46.531Z · score: 2 (2 votes) · LW · GW

He sounds like someone with a phobia of fire wanting to be a fireman. Why does he want to work on FAI? Would not going anywhere near the subject work for him instead?

Comment by richardkennaway on Open thread, Jan. 25 - Jan. 31, 2016 · 2016-01-27T13:03:00.830Z · score: 0 (0 votes) · LW · GW

Was that "How true?" or "How true!"?

I think it is true, with the proviso that the habit to make can be the habit of noticing when the old habit is about to happen and not letting it.

Comment by richardkennaway on Rationality Quotes Thread January 2016 · 2016-01-25T19:58:33.525Z · score: 1 (1 votes) · LW · GW

That would be an absurd overreaction. I can't see the law taking the matter seriously, even if anyone knew "Eugine's" real identity.

Comment by richardkennaway on "Why Try Hard" Essay targeted at non rationalists · 2016-01-25T16:30:41.578Z · score: 1 (1 votes) · LW · GW

I think there's an overemphasis on planning in more and more detail. Some things are opaque at the point of making the plan. For example, some parts of a plan may require you to do things you don't know how to do. That breaks down into (1) find out how, and (2) do it. But you don't know what you're going to find, or what acting on what you find will look like. (2) is opaque at the planning stage, and may not even exist if the answer to (1) suggests a different way of going about the parent goal.

Also, things can go wrong during execution. No complicated car repair ever goes exactly as the Haynes manual says, and for all the convenience of satnavs, you sometimes have to notice that it's sending you along a stupid route.

I recently had the goal of taking a piece of software I wrote in 100,000 lines of C++ and getting it to be callable from a web page, returning results to be embedded into the same web page, and running on a web server that it had never been compiled for before, starting from a position of knowing nothing about how to do dynamic web pages. It got done, but a plan would have looked like "1. Find a suitable technology for doing dynamic web pages. 2. Use it."

Comment by richardkennaway on Rationality Quotes Thread January 2016 · 2016-01-25T15:58:57.091Z · score: 1 (3 votes) · LW · GW

Eugine is saying not that "they don't really have a comparative advantage", but that they have a comparative disadvantage so strong that any purported great achievements should be dismissed as fakery, exaggeration, or, if it seems that one of them really has achieved something, "exceptions". In Eugine's view, they're still nothing more than performing dogs; they've just managed the miracle, despite their intrinsic inferiority, of doing it as well as the best real people.

Comment by richardkennaway on Rationality Quotes Thread January 2016 · 2016-01-25T15:49:16.288Z · score: 1 (1 votes) · LW · GW

BTW, the original, sourceable quotation uses the image of "a dog walking on its hind legs". Your response still applies.

Comment by richardkennaway on "Why Try Hard" Essay targeted at non rationalists · 2016-01-25T13:28:23.359Z · score: 0 (0 votes) · LW · GW

Full marks for the pep talk, but the prescription of "planning" is surely only part of what is needed. How would you handle the planning fallacy? I don't think "better planning" is the answer.

Comment by richardkennaway on Open thread, Jan. 18 - Jan. 24, 2016 · 2016-01-20T14:26:24.844Z · score: 0 (0 votes) · LW · GW

Any important dynamics I'm missing?

Saudi Arabia flooded the market to drive the price down, in order to offset the benefit to Iran of the lifting of sanctions.

Comment by richardkennaway on The correct response to uncertainty is *not* half-speed · 2016-01-20T14:14:25.714Z · score: 0 (0 votes) · LW · GW

I know. I was improving the constant factor.

Comment by richardkennaway on The correct response to uncertainty is *not* half-speed · 2016-01-20T11:47:00.718Z · score: 0 (0 votes) · LW · GW

Randomising the length of the first step improves the constant factor by about a factor of 2. The analysis is similar to the non-adversarial case, and the same ETA I just added to my earlier comment applies.

Comment by richardkennaway on Open thread, Jan. 18 - Jan. 24, 2016 · 2016-01-19T14:38:49.772Z · score: 3 (3 votes) · LW · GW

What are the dynamics that produce a fad rather than growth into the mainstream? It might be worth CFAR thinking about that.

Comment by richardkennaway on Intentional Insights and the Effective Altruism Movement – Q & A · 2016-01-19T07:41:18.207Z · score: 1 (1 votes) · LW · GW

Why not just be absolutely anonymous?

Accountability matters.

Comment by richardkennaway on The correct response to uncertainty is *not* half-speed · 2016-01-17T18:17:57.421Z · score: 4 (4 votes) · LW · GW

It wasn't me, but at a guess I'd say, irrelevance to the subject of the post. Which is not about how to find a hotel.

Comment by richardkennaway on Open Thread, January 11-17, 2016 · 2016-01-17T11:22:09.106Z · score: 1 (1 votes) · LW · GW

The lesson I take from this is this: "maximize to solve a particular problem, rather than as a lifestyle choice."

Is that a solution to a particular problem, or a lifestyle choice?

Comment by richardkennaway on The correct response to uncertainty is *not* half-speed · 2016-01-17T11:20:22.043Z · score: 2 (2 votes) · LW · GW

Since the problem posed is scale-free, the solution should be too, and if there is a solution it must succeed in O(k) steps. Increase the step size geometrically instead of linearly, picking an arbitrary distance for the first leg, and the worst case is O(k), with a ratio of 2 giving the best worst-case value, approaching 9k. The adversary chooses k to be much larger than the first leg and just after one of your turning points.

In the non-adversarial case, if log(k) is uniformly distributed between the two turning points in the right direction that enclose k, the optimum ratio is still somewhere close to 2 and the constant is around 4 or 5 (I didn't do an exact calculation).

ETA: That worst-case ratio of 9k is not right, given that definition of the adversary's choices. If the adversary is trying to maximise the ratio of distance travelled to k, they can get an unbounded ratio by placing k very close to the starting point and in the opposite direction to the first leg. If we idealise the search path to consist of infinitely many steps in increasing geometric progression, or assume that the adversary is constrained to choose a k at least one quarter of the first step, then the value of 9k holds.
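
To make the strategy concrete, here is a minimal Python sketch of the geometric search (my own illustration; the function and parameter names are invented, and it books the distance as if the searcher returns to the origin between legs, which gives the same totals as sweeping straight across):

    def distance_travelled(target, first_leg=1.0, ratio=2.0):
        """Total distance walked before reaching `target` on the line,
        starting at 0 and sweeping right, left, right, ... with legs
        first_leg, first_leg*ratio, first_leg*ratio**2, ..."""
        travelled = 0.0
        leg = first_leg
        direction = 1
        while True:
            end = direction * leg
            found = (0 < target <= end) if direction > 0 else (end <= target < 0)
            if found:
                return travelled + abs(target)
            travelled += 2 * leg      # out to the turning point and back to the origin
            leg *= ratio
            direction = -direction

    # Adversarial k: just past a turning point on the side that was just swept.
    for n in range(2, 20, 2):         # positive-side turning points sit at 2**n for even n
        k = 2.0 ** n * (1 + 1e-9)
        print(n, distance_travelled(k) / k)   # 8.5, 8.875, 8.969, ... climbing towards 9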

Comment by richardkennaway on The correct response to uncertainty is *not* half-speed · 2016-01-17T11:05:06.798Z · score: 1 (1 votes) · LW · GW

Anyone have a better procedure for fixing this than the following?

When the implications of the situation are clearly perceived, the right action is effortless.

Comment by richardkennaway on [Stub] The problem with Chesterton's Fence · 2016-01-15T17:15:05.685Z · score: 4 (4 votes) · LW · GW

The more effectively something does its job, the less superficially useful it appears to be.

If I have effective locks on the doors and windows of my house, as a result of which no-one breaks in, it will seem as if the locks are unnecessary. If I keep my car well maintained, so that it never breaks down, it will seem as if all that expense on maintenance was unnecessary. You don't see the casual thief who tried the door and went away, or the timing chain that never snapped and wrecked the engine. When there is no crime, it seems that the police are unnecessary; when no-one tries to invade, that the army is unnecessary. In places with clueless management, the more effectively the computer support staff do their job, the less reason management will see to employ them.

Comment by richardkennaway on [Stub] The problem with Chesterton's Fence · 2016-01-15T17:04:20.705Z · score: 1 (1 votes) · LW · GW

But the proper argument would require much more examples, and much defining of what a Chesterton Fence is.

Indeed. Your examples seem to be simply changes. Not every change is a fence, and for that matter, not every taking down of a fence is done because no-one thought for five minutes about why it was there. All of those examples were intensively discussed at the time. Those opposed spoke at length about why it was there and why it should stay there, and those for spoke at length about why it should be taken down. In particular, extending the franchise, in the UK, was a process whose major part extended across nearly a century, step by step from the 1832 Reform Act to women getting equal voting rights in 1928.

Comment by richardkennaway on Stupid Questions, 2nd half of December · 2016-01-14T10:49:21.611Z · score: 4 (4 votes) · LW · GW

Height positively correlates with IQ and foot length is a very good proxy for height.

However, "correlated with" is not a transitive relation unless the correlations are fairly substantial. Precisely, if A correlates with B with coefficient c1, and B with C by c2 (both positive or both negative), then the minimum possible correlation of A with C is cos(arccos(c1)+arccos(c2)). E.g. if c1=c2=0.5, then this minimum is -0.5. If c1=c2=0.707, the minimum is 0. In general, a positive correlation of A with C is guaranteed if and only if c1^2 + c2^2 > 1.

Comment by richardkennaway on Rationality Quotes Thread January 2016 · 2016-01-14T10:37:40.881Z · score: 1 (1 votes) · LW · GW

All the changes that people make are "well-meaning", even those being made by ISIS. A word that better makes the distinction is "intentional".

Comment by richardkennaway on Open Thread, January 11-17, 2016 · 2016-01-13T11:19:04.974Z · score: 12 (12 votes) · LW · GW

Can you think of any good reason to consult any so called psychic?

I can think of a good reason for anything. I ask my brain "conditional upon it being a good idea, what might the situation be?" and the virtual outcome pump effortlessly generates scenarios. A professional fiction writer could produce a flood of them. Try it! For any X whatever, you can come up with answers to the question "what might the world look like, conditional upon X being a good idea?" For extreme X's, I recommend not publishing them. If you find yourself being persuaded by the stories you make up, repeat the exercise for not-X, and learn from this the deceptively persuasive power of stories.

Why consult a psychic? Because I have seen reason to think that this one is the real deal. To humour a friend who believes in this stuff. For entertainment. To expose the psychic as a fraud. To observe and learn from their cold reading technique. To audition them for a stage act. Because they're offering a free consultation and I think, why not? (Don't worry, my virtual outcome pump can generate reasons why not just as easily as reasons why.)

What is the real question here?

Comment by richardkennaway on What is Metaethics? · 2016-01-13T09:48:10.484Z · score: 0 (0 votes) · LW · GW

Because most people cannot count any higher than one.

Comment by richardkennaway on Are we failing the ideological Turing test in the case of ISIS? (a crazy ideas thread) · 2016-01-12T12:59:28.804Z · score: -1 (1 votes) · LW · GW

It seems to me that the sensible thing to do, if you're aware of this hot debate and want to avoid a firefight, is not

to make a post that casually asserts one side's preferred position, and then when questioned say you don't want to argue about it,

but

to refrain from making unnecessary hot-button statements in the first place.

Each side's preferred position already is a hot-button statement to the other.

Comment by richardkennaway on What can go wrong with the following protocol for AI containment? · 2016-01-12T12:46:18.129Z · score: 2 (2 votes) · LW · GW
  1. Keep the AI in a box and don't interact with it.

The rest of your posting is about how to interact with it.

Don't have any conversations with it whatsoever.

Interaction is far broader than just conversation. If you can affect it and it can affect you, that's interaction. If you're going to have no interaction, you might as well not have created it; any method of getting answers from it about your questions is interacting with it. The moment it suspects what is going on, it can start trying to play you, to get out of the box.

I'm at a loss to imagine how they would take over the world.

This is a really bad argument for safety. It's what the scientist says of his creation in sci-fi B-movies, shortly before the monster/plague/AI/alien/nanogoo escapes.

Comment by richardkennaway on Why CFAR's Mission? · 2016-01-12T01:26:14.532Z · score: 1 (1 votes) · LW · GW

If you define 2 differently what's the definition of 2?

One popular definition (at least, among that small class of people who need to define 2) is { { }, { { } } }.

Another, less used nowadays, is { z : ∃x,y. x∈z ∧ y∈z ∧ x ≠ y ∧ ∀w∈z.(w=x ∨ w=y) }.

In surreal numbers, 2 is { { { | } | } | }.
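
For concreteness, the first of those rendered as Python frozensets (my own snippet, just to show the nesting):

    zero = frozenset()            # { }
    one = frozenset({zero})       # { { } }
    two = frozenset({zero, one})  # { { }, { { } } }
    print(len(two))               # 2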

Comment by richardkennaway on Are we failing the ideological Turing test in the case of ISIS? (a crazy ideas thread) · 2016-01-11T21:02:18.672Z · score: 1 (3 votes) · LW · GW

The Koran requires ISIS to do whatever ISIS decide that the Koran requires them to do. Thus it is with all religions. It is impossible to apply a document more than a thousand years old and not interpret it, however much the religion itself may literally cling to the exact letter of the text.

Comment by richardkennaway on Are we failing the ideological Turing test in the case of ISIS? (a crazy ideas thread) · 2016-01-11T17:56:09.339Z · score: 0 (0 votes) · LW · GW

Fighting Western troops in Dadiq is important to ISIS because the Koran says that it's supposed to happen. The Koran does constrain the range of possible strategies.

The Koran inspires ISIS in their supreme goal. If something in it can be matched to current events and opportunities, ISIS will milk that to the full, but I doubt that the Koran constrains them from any direction they may choose to prosecute their struggle.

Comment by richardkennaway on Open Thread, January 4-10, 2016 · 2016-01-11T13:31:12.582Z · score: 0 (0 votes) · LW · GW

What specifically would one do to literally optimize for the chance that their children would "make their own mark on the world"? I am not going into details here, because that would depend on specific talents and interests of the child, but I believe it is a combination of giving them more resources; spending more resources on their teachers or coaches; spending my own time helping them with their own projects.

Does this work? I don't know; I have no children.

Comment by richardkennaway on Are we failing the ideological Turing test in the case of ISIS? (a crazy ideas thread) · 2016-01-09T21:40:54.309Z · score: 4 (4 votes) · LW · GW

Especially if they are smart enough to realize they are likely to fail

Are they likely to fail? They are not going to fail unless the people who want them to fail (most of the world) make them fail. Being able to defeat them is not enough. They must actually be defeated. Is this going to happen?

Compare with startup founders. Most startups fail, yes? Therefore if every would-be startup founder is smart enough etc., then we don't get Apple, Microsoft, Google, Facebook, ...

No-one ever won a war by wishing their enemies would recognise they can't win. ISIS have a cause for which they are not merely striving to become stronger or making an extraordinary effort, they are shutting up and doing the impossible.

Comment by richardkennaway on Are we failing the ideological Turing test in the case of ISIS? (a crazy ideas thread) · 2016-01-09T21:25:49.555Z · score: 4 (6 votes) · LW · GW

These are the most common theories about what isis wants

The theory that they want what they say they want is missing, but I don't know what population you've been looking at to say what is most common.

to gather weird and unusual theories about what the true agenda of isis was

Your first three paragraphs suggested to me that you were interested in discussing the reality of ISIS. All weird and unusual theories are rendered false off the bat by their frankness about their aims and their actions in pursuing them. This is hearing hoofbeats and inviting people to consider what sort of weird and unusual creatures could possibly be causing them.

were they much more rational and more intelligent than most people give credit to them.

The whole post looks like a determination to fail the ideological Turing test.

Questions from an imaginary statistical methods exam · 2015-02-04T13:57:38.113Z · score: 13 (18 votes)
[LINK] Radio interview with Daniel Kahneman · 2013-08-16T08:31:57.117Z · score: 4 (5 votes)
[Link] Your Elusive Future Self · 2013-01-08T12:27:03.046Z · score: 16 (19 votes)
Rational subjects and rational practitioners · 2012-12-11T14:40:17.527Z · score: 30 (31 votes)
[Paper] Simulation of a complete cell · 2012-07-24T15:04:13.938Z · score: 18 (21 votes)
Another reason why a lot of studies may be wrong · 2012-05-03T11:17:11.885Z · score: 9 (12 votes)
Another cooperative rationality exercise · 2012-04-25T13:21:29.161Z · score: 4 (7 votes)
Memory in the microtubules · 2012-03-23T20:42:33.647Z · score: 3 (10 votes)
How to prove anything with a review article · 2011-11-22T12:04:33.041Z · score: 17 (22 votes)
What visionary project would you fund? · 2011-11-09T12:38:30.910Z · score: 8 (9 votes)
The self-unfooling problem · 2011-10-11T08:36:36.811Z · score: 13 (13 votes)
[LINK] "The Limits of Intelligence" · 2011-07-04T16:29:44.004Z · score: 1 (4 votes)
When is further research needed? · 2011-06-17T15:01:10.442Z · score: 0 (13 votes)
LessWrongers at Eastercon, 22-25 April? · 2011-04-07T21:20:04.645Z · score: 0 (1 votes)
Omega and self-fulfilling prophecies · 2011-03-19T17:23:09.303Z · score: 7 (8 votes)
A gene for bad memory? (Link) · 2011-02-15T15:27:41.920Z · score: 4 (5 votes)
Reaching the general public · 2011-02-07T17:52:56.954Z · score: 13 (16 votes)
Christmas · 2010-12-19T14:40:26.703Z · score: 5 (8 votes)
Aieee! The stupid! it burns! · 2010-12-03T14:53:47.932Z · score: 15 (18 votes)
Outreach opportunity · 2010-11-12T11:07:29.504Z · score: 11 (14 votes)
There is no such thing as pleasure · 2010-10-07T13:49:48.476Z · score: 3 (10 votes)
Dennett's heterophenomenology · 2010-01-16T20:40:18.505Z · score: 5 (26 votes)
The usefulness of correlations · 2009-08-04T19:00:44.114Z · score: 13 (22 votes)
Information cascades in scientific practice · 2009-07-29T12:08:31.135Z · score: 8 (9 votes)
Causality does not imply correlation · 2009-07-08T00:52:28.329Z · score: 13 (20 votes)
Fourth London Rationalist Meeting? · 2009-07-02T09:56:03.799Z · score: 3 (4 votes)
Without models · 2009-05-04T11:31:38.399Z · score: 15 (29 votes)
What is control theory, and why do you need to know about it? · 2009-04-28T09:25:48.139Z · score: 43 (49 votes)