Comments

Comment by Paul_Crowley on Informers and Persuaders · 2009-02-11T13:54:07.000Z · LW · GW

I've read countless papers on crypto, and they mostly seem pretty formal to me - what are people comparing them to? Is it really worse in other fields? There is some variation - DJB's style is distinctly less formal than that of other authors - but my perception is that papers on, for example, network engineering seem a lot less formal than crypto papers. I think there's plenty of room to improve the readability of crypto papers by encouraging less formality.

One trivial example of signalling here is the way everyone still uses the Computer Modern font. This is a terrible font, and it's trivial to improve the readability of your paper by using, say, Times New Roman instead, but Computer Modern says that you're a serious academic in a formal field.
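
(For what it's worth, the switch really is trivial in LaTeX. A minimal sketch, using the standard mathptmx package - one Times-like option among several; picking it here is my illustration, not a claim about the one right choice:)

    % A minimal example: replacing Computer Modern with a Times-like font.
    % mathptmx swaps both the text and the maths fonts; alternatives such as
    % the times or newtx packages make similar swaps.
    \documentclass{article}
    \usepackage{mathptmx}
    \begin{document}
    This paragraph is set in a Times-like font rather than Computer Modern.
    \end{document}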

Comment by Paul_Crowley on Use the Try Harder, Luke · 2008-10-02T12:12:22.000Z · LW · GW

I'd have more sympathy with Luke (and thus more forgiveness for Lucas) if, instead of the whole X-Wing moving when he tries it, we saw a much less dramatic effect: perhaps drooping aerials standing up, or the flaps lifting gently, or some such.

However, in such films the plausibility of a character's behaviour is always sacrificed in the interests of better visuals or better drama; cf. the zillion ludicrous excuses scriptwriters present for characters not telling each other what's going on.

Comment by Paul_Crowley on The Truly Iterated Prisoner's Dilemma · 2008-09-05T17:10:02.000Z · LW · GW

Regret of rationality in games isn't a mysterious phenomenon. Let's suppose that after the one round of PD we're going to play, I have the power to destroy a billion paperclips at the cost of one human life, and Clippy knows that. If Clippy thinks I'm a rational outcome-maximizer, then it knows that whatever threats I make, I'm not going to carry them out, because carrying them out will have no payoff when the time comes. But if it thinks I'm prone to irrational emotional reactions, it might conclude that I'll carry out my billion-paperclip threat if it defects, and so cooperate.
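
(To make the logic concrete, here's a minimal sketch in Python. The payoff numbers are my own illustrative assumptions - standard one-shot PD values plus the billion-paperclip penalty - not anything fixed by the scenario:)

    # Clippy's payoffs in paperclips; the PD values (3, 5) and the threat
    # size are illustrative assumptions.
    COOPERATE_PAYOFF = 3           # what Clippy gets from mutual cooperation
    DEFECT_PAYOFF = 5              # what Clippy gets by defecting on a cooperator
    THREAT_SIZE = 1_000_000_000    # paperclips I threaten to destroy if it defects

    def expected_payoff(defect: bool, p_carry_out: float) -> float:
        """Clippy's expected paperclips, given its estimate of the
        probability that I actually carry out my threat after a defection."""
        if not defect:
            return COOPERATE_PAYOFF
        return DEFECT_PAYOFF - p_carry_out * THREAT_SIZE

    # If Clippy models me as a rational outcome-maximizer, the threat has no
    # payoff for me once defection has already happened, so p_carry_out ~ 0:
    print(expected_payoff(True, 0.0))    # 5.0 - defecting beats cooperating (3)
    # But even a sliver of belief that I'll lash out flips the decision:
    print(expected_payoff(True, 1e-6))   # -995.0 - cooperating (3) now wins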

Comment by Paul_Crowley on When (Not) To Use Probabilities · 2008-07-23T11:41:34.000Z · LW · GW

If I could prevent only one of these events, I would prevent the lottery.

I'm assuming that this is in a world where there are no payoffs to the LHC; we could imagine a world in which it's decided that switching the LHC on is too risky, but in which, before it is mothballed, a group of rogue physicists try to do the riskiest experiment they can think of on it out of sheer ennui.

Comment by Paul_Crowley on I'd take it · 2008-07-02T09:02:36.000Z · LW · GW

In what context is $10 trillion not a huge amount of money? It's approximately the entire US national debt, and even the difference between the two would nearly pay off the entire debt of the third world; it's what the UK government spends in ten years. If I had that kind of wealth, after I'd cleared all third-world debts, I'd carpet developing nations everywhere with infrastructure like roads and such, and pay for literacy and clean water everywhere, and I'd still have money left over.

Comment by Paul_Crowley on Decoherence is Simple · 2008-05-07T11:22:21.000Z · LW · GW

Nick Tarleton: sadly, it's my experience that it's futile to try and throw flour over the dragon.

Comment by Paul_Crowley on Decoherence is Simple · 2008-05-06T09:53:17.000Z · LW · GW

Tomorrow I will address myself to accusations I have encountered that decoherence is "unfalsifiable" or "untestable", as the words "falsifiable" and "testable" have (even simpler) probability-theoretic meanings which would seem to be violated by this usage.

Doesn't this follow trivially from the above? No experiment can determine whether or not we have souls, but that counts against the idea of souls, not against the idea of their absence. If decoherence is the simpler theory, then lack of falsifiability counts against the other guys, not against it.

Comment by Paul_Crowley on The Born Probabilities · 2008-05-01T11:48:04.000Z · LW · GW

Roland: yes, at least one. Where did you give up and why?

Comment by Paul_Crowley on Belief in the Implied Invisible · 2008-04-08T10:15:57.000Z · LW · GW

This is what I thought at first, but on reflection, it's not quite right.

Could you say a little more about the distinction between the position preceding this remark and the one following it? They seem like different formulations of the same thing to me.

Comment by Paul_Crowley on Zombies! Zombies? · 2008-04-04T11:34:03.000Z · LW · GW

Heterophenomenology!

Sorry, I thought it needed saying.

Comment by Paul_Crowley on Newcomb's Problem and Regret of Rationality · 2008-02-01T14:31:55.000Z · LW · GW

Caledonian: you can stop talking about wagering credibility units now, we all know you don't have funds for the smallest stake.

Ben Jones: if we assume that Omega is perfectly simulating the human mind, then when we are choosing between B and A+B, we don't know whether we are in reality or in the simulation. In reality, our choice does not affect the million, but in the simulation it determines whether the million is there in reality. So we should reason: "I'd better take only box B, because if this is the simulation, then my choice determines whether or not I get the million in reality."
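
(A minimal sketch of that argument in Python - the box contents and the "same decision procedure run twice" framing are my illustrative rendering of the standard setup, not anything Omega specifies:)

    # Omega runs the agent's decision procedure once in simulation to decide
    # what to put in box B; then the same procedure runs again in reality.
    # From the inside, the two runs are indistinguishable.

    def decide() -> str:
        """The agent's decision procedure: returns 'one-box' or 'two-box'."""
        return "one-box"

    def newcomb_payoff(decision_procedure) -> int:
        simulated = decision_procedure()      # Omega's perfect simulation
        box_b = 1_000_000 if simulated == "one-box" else 0
        real = decision_procedure()           # the same procedure, for real
        return box_b if real == "one-box" else box_b + 1_000

    print(newcomb_payoff(decide))  # 1000000; a two-boxing procedure gets 1000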

Comment by Paul_Crowley on Circular Altruism · 2008-01-22T18:29:40.000Z · LW · GW
  1. 400 people die, with certainty.
  2. 90% chance no one dies; 10% chance 500 people die.

ITYM 1. 100 people die, with certainty.

Comment by Paul_Crowley on Against Discount Rates · 2008-01-21T11:33:12.000Z · LW · GW

Obviously there's another sort of discounting that does make sense: discounting for uncertainty. If you offer me a choice of a dollar now or $1.10 in a year, I am almost certain you will make good on the dollar now if I accept it, whereas there are many reasons why you might fail to make good on the $1.10. This sort of discounting is rationally hyperbolic rather than exponential, and so doesn't lead to the paradoxes of magnitude over time that you highlight here.
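
(A standard way to see why, sketched under an assumption not in my comment: the promise fails at a constant but unknown hazard rate \lambda, with an exponential prior over \lambda. Averaging over that uncertainty gives a hyperbolic rather than exponential discount curve:)

    % Why uncertainty about the failure rate gives hyperbolic discounting.
    % Assumption (illustrative): the promise fails at constant hazard rate
    % \lambda, with \lambda unknown and exponentially distributed, mean 1/\mu.
    \[
      D(t) = \mathbb{E}\!\left[e^{-\lambda t}\right]
           = \int_0^\infty e^{-\lambda t}\, \mu e^{-\mu \lambda}\, d\lambda
           = \frac{\mu}{\mu + t},
    \]
    % a hyperbolic curve: steep at first but flattening out, rather than
    % crushing distant-future values exponentially toward zero.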

Comment by Paul_Crowley on Guardians of Ayn Rand · 2007-12-19T12:47:09.000Z · LW · GW

Some have said this essay is a poor, ad hominem criticism of Objectivism. But it isn't a criticism of Objectivism per se at all, and isn't meant to be - it is intended to answer the question "how did a belief system that ostensibly venerates reason and independent thought give rise to cult-like behaviour?" Thus discussion of the merits of Objectivism itself doesn't address the question, while an account of Rand's life sheds a lot of light.

Comment by Paul_Crowley on Guardians of Ayn Rand · 2007-12-18T11:26:51.000Z · LW · GW

And of course, Eliezer has already quoted the scripture of the prophet Brian, who sayeth:

"Look. You've got it all wrong. You don't need to follow me. You don't need to follow anybody! You've got to think for yourselves. You're all individuals! You're all different! You've all got to work it out for yourselves! Don't let anyone tell you what to do!" (Life of Brian, scene 19)

Comment by Paul_Crowley on Reversed Stupidity Is Not Intelligence · 2007-12-14T12:35:49.000Z · LW · GW

If this is the same Caledonian who used to post to the Pharyngula blog, he's barred from there now with good reason.

Is there a cognitive bias at work that makes it hard for people not to feed trolls?

Comment by Paul_Crowley on New Improved Lottery · 2007-12-11T14:41:37.000Z · LW · GW

_Gi: you have described exactly my lottery strategy, as well as that of Patti Smith:

Every night before I go to sleep
I find a ticket, win a lottery
Scoop them pearls up from the sea
Cash them in and buy you all the things you need...

Comment by Paul_Crowley on Fake Utility Functions · 2007-12-06T18:27:37.000Z · LW · GW

It may be that I need to read one of the links in the previous post, but I tend to imagine that AIs will need to have upbringings of some sort. We acquire morality much as we acquire knowledge - would it suffice for AIs to do the same?