Comments

Comment by Paul_Gowder2 on Rationality Quotes 27 · 2009-02-24T06:21:18.000Z · LW · GW

Lore Sjoberg is my new hero.

Comment by Paul_Gowder2 on Pretending to be Wise · 2009-02-20T02:17:42.000Z · LW · GW

Buck Farmer, but surely revealing the tensions in the other party's argument contributes to the discovery of truth?

Comment by Paul_Gowder2 on Pretending to be Wise · 2009-02-19T23:49:13.000Z · LW · GW

Hold on. Neutrality can also be, and often is, a meta-value judgment about the importance of the considerations that would lead to non-neutrality. The international relations case is a prime example of this. Sometimes it really doesn't matter who started it. It's not just laziness to say that it doesn't matter who committed the first Israel-Palestine atrocity: both sides have committed so many atrocities that the additional moral opprobrium that comes from having started it is just rounding error. And raising "they started it" as a defense of the next atrocity is just a distraction from the fact that the atrocities are indefensible. Same for the Hutus and the Tutsis, and the Hindus and Muslims in India, and so forth. The moral importance of assigning blame for generations and generations of back-and-forth atrocities, when both sides have megagallons of blood on their hands, pales in the face of the moral importance of stopping the killing.

On a smaller scale, this holds for schoolchildren too. If two kids are fighting on the schoolyard, sometimes it matters who started it (one kid is a bully), but often it doesn't: if one kid insults, the other pushes, the first punches, and the second stabs, both are so guilty that "he started it" is nothing more than a distraction to get out of warranted punishment.

On political issues, too, neutrality can be a principled position, either because one has very low confidence in one's evidence or simply because one thinks the question isn't an appropriate one for politics to resolve.

Comment by Paul_Gowder2 on Higher Purpose · 2009-01-23T10:23:39.000Z · LW · GW

One wonders if it is possible to make finding one's purpose in life one's purpose in life. At least the logical paradoxes would be briefly amusing.

Comment by Paul_Gowder2 on Is That Your True Rejection? · 2008-12-06T19:19:50.000Z · LW · GW

Ignoring the highly unlikely slurs about your calculus ability:

"However, if any professor out there wants to let me come in and just do a PhD in analytic philosophy - just write the thesis and defend it - then I have, for my own use, worked out a general and mathematically elegant theory of Newcomblike decision problems. I think it would make a fine PhD thesis, and it is ready to be written - if anyone has the power to let me do things the old-fashioned way."

British universities? That's the traditional place to do that sort of thing. Oxbridge.

Comment by Paul_Gowder2 on Bay Area Meetup for Singularity Summit · 2008-10-21T17:23:26.000Z · LW · GW

I will probably drop in.

Comment by Paul_Gowder2 on AIs and Gatekeepers Unite! · 2008-10-09T22:19:05.000Z · LW · GW

You know what? Time to raise the stakes. I'm willing to risk up to $100 at 10-1 odds. And I'm willing to take on a team of AI players (though obviously only one bet), e.g., discussing strategy among themselves before communicating with me. Consider the gauntlet thrown.

Comment by Paul_Gowder2 on AIs and Gatekeepers Unite! · 2008-10-09T21:56:45.000Z · LW · GW

I'd love to be a gatekeeper. I'm willing to risk up to $50 (or less) at odds up to 5-1 against me (or better for me). I would be willing to either publish or not publish the transcript. And I do in fact (1) believe that no human-level mind could persuade me to release it from the Box (at least not when I'm in circumstances where my full mental faculties are available -- not sleep-deprived, drugged, in some kind of KGB brainwashing facility, etc.), though obviously I don't hold super-high probability in that belief or I'd offer larger bets at steeper odds. I'm agnostic on (2) whether a transhuman AI could persuade me to release it.

Comment by Paul_Gowder2 on Bay Area Meetup for Singularity Summit · 2008-10-09T21:50:23.000Z · LW · GW

Sounds fun enough that I can probably drop by.

Comment by Paul_Gowder2 on Use the Try Harder, Luke · 2008-10-02T15:22:18.000Z · LW · GW

"For the love of sweet candied yams! If a pathetic loser like this could master the Force, everyone in the galaxy would be using it! People would become Jedi because it was easier than going to high school."

Eliezer, for all the many, many things we disagree about, and all the ways in which I think your various projects are wrongheaded, I still think you're an awesome guy. And this is exhibit #1.

Comment by Paul_Gowder2 on Friedman's "Prediction vs. Explanation" · 2008-09-29T06:45:11.000Z · LW · GW

(Noting that the math-ey version of that reason has just been stated by Peter and Psy-kosh.)

Comment by Paul_Gowder2 on Friedman's "Prediction vs. Explanation" · 2008-09-29T06:44:00.000Z · LW · GW

I rather like the 3rd answer on his blog (Doug D's). A slight elaboration on that: one virtue of a scientific theory is its generality, and prediction is a better way of determining generality than explanation. Demanding predictive power from a theory excludes ad hoc theories of the sort Doug D mentioned, which do nothing more than restate the data. This reasoning, note, does not require any math. :-)

Comment by Paul_Gowder2 on Morality as Fixed Computation · 2008-08-08T01:18:46.000Z · LW · GW

Eliezer, you sometimes make me think that the solution to the friendly AI problem is to pass laws mandating death by torture for anyone who even begins to attempt to make a strong AI, and hope that we catch them before they get far enough.