## Posts

Comment by alexg on Eliezer Yudkowsky Facts · 2014-03-10T08:55:45.035Z · score: 0 (0 votes) · LW · GW

I can't believe that this one hasn't been done before:

Unless you are Eliezer Yudkowsky, there are 3 things that are certain in life: death, taxes and the second law of thermodynamics.

Comment by alexg on Prisoner's Dilemma Tournament Results · 2013-11-27T04:22:43.597Z · score: 0 (0 votes) · LW · GW

Here's a very fancy cliquebot (with a couple of other characteristics) that I came up with. The bot is in one of 4 "modes" -- SELF, HOSTILE, ALLY, PASSWORD.

Before turn 1:

Simulate the enemy for 20 turns against a strategy that plays DCCCDDCDDD and then Cs thereafter. If the enemy responds with 10 Ds followed by CCDCDDDCDC, change to mode SELF. (These are pretty much random strings; the only requirement is that the first begins with a D.)

Simulate the enemy for 10 turns against DefectBot. If the enemy cooperates in all 10 turns, change to mode HOSTILE. Otherwise, be in mode ALLY.

In any given turn,

If in mode SELF, cooperate always.

If in mode HOSTILE, defect always.

If in mode ALLY, simulate the enemy against TFT for the next 2 turns. If the enemy defects on either of these turns, or defected on the last turn, switch to mode HOSTILE and defect. Exception: if it is move 1 and the enemy will play DC, switch to mode PASSWORD and defect. Defect on the last move.

If in mode PASSWORD, check whether the enemy's moves from move 1 onward have deviated from DCCCDDCDDD followed by Cs thereafter. If so, switch to mode HOSTILE and defect. Otherwise, defect on moves 1-10, play CCDCDDDCDC on moves 11-20, and defect thereafter.

Essentially designed to dominate in the endgame.
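The pre-game mode selection can be sketched in Python. This is a minimal, hypothetical sketch only: the actual tournament framework simulated opponents directly, so `choose_mode`, the move lists, and the string encoding here are all assumptions for illustration, not the tournament's API.

```python
# Handshake / mode-selection sketch for the cliquebot described above.
# Moves are encoded as "C" (cooperate) and "D" (defect).

PASSWORD_PROBE = list("DCCCDDCDDD") + ["C"] * 10   # what we play in the 20-turn probe
PASSWORD_REPLY = ["D"] * 10 + list("CCDCDDDCDC")   # reply identifying a clique member

def choose_mode(reply_to_probe, reply_to_defectbot):
    """Pick the starting mode from the two pre-game simulations.

    reply_to_probe: the enemy's 20 simulated moves against PASSWORD_PROBE.
    reply_to_defectbot: the enemy's 10 simulated moves against DefectBot.
    """
    if reply_to_probe[:20] == PASSWORD_REPLY:
        return "SELF"       # enemy knows the password: cooperate always
    if all(move == "C" for move in reply_to_defectbot[:10]):
        return "HOSTILE"    # enemy cooperates even with DefectBot: exploit it
    return "ALLY"           # otherwise play the TFT-monitoring ally strategy

# A copy of this bot answers the probe with PASSWORD_REPLY, so two copies
# recognise each other and both enter mode SELF.
assert choose_mode(PASSWORD_REPLY, ["D"] * 10) == "SELF"
```

Note the design choice: because the reply string begins with ten Ds, a bot that merely imitates its opponent never stumbles into the password by accident, while a genuine clique member identifies itself before any real moves are played.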

Comment by alexg on Diseased thinking: dissolving questions about disease · 2013-11-13T12:41:29.585Z · score: 0 (0 votes) · LW · GW

Possibly I used it out of context. What I mean is that utility(less crime) > utility(society has an inaccurate view of the justice system) when the latter has few other consequences, and rationality is about maximising utility. Also, in the Least Convenient World, this trial will not affect any others overall, hence negating the point about the accuracy of the justice system. Here knowledge is not an end; it is a means to an end.

Comment by alexg on Welcome to Less Wrong! (5th thread, March 2013) · 2013-11-13T12:33:03.074Z · score: 2 (2 votes) · LW · GW

G'day

As you can probably guess, I'm Alex. I'm a high school student from Australia and have been disappointed with the education system here for quite some time.

I came to LW via HPMoR which was linked to me by a fellow member of the Aus IMO team. (I seriously doubt I'm the only (ex-)Olympian around here - seems just the sort of place that would attract them). I've spent the past few weeks reading the sequences by EY, as well as miscellaneous other stuff. Made a few (inconsequential) posts too.

I have very little in the way of controversial opinions to offer (relative to the demographics of this site), as just about all the unusual positions it takes I either already agreed with (e.g. atheism) or found pretty obvious after some thought (e.g. transhumanism). Maybe it's just hindsight bias.

I'm slightly disappointed with the ban on political discussion. I do agree that politics should not be mentioned when not relevant, but it seems a shame to waste this much rationality in one place by forbidding its members from using it where it's most needed. A possible compromise would be to create a politics discussion page for debating the pros and cons of particular ideologies. (If one already exists, point me to it.) A reason cited is that there are other sites for discussing politics; if any do so rationally, I'd like to see them.

It is a relief to be somewhere where I don't have to constantly take inferential distance into account, and I shall try to make the most of this. I resolve to write only that which has not been written.

Comment by alexg on Diseased thinking: dissolving questions about disease · 2013-11-13T11:51:27.373Z · score: 0 (0 votes) · LW · GW

You're kind of missing the point here. I probably should have clarified my position more. The reason I want people to trust the justice system is so that people will not be inclined to commit crimes, because it would then seem more likely (from their point of view) that, if they did, they would get caught. I suppose there is the issue of precedent to worry about, but the ultimate purpose of the justice system, from the consequentialist viewpoint, is to deter crimes (by either the offender it is dealing with or potential others), not to punish criminals. As the offender is, by assumption, unlikely to reoffend, everyone else's criminal behaviour is the main factor here, and this is minimised through the justice system's reputation. (I also should have added the assumption that attempts to convince people of the truth have failed.) By prosecuting X you are achieving this purpose. The Least Convenient Possible World is the one where there is no third way or additional factor (that I hadn't thought of) that lets you get out of this.

Rationality is not about maximising the accuracy of your beliefs, nor the accuracy of others. It is about winning!

EDIT: Grammar. EDIT: The point is, if you would punish a guilty person for a stabler society, you ought to do the same to an innocent person, for the same benefit.

Comment by alexg on Diseased thinking: dissolving questions about disease · 2013-11-13T10:50:29.294Z · score: 0 (0 votes) · LW · GW

Test for Consequentialism:

Suppose you are a judge deciding whether person X or Y committed a murder. Let's also assume your society has the death penalty. A supermajority of society (say, encouraged by the popular media) has come to think that X committed the crime, and would lose confidence in the justice system if he is set free, but you know (e.g. because you know Bayes) that Y was responsible. We also assume you know that Y won't reoffend if set free because (say) they have been too spooked by this episode. Will you condemn X or Y? (Before you quibble your way out of this, read The Least Convenient Possible World.)

If you said X, you pass.

Just a response to "Saddam Hussein doesn't deserve so much as a stubbed toe."

N.B. This does not mean I'm against consequentialism.

Comment by alexg on Newcomb's Problem and Regret of Rationality · 2013-11-12T23:03:39.010Z · score: 1 (1 votes) · LW · GW

I'm not sure if anyone's noticed this, but how do you know that you're not a simulation of yourself inside Omega? If he is superintelligent, he would compute your decision by simulating you, and you and your simulation would be indistinguishable.

This is fairly obviously a PD against said simulation -- if you cooperate in PD, you should one-box.

I personally am not sure, although if I had to decide, I'd probably one-box.