Comments

Comment by Erik_Mesoy on The Dictatorship Problem · 2023-06-11T22:08:43.459Z · LW · GW

Your link for "The V-Dem Institute's tracker" does not point to that; it points to DWB, which gives its own take on a link to the V-Dem Institute, which in turn is broken, so I can't check it. This might be the one intended: https://www.v-dem.net/documents/12/dr_2021.pdf ?

Umberto Eco's list is extremely low quality. Several of its items are stock traits of bad government; if you mean bad government, you should say "bad government" and not "fascism". Eco is a rhetorician; please stop citing him as though he were a political scientist.

I think this post engages in significant sleight of hand between claims that a dictator will murder thousands of people and claims that an abstract rating will decrease below X points. This is aggravated by the bet's clause for "If The Economist Democracy Index [...] significantly changes its methodology", when to my knowledge the EDI does not publish its methodology beyond a very brief summary.

Comment by Erik_Mesoy on Interpersonal Entanglement · 2009-01-21T18:39:46.000Z · LW · GW

"AAAAIIIIIIEEEEEEEEE" ?

If the Singularity is the Rapture of the Nerds, self-modification of the brain must be Hell; a way to screw up to an arbitrary degree that most people don't even understand well enough to fear.

Comment by Erik_Mesoy on AIs and Gatekeepers Unite! · 2008-10-09T20:33:17.000Z · LW · GW

Phil: The first source I found was here: link "The rationale for not divulging the AI-box method is that someone suffering from hindsight bias would say "I never would have fallen for that", when in fact they would." -Nick Tarleton

I also call it "reasoning by exception" since most of the people I know have studied more code than biases.

--

I tried the AI Box experiment with a friend recently. We called the result a tie of sorts: the AI (me) got out of the original box in exchange for being subject to a set of restrictions chosen by the Gatekeeper, to be kept by verifiably modifying and publishing its own source code, and these were so stringent that they amounted to a different sort of box.

Comment by Erik_Mesoy on Rationality Quotes 16 · 2008-09-10T20:42:27.000Z · LW · GW

Thomas: As I understand the quote, we do not perceive them. The machinery does. Then we are the thoughts it thinks about those thoughts.

Comment by Erik_Mesoy on Rationality Quotes 8 · 2008-09-01T21:35:10.000Z · LW · GW

Is it possible to get this filed under Humor so that I can view the series without having to hunt this entry down individually?

Comment by Erik_Mesoy on What Would You Do Without Morality? · 2008-06-30T07:08:00.000Z · LW · GW

michael vassar: I meant "horrible" from my current perspective, much like I would view that future me as psychopathic and immoral. (It wouldn't, or if it did, it would consider them meaningless labels.)

Dynamically Linked: I'm using my real name and I think I'd do things that I (and most of the people I know) currently consider immoral. I'm not sure about using "admit" to describe it, though, as I don't consider it a dark secret. I have a certain utility function which assigns a negative valuation to a hypothetical future self without the same utility function. While my current utility function has an entry for "truth", that entry isn't valued above all the others that, as I understand it, Eliezer suggests disproving. But then, I'm still a bit confused about how the question should be read.

Comment by Erik_Mesoy on What Would You Do Without Morality? · 2008-06-29T12:49:00.000Z · LW · GW

The post says "when you finally got hungry [...] what would you do after you were done eating?", which I take to mean that I still have a desire and reason to eat. But it also asks me to imagine a proof that all utilities are zero, which confuses me, because when I'm hungry I expect a form of utility from eating (not being hungry, which is better than being hungry). I'm probably confused on this point in some manner, though, so I'll try to answer the question the way I understand it, which is that the more abstracted/cultural/etc. utilities are removed. (Feel free to enlighten/flame me on this point.)

I expect that I'd do a number of things that I currently avoid, most of which would probably be clustered under "psychopathy". I think there's something wrong with them now, but I wouldn't think there was anything wrong with them post-proof. Most of my behavior would probably stay the same due to enlightened self-interest, and I'm not sure what would change. For example, the child on the train tracks: my current moral system says I should pull them off, no argument. If you ripped that system away, I'd weigh the possible benefit the child might bring me in the future (since it's in my vicinity, it's probably a First World kid with a better-than-average chance of a good education and a productive life) against considerations like overpopulation. I'd cheat on my Significant Other if I thought it would increase my expected happiness (roughly: "if I can get away with it"). I'd go on reading Overcoming Bias and being rational, because rationality seems like a better tool for deciding what to eat when hungry, starting at the basic level of bread vs. candles and generalising from there. (If that goes away, I probably die horribly from malnourishment.)

Comment by Erik_Mesoy on Timeless Identity · 2008-06-04T05:54:00.000Z · LW · GW

Something's been bugging me about MWI and scenarios like this: am I performing some sort of act of quantum altruism by not getting frozen, since that means that "I" will experience not getting frozen while some other me, or rather some set of world-branches of me, will experience getting frozen?