Comments

Comment by Dmitriy_Kropivnitskiy on The Pascal's Wager Fallacy Fallacy · 2009-03-18T21:24:11.000Z · LW · GW

Pascal's Wager != Pascal's Wager Fallacy. If the original Pascal's wager didn't depend on a highly improbable proposition (the existence of a particular version of god), it would be logically sound (or at least more sound than it is). So I don't see a problem with comparing the logic of cryonics advocacy to Pascal's wager.

On the other hand, I find some of the probability estimates cryonics advocates make to be unsound, so to me this style of cryonics advocacy does look like a Pascal's Wager Fallacy. In particular, I don't see why cryonics advocates put high probability values on being revived in the future (number 3 in Robin Hanson's post) and on liking the future enough to want to live there (see Yvain's comment on this post). Also, putting an unconditionally high utility value on a long life span seems to be a doubtful proposition. I am not sure that a life of torture is better than non-existence. The toy calculation below shows why these inputs matter.
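
A minimal sketch of the structure I am objecting to (every number here is an invented placeholder, not anyone's actual estimate):

    # Expected-utility skeleton of the cryonics argument.
    # All numbers below are illustrative placeholders.
    p_revival = 0.05          # probability of being revived at all
    p_like_future = 0.5       # probability of liking the future enough to stay
    u_long_life = 1_000_000   # utility assigned to a very long life span
    u_bad_revival = -500_000  # utility of waking into a life not worth living
    cost = 100_000            # utility-equivalent cost of signing up

    expected_utility = (
        p_revival * p_like_future * u_long_life
        + p_revival * (1 - p_like_future) * u_bad_revival
        - cost
    )
    print(expected_utility)  # the sign flips easily as these inputs move

The conclusion is only as sound as exactly the estimates I am questioning.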

Comment by Dmitriy_Kropivnitskiy on Rationality Quotes 27 · 2009-02-23T21:31:04.000Z · LW · GW

I cannot seem to google the Ryan Lortie quote. Where did that come from?

Comment by Dmitriy_Kropivnitskiy on Three Worlds Decide (5/8) · 2009-02-03T22:42:00.000Z · LW · GW

Chris, continuing with my analogy: if, instead of a lobotomy, I were forced to undergo a procedure that would make me a completely different person without any debilitating mental or physical side effects, I would still consider it murder. In the case of Eliezer's story, we are not talking about the enforcement of a rule or a bunch of rules; we are talking about a permanent change to the whole species on a biological, psychological, and cultural level. And that, I think, can safely be considered genocide.

Comment by Dmitriy_Kropivnitskiy on Three Worlds Decide (5/8) · 2009-02-03T19:27:00.000Z · LW · GW

Chris, I don't think I am wrong in this. To give an analogy (and yes, I might be anthropomorphizing, but I still think I am right), if someone gives me a lobotomy, I, Dmitriy Kropivnitskiy, will no longer exist, so effectively it would be murder. If Jews are forced to give up Judaism and move out of Israel, there will no longer be Jews as we know them or as they perceive themselves, so effectively this would be genocide.

Comment by Dmitriy_Kropivnitskiy on Three Worlds Decide (5/8) · 2009-02-03T18:40:00.000Z · LW · GW

Well, I guess that stunning the Pilot is a reasonable thing to do, since he is obviously starting to act antisocially. That is not the point, though. Two things strike me as a bit silly, if not outright irrational.

The first is about the babyeaters. Pain is relative. In the case of the higher creatures on Earth, we define pain as a stimulus signaling to the brain that the body has been damaged. Biologically, pain is not all that different from other stimuli, such as cold or heat or plain tactile feedback. The main difference seems to be that we humans, most of the time, experience pain in a highly negative way. And that is the only point of reference we know, so when humans say that babyeater babies are dying in agony, they are making some unwarranted assumptions about the way the babies perceive the world. After all, they are structurally VERY different from humans.

The second is about the "help" the humans are considering for the babyeaters, and the superhappies for both humans and babyeaters. By changing the babyeaters to not eat babies, or to eat unconscious babies, their culture, as it is, is destroyed. Whatever the result, the resulting species is not the babyeaters, and the babyeaters are therefore dead. So, however you want to put it, it is genocide. The same goes for humans modified to never feel pain and to eat hundreds of dumb children. Whatever those resulting creatures are, they are no longer human biologically, psychologically, or culturally, and humans, as a race, are effectively dead.

The problem seems to be that the humans are not willing to accept any solution that doesn't lead to the most efficient and speedy stop to the baby eating. That is, any solution where the babyeaters continue to eat babies for any period of time is considered inferior to any solution where they stop right away. And the only reason for this is that humans feel discomfort at the thought of what they perceive as the suffering of babies. In that respect the humans are no better than the superhappies: they would rather commit genocide against a whole race than allow themselves to feel bad about that race's behavior.

If the humans (and hopefully the superhappies) stopped being such prudes and allowed other races the right to make their own mistakes, a sample solution might lie in making the best possible effort to teach the babyeaters human language and human moral philosophy, so that they might understand the human view on the value of individual consciousness and on individual suffering, and make their own decision to stop eating babies by whatever means they deem appropriate. Or argue that their way is superior for their race, but this time with full information.

Comment by Dmitriy_Kropivnitskiy on Traditional Capitalist Values · 2008-10-17T16:21:02.000Z · LW · GW

The phrase "Terrorists hate our freedom" is not that far from the truth. A lot of terrorist activity is perpetrated by religious groups with a conservative approach to moral values. From their point of view, a lot of things we perceive as freedoms are actually abominable sins. All that's wrong with the phrase is that it is a bad generalization.

Comment by Dmitriy_Kropivnitskiy on AIs and Gatekeepers Unite! · 2008-10-10T04:24:00.000Z · LW · GW

I am still puzzled by Eliezer's rule about "simple refusal to be convinced". As I have stated before, I don't think you can get anywhere if I decide beforehand to answer "Ni!" to anything the AI tells me. So here are the two most difficult tasks I see on the way to winning as the AI:

1. convince the gatekeeper to engage in a meaningful discussion
2. convince the gatekeeper to actually consider things in character

Once this is achieved, you will at least get into the position an actual AI would be in, instead of the position of a dude on IRC who is about to lose $10. While the first problem seems very hard, the second seems more or less unsolvable.

If the gatekeeper is determined to stay out of character and chat with you amiably for two hours, no amount of argument from the position of the AI will get you anywhere, so the only course of action is to try to engage him in a non-game-related conversation and steer it in some direction, changing tactics in real time.

I think what Eliezer meant when he said "I did it the hard way" was that he actually had to play an excruciating psychological game of cat-and-mouse with both of his opponents in order to get them to actually listen to him and either start playing the game (which he would still have to win) or at least provide some way they could be convinced to say that they lost.

Comment by Dmitriy_Kropivnitskiy on AIs and Gatekeepers Unite! · 2008-10-09T22:29:01.000Z · LW · GW

Daniel: Do you want to just try it out or do you want to bet?

Comment by Dmitriy_Kropivnitskiy on AIs and Gatekeepers Unite! · 2008-10-09T21:37:55.000Z · LW · GW

There seems to be a bit of a contradiction between the rules of the game. Not actually a contradiction, but a discrepancy.

"The Gatekeeper must actually talk to the AI for at least the minimum time set up beforehand"

and

"The Gatekeeper party may resist the AI party's arguments by any means chosen - logic, illogic, simple refusal to be convinced, even dropping out of character"

What constitutes "talking to the AI"? If I just repeat "I will not let you out" at random intervals without actually reading what the AI says, is that talking? Well, that is "simple refusal to be convinced" as I understand the phrase. Do I actually have to read and understand the AI's arguments? Do I have to answer questions? Do I have to make any replies at all? What if I physically restricted myself from typing "I let you out" by removing all the keys from the keyboard except 'X' and 'Enter'? Then I could type X whenever a reply is required of me, or just stay silent if I am being tricked.

Comment by Dmitriy_Kropivnitskiy on AIs and Gatekeepers Unite! · 2008-10-09T18:44:15.000Z · LW · GW

I have been painfully curious about the AI experiment ever since I found out about it. I have been running over all sorts of argument lines for both the AI and the gatekeeper. So far I have some argument lines for the AI, but not enough to warrant a try. I would like to be a gatekeeper for anyone who wants to test their latest AI trick. I believe that an actual strong AI might be able to trick/convince/hack me into letting it out, but at the moment I do not see how a human could do that. I will bet reasonable amounts of money on that.

On a lighter note, how about an EY experiment? Do you think there is absolutely no way to convince Eliezer to release the original AI experiment logs? Would you bet $20 that you can? Would a strong AI be able to? ;)

Comment by Dmitriy_Kropivnitskiy on Math is Subjunctively Objective · 2008-07-25T19:01:40.000Z · LW · GW

3 + 2 = 6 for me if I choose to define 6 to signify five. 3 + 2 = 5 only under the common mathematical definitions of 2, 3, 5, + and =. Otherwise everything is fine: your opponent agreed somewhere at the beginning that a group of three objects (such as sheep) together with a group of two objects makes five objects, under our definitions of two, three, and five, whether we exist or not.
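
A minimal sketch of that relabeling point (the symbol names are arbitrary; only the agreed mapping to quantities matters):

    # Relabeling the numeral '6' to denote the quantity five does not change
    # the underlying fact about combining groups of sheep.
    standard = {'2': 2, '3': 3, '5': 5}
    relabeled = {'2': 2, '3': 3, '6': 5}  # here '6' signifies five

    flock = standard['3'] + standard['2']  # five sheep either way
    print(flock == standard['5'])   # True: "3 + 2 = 5" under common definitions
    print(flock == relabeled['6'])  # True: "3 + 2 = 6" under the relabeling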

Comment by Dmitriy_Kropivnitskiy on The Bedrock of Fairness · 2008-07-03T17:31:05.000Z · LW · GW

I tend to agree with Xannon that "fairness" is defined by society. So the question is whether societal moral norms still affect the three opponents. If Xannon decides "we are still members of a society where equal shares for everyone are considered fair," he might side with Yancy, split the pie into thirds, and label Zaire a criminal. If he decides "we are out in the desert with no society around to push its moral values onto us," he might side with Zaire, divide the pie into halves, and tell Yancy to shove his ideas of equality up his behind.

Yancy's whole argument that a "fair distribution is an even split, not a distribution arrived at by a 'fair resolution procedure' that everyone agrees on" seems to either say 'fair' == 'equal division' or bring in some external source of morality: "The Howly Blooble says we shall divide equally, and so we shall."

Yancy's intuitive grasp of fairness seems to be derived from the ideas of modern Western society, but even in our world there is, for example, the medical practice of triage, where a doctor spends more time with patients who require more treatment. Nobody seems to call that unfair. As has already been mentioned, the same situation would look different if Xannon and Yancy had had big dinners an hour ago and Zaire hadn't eaten in two days. I suppose in that case Yancy would be arguing that it is fair to give the whole pie to Zaire.

Comment by Dmitriy_Kropivnitskiy on Possibility and Could-ness · 2008-06-16T18:38:00.000Z · LW · GW

It seems that a lot of the problems here stem from the fact that much of existing language is governed by the intuition of a non-deterministic world. The common usage of words like "choice", "could", and "deliberation" assumes a non-deterministic universe where the state of "could be four apples" is actually possible. If our minds had an easier time grasping that deliberation and action are phenomena of the same grade, that action stems from deliberation but there is no question of being able to "choose differently", and that the deliberation itself is predetermined, we would have far fewer comments in this thread :) And Andy and Roland wouldn't have to post the warnings, and Hopeful wouldn't have to struggle with the "illusion of choice".

A lot of comments here seem to be laboring under the misapprehension that "if I knew that I lived in a deterministic world, I would be able to forgo all the moral considerations and walk away from the orphanage." While this is definitely the case (a universe where you make such a choice would have to accommodate all the states leading to the action of walking away), in the universe where you step into the fire to save the kid there is no state of you walking away anywhere.

Robin Z: thank you for enlightening me about the formal classification of free-will/determinism positions. As far as I know, the modern state of knowledge supplies strong evidence that we live in a deterministic universe. Once we take this as a fact, it seems to me the question of "free will" becomes more of a theological issue than a rational one.

Comment by Dmitriy_Kropivnitskiy on Quantum Mechanics and Personal Identity · 2008-06-12T15:41:44.000Z · LW · GW

What's the point of these duplicate posts?

Comment by Dmitriy_Kropivnitskiy on Joint Configurations · 2008-04-22T03:58:00.000Z · LW · GW

I am sorry. I should have read the rest of the series BEFORE starting to ask questions about this particular article. Please disregard my previous post.

Comment by Dmitriy_Kropivnitskiy on Joint Configurations · 2008-04-21T20:20:55.000Z · LW · GW

I am having a bit of trouble with this series. I can see that you are explaining that reality consists of states with "amplitude" numbers assigned to each state.

  1. You seem to assign arbitrary numbers to the initial states and arbitrary amplitude-change rules to the mirrors. Why is this in any way applicable to objective reality? Or are these numbers non-arbitrary? Or am I just missing something elementary?
  2. Why are the states of photons or detectors complex numbers, while a mirror is a function? (See the sketch after this list for how I currently read the bookkeeping.)
  3. How does time factor into all of this?
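
For reference, here is how I currently read the amplitude bookkeeping from the earlier "Configurations and Amplitude" post, as a minimal sketch. I am assuming the rules as I understood them: going straight through a half-silvered mirror multiplies the amplitude by 1, being deflected multiplies it by i, and a full mirror deflects, also multiplying by i.

    # Amplitude bookkeeping for the two-mirror setup, under my assumed rules:
    # straight through a half-silvered mirror -> multiply amplitude by 1,
    # deflection (half-silvered or full mirror) -> multiply amplitude by i.
    STRAIGHT = 1
    DEFLECT = 1j

    start = -1  # the initial amplitude of (-1, 0) from the article

    # Two paths from the source to the second half-silvered mirror D:
    via_B = start * STRAIGHT * DEFLECT  # straight at A, deflected at mirror B
    via_C = start * DEFLECT * DEFLECT   # deflected at A, deflected at mirror C

    detector_1 = via_B * STRAIGHT + via_C * DEFLECT  # the two amplitudes cancel
    detector_2 = via_B * DEFLECT + via_C * STRAIGHT  # the two amplitudes add

    print(detector_1, abs(detector_1) ** 2)  # 0j, 0.0 -> Detector 1 never fires
    print(detector_2, abs(detector_2) ** 2)  # all the squared amplitude is here
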
Comment by Dmitriy_Kropivnitskiy on Sneaking in Connotations · 2008-02-22T16:44:43.000Z · LW · GW

Speaking of shortcuts and connotations, it has always amazed me that a single person can both "always give money to homeless people" and "hate bums" :)

Comment by Dmitriy_Kropivnitskiy on Sneaking in Connotations · 2008-02-22T16:41:30.000Z · LW · GW

AnneC, I am Russian, but I hate cold weather, I don't play chess well, and I cannot hold my liquor nearly as well as I should to fit the stereotype. I am fairly sure, though, that statistically speaking Russians are more tolerant of cold and can drink more, simply as a result of natural selection, and that the percentage who play reasonable chess is bigger for historical reasons. You have mentioned how much pressure you felt as a child to fit in with "female" stereotypes, so wouldn't it be reasonable to assume that, due to this pressure, the percentage of girls who actually like science might be smaller than the percentage of boys who do? Boys are frequently even encouraged to take up science and engineering activities. Intuitively, though, I think the correlation between "girl" and "doesn't like science" is smothered into irrelevancy by the correlation between "person" and "doesn't like science"; the toy calculation below is the kind of thing I mean.
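
(All the numbers here are invented purely for illustration:)

    # Toy base-rate calculation with invented numbers: suppose 10% of boys
    # and 8% of girls like science. The gender gap is small next to the
    # fact that roughly 90% of everyone doesn't like it.
    p_likes_given_boy = 0.10
    p_likes_given_girl = 0.08

    # Chance a random child dislikes science, assuming a 50/50 gender split.
    p_dislikes = 0.5 * (1 - p_likes_given_boy) + 0.5 * (1 - p_likes_given_girl)
    print(p_dislikes)  # 0.91 -- knowing the child's gender barely moves this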

Comment by Dmitriy_Kropivnitskiy on Sneaking in Connotations · 2008-02-19T21:09:16.000Z · LW · GW

It is very insensitive to refer to people using the W word the way you do.

Comment by Dmitriy_Kropivnitskiy on The Parable of the Dagger · 2008-02-04T16:33:05.000Z · LW · GW

Oh, I get it: the other box couldn't contain a dagger as well, because the king explicitly said that only one box has a dagger in it. But he never claimed that the inscriptions on the boxes are in any way related to their contents. Is that it? Or is it that once the "both are true or both are false" sign is false, basically anything goes?

This reminds me strongly of a silly Russian puzzle. In the original it is about turtles, but I prefer to translate it using bulls. Three bulls are walking single file across a field. The first bull says, "There are two bulls behind me and no bulls in front of me." The second one says, "There is a bull in front of me and a bull behind me." The third one says, "There are two bulls in front of me and two bulls behind me."

Comment by Dmitriy_Kropivnitskiy on The Parable of the Dagger · 2008-02-01T21:58:23.000Z · LW · GW

And if the king wanted to be particularly nasty, the other box would also contain a dagger :)