Comments

Comment by shware on Eliezer Yudkowsky Facts · 2014-07-19T14:14:04.668Z · LW · GW

I feel this should not be in featured posts, as amusing as it was at the time.

Comment by shware on Open thread, 9-15 June 2014 · 2014-06-09T18:52:57.074Z · LW · GW

For example, someone who was completely colorblind from birth could never understand what it felt like to see the color green, no matter how much neuroscience that person knew, i.e., you could never convey the sensation of "green" through a layout of a connectome or listing wavelengths of light.

The 'colorblind-synesthete'?

Comment by shware on Curiosity: Why did you mega-downvote "AI is Software" ? · 2014-06-05T22:46:05.976Z · LW · GW

I didn't have a problem with 1 or 2, but 3 and 4 were the big problems. (Though I didn't downvote, because it was already well negative at that point.) Saying AI is software is an assertion, but it's not a meaningful one. Are you saying software that prints 'hello world' is intelligent? From some of your previous comments I gather you are interested in how software, the user, the designer, and other software interact in some way, but there was none of that in the post. It's as if Eliezer had said 'rationality IS winning IS rationality' as the entirety of the Sequences.

Comment by shware on Open Thread, May 26 - June 1, 2014 · 2014-05-27T17:25:18.120Z · LW · GW

"Again and again, I’ve undergone the humbling experience of first lamenting how badly something sucks, then only much later having the crucial insight that its not sucking wouldn’t have been a Nash equilibrium." --Scott Aaronson

Comment by shware on Self-Congratulatory Rationalism · 2014-03-01T20:16:41.633Z · LW · GW

A Christian proverb says: “The Church is not a country club for saints, but a hospital for sinners”. Likewise, the rationalist community is not an ivory tower for people with no biases or strong emotional reactions, it’s a dojo for people learning to resist them.

SlateStarCodex

Comment by shware on White Lies · 2014-02-08T17:53:03.100Z · LW · GW

I find it takes a great deal of luminosity to be honest with someone. If I am in a bad mood, I might feel that it's my honest opinion that they are annoying, when in fact what is going on in my brain has nothing to do with their actions. I might have been able to like the play in other circumstances, but was having a bad day, so flaws I might otherwise have been able to overlook were magnified in my mind, etc.

This is my main fear with radical honesty, since it seems to promote thinking that negative thoughts are true just because they are negative. The reasoning goes 'I would not say this if I were being polite, but I am thinking it, therefore it is true', without realizing that your brain can make your thoughts more negative than the truth just as easily as it can make them more positive than the truth.

In fact, saying you enjoyed something you didn't enjoy, and signalling enjoyment with the appropriate facial muscles (smiling, etc.), can improve your mood by itself, especially if it makes the other person smile.

Many intelligent people get lots of practice pointing out flaws, and it is possible that this trains the brain into a mode where one's first thoughts on a topic will be critical regardless of the 'true' reaction. If your brain automatically looks for flaws in something and a friend then asks your honest opinion, you would tell them the flaws; but if you look for things to compliment, your 'honest' opinion might be different.

tl;dr: honesty is harder than many naively think, because our brains are not perfect reporters of their own state, and even if they were, good luck explaining your inner feelings about something across the inferential distance. Better to just adjust all your reactions slightly in the positive direction to reap the benefits of happier interactions (but only slightly: don't say you liked activities you loathed, or you'll be asked back; say they were OK but not your cup of tea, etc.).

Comment by shware on Even Odds · 2014-01-15T03:31:32.852Z · LW · GW

he puts 2.72 on the table, and you put 13.28 on the table.

I'm confused... if the prediction does not come true (which you estimated as being 33 percent likely), you only gain $2.72? And if the most probable outcome does come true, you lose $13.28?
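
To make the confusion concrete, here is a minimal sketch of the arithmetic as I'm reading the quoted stakes. The 33 percent figure and the dollar amounts come from the quote above; the win/lose assignment and the expected-value check are my own assumptions, not the post's actual betting algorithm.

    # Rough check of the quoted stakes under my reading, not the post's algorithm.
    # Assumptions: the prediction fails with probability 0.33 (my estimate above),
    # I win his $2.72 if it fails, and I lose my $13.28 if it comes true.
    p_fail = 0.33
    p_true = 1 - p_fail
    gain_if_fail = 2.72
    loss_if_true = 13.28

    expected_value = p_fail * gain_if_fail - p_true * loss_if_true
    print(f"expected value: ${expected_value:.2f}")  # about -$8.00

Under that reading the bet is worth roughly -$8 to me in expectation, which is why the stakes look backwards; presumably I'm misreading how the payouts are assigned.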

Comment by shware on Welcome to Less Wrong! (5th thread, March 2013) · 2013-05-15T05:47:21.178Z · LW · GW

An always-open mind never closes on anything. There is a time to confess your ignorance and a time to relinquish your ignorance, and all that...

Comment by shware on Decision Theory FAQ · 2013-05-15T05:22:06.869Z · LW · GW

Well, yes, obviously the classical paperclipper doesn't have any qualia, but I was replying to a comment wherein it was argued that any agent, on discovering the pain-of-torture qualia in another agent, would revise its own utility function in order to prevent torture from happening. It seems to me that this argument proves too much: if it were true, then on discovering an agent with paperclips-are-wonderful qualia and "fully understanding" those experiences, I would likewise be compelled to create paperclips.

Comment by shware on Open Thread, May 1-14, 2013 · 2013-05-14T03:20:12.222Z · LW · GW

By signing up for cryonics you help make cryonics more normal and less expensive, encouraging others to save their own lives. I believe there was a post where someone said they signed up for cryonics so that they wouldn't have to answer the "why aren't you signed up then?" crowd when trying to convince other people to do so.

Comment by shware on Decision Theory FAQ · 2013-03-13T21:11:23.476Z · LW · GW

Anyone who isn't profoundly disturbed by torture, for instance, or by agony so bad one would end the world to stop the horror, simply hasn't understood it.

Similarly, anyone who doesn't want to maximize paperclips simply hasn't understood the ineffable appeal of paperclipping.

Comment by shware on New censorship: against hypothetical violence against identifiable people · 2012-12-25T03:33:14.183Z · LW · GW

Taking this post in the way it was intended, i.e., 'are there any reasons why such a policy would make people more likely to attribute violent intent to LW?', I can think of one:

The fact that this policy is seen as necessary could imply that LW has a particular problem with members advocating violence. Basically, I could envision the one as saying: 'LW members advocate violence so often that they had to institute a specific policy just to avoid looking bad to the outside world.'

And, of course, statements like 'if a proposed conspiratorial crime were in fact good you shouldn't talk about it on the internet' make for good out-of-context excerpts.