Meetup : Kyiv: 14 January 2017 18:15 (+0200) 2017-01-12T17:05:53.819Z
Meetup : Kyiv: 29 December 2016 18:15 (+0200) 2016-12-22T22:24:07.989Z
Meetup : Kyiv: 21 December 2016 18:15 (+0200) 2016-12-20T19:36:13.614Z


Comment by networked on Open thread, Sep. 28 - Oct. 4, 2015 · 2015-11-05T21:06:26.907Z · LW · GW

I like "the VCR disagreement". It sounds nice and evokes visual associations (a courtroom, a VCR) that might help one remember it. Since no alternative has been found or proposed, I will start using this term for the phenomenon.

On a related note, I wonder if there is a search system that matches vague descriptions of phenomena to (existing) definitions better than Google. (Googling "a search system that matches vague descriptions of phenomena to existing definitions" didn't yield any interesting results.)

Comment by networked on Open thread, Sep. 28 - Oct. 4, 2015 · 2015-10-01T18:45:44.843Z · LW · GW

I found that variations on the following exchange are very common on programming forums:

Alice: Programming language feature X is misused more often than not. It's bad.

Bob: Every language feature can be misused. That does not make it bad.

Suppose Alice is correct on the statistics: most code that uses feature X uses it in a way that Alice and Bob would both agree to be wrong. Suppose Bob still disagrees with her over it making the feature bad. He disagrees not because he thinks the good uses outweigh the bad ones but because it is possible, in principle, to only use feature X the right way. Is there a specific name for their kind of disagreement?

Comment by networked on Open Thread, September 23-29, 2013 · 2013-09-26T12:34:51.898Z · LW · GW

Less Wrong and its comments are a treasure trove of ethical problems, both theoretical and practical, and possible solutions to them (the largest one to my knowledge; do let me know if you are aware of a larger forum for this topic). However, this knowledge is not easy to navigate, especially to an outsider who might have a practical interest in it. I think this is a problem worth solving and one possible solution I came up with is to create a StackExchange-style service for (utilitarian, rationalist) ethics. Would you consider such a platform for ethical questions to be useful? Would you participate?

Possible benefits:

  1. Making existing problems and their answers easier to navigate through the use of tagging and a stricter question-answer format.

  2. Accumulation of new interesting problems.

The closest I have found is , which doesn't appear to be very active; moreover, its being part of a more traditional philosophy forum might be a hindrance.

Edit: a semi-relevant example.

Comment by networked on A map of Bay Area memespace · 2013-09-26T12:05:28.619Z · LW · GW

Am I correct in assuming the color you chose for the "Startup culture" block is a reference to Y Combinator or is this a coincidence?

Comment by networked on The Mystery of the Haunted Rationalist · 2013-06-17T17:13:56.624Z · LW · GW


Comment by networked on The Mystery of the Haunted Rationalist · 2013-06-17T16:08:30.835Z · LW · GW

It's very rare that a precommitment to holding a belief in spite of the evidence is the best way to investigate a topic related to that belief.

What valid very rare cases of this can you think of? One that comes to my mind, and at least might be valid, is "faking it till you make it": people precommit to holding a belief for a while in spite of the evidence in order to improve qualities that depend on them holding that belief. Unfortunately, I wasn't able to find conclusive studies with a quick search on whether "faking it till you make it" works for psychological improvement in general. The closest I have found is a study linked from the Wikipedia page on the placebo effect, which shows a specific case in which a placebo worked without deception on the doctors' part (i.e., they told patients it was a placebo).

In my hypothetical I imagine belief in ghosts to be a belief that is very harmful conditional on it being true and you holding it. Can you suggest a better way to investigate it or do you think that it isn't a valid very rare case even though no such way exists?

(Also, in case this played some role in why my original comment got downvoted, I would like to clarify that I do not consciously hold the belief that "ghosts will try to scare you to death" or other beliefs contingent on the existence of ghosts in reality. My real belief is probably closer to "Entities that are only able to interact with people's minds make for interesting fictional scenarios.")

Edit: fixed links. Edit 2: spelling and extra ).

Comment by networked on The Mystery of the Haunted Rationalist · 2013-06-15T12:55:00.282Z · LW · GW

If I believed in ghosts and wanted to investigate a haunted mansion, I think it would be in my best interest to persuade myself temporarily not to believe in ghosts for the length of the investigation. In fact, I'd benefit only if I turned myself into a Narnia-style sceptic about them (one who wouldn't believe or alieve in them in spite of the evidence). Given what I think are the rules of how ghosts are supposed to work [1], I would assume that a ghost out to kill me couldn't do so by physical violence (e.g., lifting and throwing a kitchen knife -- unless it was a poltergeist) but would instead try to scare me to death (e.g., by causing a heart attack, a human-factor accident or suicide). Armed with that knowledge, I'd want to persuade myself to reject the belief in ghosts on a gut level so as to lower my fear and prevent death. However, by doing so I would also lose my incentive to investigate haunted mansions in the first place, so I_{pre-ghost sceptic} would have to find a way to precommit to the investigation and to revert back to believing in ghosts afterwards. A monetary bet could work for the former.

[1] As absorbed by me from popular culture. Admittedly, if I did believe in ghosts, I would probably have followed ghost lore and would now have a somewhat different set of assumptions about "ghost rules".