Open Thread: July 2010, Part 2

post by Alicorn · 2010-07-09T06:54:41.087Z · LW · GW · Legacy · 768 comments


This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.


July Part 1


Comments sorted by top scores.

comment by orthonormal · 2010-07-09T14:32:55.675Z · LW(p) · GW(p)

It seems to me that "emergence" has a useful meaning once we recognize the Mind Projection Fallacy:

We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, but we don't know how to connect one to the other. (Like "confusing", it exists in the map but not the territory.)

This matches the usage: the ideal gas laws aren't "emergent" since we know how to derive them (at a physics level of rigor) from lower-level models; however, intelligence is still "emergent" for us since we're too dumb to find the lower-level patterns in the brain which give rise to patterns like thoughts and awareness, which we have high-level heuristics for.

Thoughts? (If someone's said this before, I apologize for not remembering it.)

Replies from: Roko, Unnamed, Roko, JoshuaZ
comment by Roko · 2010-07-09T14:54:35.239Z · LW(p) · GW(p)

the ideal gas laws aren't "emergent"

No, I want my definition of "emergent" to say that the ideal gas laws are emergent properties of molecules.

Why not just say

We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description

Replies from: Liron, orthonormal
comment by Liron · 2010-07-09T16:43:47.357Z · LW(p) · GW(p)

The high-level structure shouldn't be the same as the low level structure, because I don't want to say a pile of sand emerges from grains of sand.

comment by orthonormal · 2010-07-09T15:44:42.960Z · LW(p) · GW(p)

ISTM that the present usage of "emergent" is actually pretty well-defined as a cluster, and it doesn't include the ideal gas laws. I'm offering a candidate way to cash out that usage without committing the Mind Projection Fallacy.

Replies from: Blueberry, nhamann
comment by Blueberry · 2010-07-09T16:12:52.169Z · LW(p) · GW(p)

The fallacy here is thinking there's a difference between the way the ideal gas laws emerge from particle physics, and the way intelligence emerges from neurons and neurotransmitters. I've only heard "emergent" used in the following way:

A system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, and the high-level description is not easily predictable from the low-level description

For instance, gliders moving across the screen diagonally is emergent in Conway's Life.
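For concreteness, here is a minimal Python sketch of that example (my own illustration, not part of the original discussion): the local update rule in step is the entire low-level description, yet the glider visibly translates across the grid.

```python
# Minimal sketch (illustrative only): Conway's Life on a small toroidal grid,
# seeded with a glider.  The update rule below is the whole low-level story;
# the diagonally moving glider is the high-level pattern.
def step(cells, size):
    new = set()
    for x in range(size):
        for y in range(size):
            n = sum(((x + dx) % size, (y + dy) % size) in cells
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
            if n == 3 or (n == 2 and (x, y) in cells):
                new.add((x, y))
    return new

cells, size = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}, 10  # a glider
for gen in range(8):
    print(f"gen {gen}: {sorted(cells)}")   # the live cells drift diagonally
    cells = step(cells, size)
```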

The "easily predictable" part is what makes emergence in the map, not the territory.

Replies from: orthonormal
comment by orthonormal · 2010-07-09T20:55:14.868Z · LW(p) · GW(p)

Er, did you read the grandparent comment?

Replies from: Blueberry
comment by Blueberry · 2010-07-09T21:01:13.352Z · LW(p) · GW(p)

Yes. My point was that emergence isn't about what we know how to derive from lower-level descriptions, it's about what we can easily see and predict from lower-level descriptions. Like Roko, I want my definition of emergence to include the ideal gas laws (and I haven't heard the word used to exclude them).

Also see this comment.

comment by nhamann · 2010-07-09T16:30:53.839Z · LW(p) · GW(p)

For what it's worth, Cosma Shalizi's notebook page on emergence has a very reasonable discussion of emergence, and he actually mentions macro-level properties of gas as a form of "weak" emergence:

The weakest sense [i]s also the most obvious. An emergent property is one which arises from the interaction of "lower-level" entities, none of which show it. No reductionism worth bothering with would be upset by this. The volume of a gas, or its pressure or temperature, even the number of molecules in the gas, are not properties of any individual molecule, though they depend on the properties of those individuals, and are entirely explicable from them; indeed, predictable well in advance.

To define emergence as it is normally used, he adds the criterion that "the new property could not be predicted from a knowledge of the lower-level properties," which looks to be exactly the definition you've chosen here (sans map/territory terminology).

Replies from: Morendil
comment by Morendil · 2010-07-09T17:36:42.690Z · LW(p) · GW(p)

Let's talk examples. One of my favorite examples to think about is Langton's Ant.

If we taboo "emergence" what do we think is going on with Langton's Ant?

Replies from: nhamann
comment by nhamann · 2010-07-09T19:40:19.532Z · LW(p) · GW(p)

If we taboo "emergence" what do we think is going on with Langton's Ant?

We have one description of the ant/grid system in Langton's Ant: namely, the rules which totally govern the behavior of the system. We have another description of the system, however: the recurring "highway" pattern that apparently results from every initial configuration tested. These two descriptions seem to be connected, but we're not entirely sure how. (The only explanation we have is akin to this: Q: Why does every initial configuration eventually result in the highway pattern? A: The rules did it.) That is, we have a gap in our map.

Since the rules, which we understand fairly well, seem on some intuitive sense to be at a "lower level" of description than the pattern we observe, and since the pattern seems to depend on the "low-level" rules in some way we can't describe, some people call this gap "emergence."
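For anyone who wants to watch that gap in action, here is a minimal Python sketch (mine, not part of the original comment): the handful of rules below is the entire low-level description, and the highway shows up as a steady diagonal drift after roughly 10,000 steps.

```python
# Minimal sketch (illustrative only): Langton's Ant.  Two rules, one ant.
def langtons_ant(steps):
    black = set()          # cells currently black
    x = y = 0              # ant position
    dx, dy = 0, -1         # current heading
    for t in range(steps):
        if (x, y) in black:
            dx, dy = dy, -dx          # turn one way, flip cell to white
            black.discard((x, y))
        else:
            dx, dy = -dy, dx          # turn the other way, flip cell to black
            black.add((x, y))
        x, y = x + dx, y + dy
        if t % 2500 == 0:
            print(f"step {t:6d}: ant at ({x}, {y})")
    return black

langtons_ant(15000)   # after ~10,000 steps the printed positions start
                      # marching steadily along a diagonal: the highway
```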

Replies from: apophenia
comment by apophenia · 2010-07-10T22:16:02.685Z · LW(p) · GW(p)

I recall hearing, although I can't find a link, that the Langton Ant problem has been solved recently. That is, someone has given a formal proof that every ant results in the highway pattern.

comment by Unnamed · 2010-07-10T17:50:18.072Z · LW(p) · GW(p)

It's worth checking on the Stanford Encyclopedia of Philosophy when this kind of issue comes up. It looks like this view - emergent=hard to predict from low-level model - is pretty mainstream.

The first paragraph of the article on emergence says that it's a controversial term with various related uses, generally meaning that some phenomenon arises from lower-level processes but is somehow not reducible to them. At the start of section 2 ("Epistemological Emergence"), the article says that the most popular approach is to "characterize the concept of emergence strictly in terms of limits on human knowledge of complex systems." It then gives a few different variations on this type of view, like that the higher-level behavior could not be predicted "practically speaking; or for any finite knower; or for even an ideal knower."

There's more there, some of which seems sensible and some of which I don't understand.

Replies from: orthonormal
comment by orthonormal · 2010-07-10T23:55:39.458Z · LW(p) · GW(p)

Many thanks!

comment by Roko · 2010-07-09T23:55:25.192Z · LW(p) · GW(p)

It seems problematic that as soon as you work out how to derive high-level behavior from low-level behavior, you have to stop calling it emergent. It seems even more problematic that two people can look at the same phenomenon and disagree on whether it's "emergent" or not, because Bob knows the relevant derivation of high-level behavior from low-level behavior, but Alice doesn't, even if Alice knows that Bob knows.

Perhaps we could refine this a little, and make emergence less subjective, but still avoid mind-projection-fallacy.

We say that a system X has emergent behavior if there exists an exact and simple low-level description and an inexact but easy-to-compute high-level description, and the derivation of the high-level laws from the low-level ones is much more complex than either. [In the technical sense of Kolmogorov complexity] (Like "has chaotic dynamics", it is a property of a system.)
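One way to write that proposal down (my formalization, not Roko's own wording; L, H, and D are labels I am introducing, and K denotes Kolmogorov complexity):

```latex
% L = the exact, simple low-level description of system X
% H = the inexact but easy-to-compute high-level description
% D = the shortest derivation of H's laws from L
% X is "emergent" when both descriptions are short but the bridge is long:
\[
  K(L) \text{ small}, \qquad K(H) \text{ small}, \qquad
  K(D) \;\gg\; \max\bigl(K(L),\, K(H)\bigr).
\]
```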

Replies from: orthonormal, None
comment by orthonormal · 2010-07-10T00:17:18.230Z · LW(p) · GW(p)

I dunno, I kind of like the idea that as science advances, particular phenomena stop being emergent. I'd be very glad if "emergent" changed from a connotation of semantic stop-sign to a connotation of unsolved problem.

comment by [deleted] · 2010-07-10T11:36:54.737Z · LW(p) · GW(p)

By your definition, is the empirical fact that one tenth of the digits of pi are 1s emergent behavior of pi?

I may not understand the work that "low-level" and "high-level" are doing in this discussion.

On the length of derivations, here are some relevant Gödel clichés: a system X (for instance, arithmetic) often obeys laws that are underivable. And it often obeys derivable laws of length n whose shortest derivation has length busy-beaver-of-n.

("Über die Länge von Beweisen", i.e., "On the Length of Proofs", is the title of a famous short Gödel paper. He revisits the topic in a famous letter to von Neumann, available here: http://rjlipton.wordpress.com/the-gdel-letter/)

Replies from: apophenia, Kingreaper, Roko
comment by apophenia · 2010-07-10T22:11:27.275Z · LW(p) · GW(p)

Just a pedantic note: pi has not been proven normal. Maybe one fifth of the digits are 1s.

Replies from: None
comment by [deleted] · 2010-07-10T23:56:15.905Z · LW(p) · GW(p)

I'll stick to it. It's easier to perform experiments than it is to give mathematical proofs. If experiments can give strong evidence for anything (I hope they can!), then this data can give strong evidence that pi is normal: http://www.piworld.de/pi-statistics/pist_sdico.htm

Maybe past ten-to-the-one-trillion digits, the statistics of pi are radically different. Maybe past ten-to-the-one-trillion meters, the laws of physics are radically different.

Replies from: wedrifid
comment by wedrifid · 2010-07-11T01:48:56.475Z · LW(p) · GW(p)

Maybe past ten-to-the-one-trillion digits, the statistics of pi are radically different. Maybe past ten-to-the-one-trillion meters, the laws of physics are radically different.

The latter case seems more likely to me.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-07-11T15:21:23.446Z · LW(p) · GW(p)

I was just thinking about the latter case, actually. If g equalled G · (m1 ^ (1 + 10^-30)) · (m2 ^ (1 + 10^-30)) / r^2, would we know about it?
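A rough numerical sketch of how small that deviation would be (my own back-of-the-envelope, not part of the thread; it uses mpmath because double precision cannot resolve an exponent of 1 + 10^-30, and the Earth and Sun masses are the usual textbook values):

```python
# Illustrative only: relative change in the Earth-Sun force if each mass
# carried an exponent of 1 + 1e-30 instead of exactly 1.
from mpmath import mp, mpf, exp, log

mp.dps = 50
eps = mpf("1e-30")
m_earth = mpf("5.972e24")   # kg
m_sun   = mpf("1.989e30")   # kg

# F_new / F_old = m1**eps * m2**eps = exp(eps * (ln m1 + ln m2))
ratio = exp(eps * (log(m_earth) + log(m_sun)))
print(ratio - 1)            # ~1.3e-28, about 23 orders of magnitude below
                            # the ~1e-5 relative uncertainty in G itself
```

On this sketch the answer looks like "no": the deviation is utterly swamped by the measurement uncertainty in G.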

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-11T15:45:28.244Z · LW(p) · GW(p)

Well, the force of gravity isn't exactly what you get from Newton's laws anyway (although most of the easily detectable differences, like that in the orbit of Mercury, are better thought of as due to relativity's effect on time than as a change in g). I'm not actually sure how gravitational force could be non-additive with respect to mass. One would then have the problem of deciding what constitutes a single object. A macroscopic object isn't a single object in any sense useful to physics. Would this formula, for example, calculate the gravity of Earth as a large collection of particles or as all of them together?

But the basic point, that there could be weird small errors in our understanding of the laws of physics, is always an issue. To use a slightly more plausible example: if, say, the force of gravity on baryons were slightly stronger than that on leptons (slightly different values of G), we'd be unlikely to notice. I don't think we'd notice even if it were in the 2nd or 3rd decimal of G (partially because G is such a hard constant to measure).

comment by Kingreaper · 2010-07-11T02:33:37.406Z · LW(p) · GW(p)

By your definition, is the empirical fact that one tenth of the digits of pi are 1s emergent behavior of pi?

IMO, that would be emergent behaviour of mathematics, rather than of pi.

Pi isn't a system in itself as far as I can see.

Replies from: None
comment by [deleted] · 2010-07-11T02:50:44.524Z · LW(p) · GW(p)

I have in mind a system, for instance a computer program, that computes pi digit-by-digit. There are features of such a computer program that you can notice from its output, but not (so far as anyone knows) from its code, like the frequency of 1s.
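A minimal sketch of such a program (mine, not the commenter's; it assumes the mpmath library is available): nothing in these few lines makes the near-uniform digit frequencies obvious in advance, yet they show up in the output.

```python
# Illustrative only: count digit frequencies in the first ~100,000 digits of pi.
from collections import Counter
from mpmath import mp, nstr

mp.dps = 100_000                       # working precision in decimal digits
digits = nstr(+mp.pi, mp.dps)[2:]      # drop the leading "3."
counts = Counter(digits)
for d in "0123456789":
    print(d, counts[d] / len(digits))  # each frequency comes out near 0.10
```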

comment by Roko · 2010-07-11T12:12:51.638Z · LW(p) · GW(p)

If you had some physical system that computed digit frequencies of Pi, I'd definitely want to call the fact that the fractions were very close to 1/10 emergent behavior.

Does anyone disagree?

Replies from: wedrifid
comment by wedrifid · 2010-07-11T15:39:39.347Z · LW(p) · GW(p)

I can't disagree about what you want, but I myself don't really see the point in using the word emergent for a straightforward property of irrational numbers. I wouldn't go so far as to say the term is useless, but whatever use it could have would need to describe something more complex: properties that are caused by simpler rules.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-11T16:00:05.491Z · LW(p) · GW(p)

This isn't a general property of irrational numbers, although with probability 1 any irrational number will have this property. In fact, any random real number will have this property with probability 1 (rational numbers have measure 0 since they form a countable set). This is pretty easy to prove if one is familiar with Lebesgue measure.

There are irrational numbers which do not share this property. For example, .101001000100001000001... is irrational and does not share this property.
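A quick numerical illustration of that counterexample (my own sketch): build the first n digits of 0.101001000100001... and count the 1s; the digit 1 occurs only about sqrt(2n) times among the first n places, so its frequency goes to 0 rather than 1/10.

```python
# Illustrative only: digit frequency of 1 in 0.101001000100001...
def digits_of_counterexample(n):
    out, gap = [], 1
    while len(out) < n:
        out.append("1")
        out.extend("0" * gap)   # each 1 is followed by one more 0 than the last
        gap += 1
    return out[:n]

for n in (100, 10_000, 1_000_000):
    d = digits_of_counterexample(n)
    print(n, d.count("1") / n)  # 0.13, 0.014, 0.0014... -> 0
```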

Replies from: wedrifid, jimrandomh
comment by wedrifid · 2010-07-11T16:54:44.324Z · LW(p) · GW(p)

This isn't a general property of irrational numbers, although with probability 1 any irrational number will have this property.

True enough. It would seem that "irrational number" is not the correct term for the set I refer to.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-11T16:59:32.121Z · LW(p) · GW(p)

The property you are looking for is normality to base 10. See normal number.

ETA: Actually, you want simple normality to base 10, which is slightly weaker.

comment by jimrandomh · 2010-07-11T17:04:18.936Z · LW(p) · GW(p)

This isn't a general property of irrational numbers, although with probability 1 any irrational number will have this property.

Any irrational number drawn from what distribution? There are plenty of distributions that you could draw irrational numbers from which do not have this property, and which contain the same number of numbers in them. For example, the set of all irrational numbers in which every other digit is zero has the same cardinality as the set of all irrational numbers.

Replies from: Roko
comment by Roko · 2010-07-11T17:26:53.605Z · LW(p) · GW(p)

I'm presuming he's talking about measure, using the standard Lebesgue measure on R

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-11T17:45:19.926Z · LW(p) · GW(p)

Yes, although generally when asking these sorts of questions one looks at the standard Lebesgue measure on [0,1] or [0,1) since that's easier to normalize. I've been told that this result also holds for any bell-curve distribution centered at 0, but I haven't seen a proof of that and it isn't at all obvious to me how to construct one.

Replies from: orthonormal
comment by orthonormal · 2010-07-11T23:47:15.904Z · LW(p) · GW(p)

Well, the quick way is to note that the bell-curve measure is absolutely continuous with respect to Lebesgue measure, as is any other measure given by an integrable distribution function on the real line. (If you want, you can do this by hand as well, comparing the probability of a small bounded open set in the bell curve distribution with its Lebesgue measure, taking limits, and then removing the condition of boundedness.)
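Compressed into symbols (my paraphrase of the argument, with N the set of simply normal numbers, λ Lebesgue measure, and μ any measure given by a density f, e.g. a Gaussian):

```latex
% Borel: lambda-almost every real is simply normal, so the complement of N
% is Lebesgue-null; any measure defined by a density inherits this.
\[
  \lambda(\mathbb{R}\setminus N) = 0
  \;\Longrightarrow\;
  \mu(\mathbb{R}\setminus N) = \int_{\mathbb{R}\setminus N} f(x)\,dx = 0
  \;\Longrightarrow\;
  \mu(N) = 1.
\]
```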

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-12T00:00:53.674Z · LW(p) · GW(p)

Excellent, yes that does work. Thanks very much!

comment by JoshuaZ · 2010-07-09T14:38:59.304Z · LW(p) · GW(p)

The only problem with that seems to be that when people talk about emergent behavior they seem to be more often than not talking about "emergence" as a property of the territory, not a property of the map. So for example, someone says that "AI will require emergent behavior"- that's a claim about the territory. Your definition of emergence seems like a reasonable and potentially useful one but one would need to be careful that the common connotations don't cause confusion.

Replies from: orthonormal
comment by orthonormal · 2010-07-09T15:39:54.335Z · LW(p) · GW(p)

I agree. But given that outsiders use the term all the time, and given that they can point to a reasonably large cluster of things (which are adequately contained in the definition I offered), it might be more helpful to say that emergence is a statement of a known unknown (in particular, a missing reduction between levels) than to refuse to use the term entirely, which can appear to be ignoring phenomena.

comment by SilasBarta · 2010-07-28T19:08:02.553Z · LW(p) · GW(p)

Why are Roko's posts deleted? Every comment or post he made since April last year is gone! WTF?

Edit: It looks like this discussion sheds some light on it. As best I can tell, Roko said something that someone didn't want to get out, so someone (maybe Roko?) deleted a huge chunk of his posts just to be safe.

Replies from: Roko, Eliezer_Yudkowsky, JamesAndrix
comment by Roko · 2010-07-28T20:01:29.665Z · LW(p) · GW(p)

I've deleted them myself. I think that my time is better spent looking for a quant job to fund x-risk research than on LW, where it seems I am actually doing active harm by staying rather than merely wasting time. I must say, it has been fun, but I think I am in the region of negative returns, not just diminishing ones.

Replies from: Vladimir_Nesov, Clippy, JoshuaZ, rhollerith_dot_com
comment by Vladimir_Nesov · 2010-07-28T22:38:29.052Z · LW(p) · GW(p)

So you've deleted the posts you've made in the past. This is harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable.

For example, consider these posts, and comments on them, that you deleted:

I believe it's against community blog ethics to delete posts in this manner. I'd like them restored.

Edit: Roko accepted this argument and said he's OK with restoring the posts under an anonymous username (if it's technically possible).

Replies from: Blueberry, None, cousin_it, rhollerith_dot_com, SforSingularity
comment by Blueberry · 2010-07-29T10:36:06.493Z · LW(p) · GW(p)

And I'd like the post of Roko's that got banned restored. If I were Roko I would be very angry about having my post deleted because of an infinitesimal far-fetched chance of an AI going wrong. I'm angry about it now and I didn't even write it. That's what was "harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable." That's what should be against the blog ethics.

I don't blame him for removing all of his contributions after his post was treated like that.

Replies from: katydee
comment by katydee · 2010-07-29T11:27:28.522Z · LW(p) · GW(p)

I understand why you might be angry, but please think of the scale involved here. If any particular post or comment increases the chance of an AI going wrong by one trillionth of a percent, it is almost certainly not worth it.

Replies from: None, XiXiDu, XiXiDu
comment by [deleted] · 2010-07-29T13:23:58.256Z · LW(p) · GW(p)

This is silly - there's simply no way to assign a probability of his posts increasing the chance of UFAI with any degree of confidence, to the point where I doubt you could even get the sign right.

For example, deleting posts because they might add an infinitesimally small amount to the probability of UFAI being created makes this community look slightly more like a bunch of paranoid nutjobs, which overall will hinder its ability to accomplish its goals and makes UFAI more likely.

From what I understand, the actual banning was due to its likely negative effects on the community, as Eliezer has seen similar things on the SL4 mailing list - which I won't comment on. But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.

Replies from: fortyeridania, JamesAndrix, katydee, ocr-fork
comment by fortyeridania · 2010-12-10T15:37:25.802Z · LW(p) · GW(p)

Thank you for pointing out the difficulty of quantifying existential risks posed by blog posts.

The danger from deleting blog posts is much more tangible, and the results of censorship are conspicuous. You have pointed out two such dangers in your comment--(1) LWers will look nuttier and (2) it sets a bad precedent.

(Of course, if there is a way to quantify the marginal benefit of an LW post, then there is also a way to quantify the marginal cost from a bad one--just reverse the sign, and you'll be right on average.)

Replies from: TheOtherDave, jimrandomh
comment by TheOtherDave · 2010-12-10T15:44:48.228Z · LW(p) · GW(p)

(Of course, if there is a way to quantify the marginal benefit of an LW post, then there is also a way to quantify the marginal cost from a bad one--just reverse the sign, and you'll be right on average.)

That makes sense for evaluating the cost/benefit to me of reading a post. But if I want to evaluate the overall cost/benefit of the post itself, I should also take into account the number of people who read one vs. the other. Given the ostensible purpose of karma and promotion, these ought to be significantly different.

Replies from: fortyeridania
comment by fortyeridania · 2010-12-11T03:47:29.087Z · LW(p) · GW(p)

Are you saying: (1) A bad post is less likely to be read because it will not be promoted and it will be downvoted; (2) Because bad posts are less read, they have a smaller cost than good posts' benefits?

I think I agree with that. I had not considered karma and promotion, which behave like advertisements in their informational value, when making that comment.

But I think that what you're saying only strengthens the case against moderators' deleting posts against the poster's will because it renders the objectionable material less objectionable.

Replies from: TheOtherDave
comment by TheOtherDave · 2010-12-11T04:18:30.837Z · LW(p) · GW(p)

Yes, that's what I'm saying.

And I'm not attempting to weaken or strengthen the case against anything in particular.

comment by jimrandomh · 2010-12-10T15:46:44.660Z · LW(p) · GW(p)

(Of course, if there is a way to quantify the marginal benefit of an LW post, then there is also a way to quantify the marginal cost from a bad one--just reverse the sign, and you'll be right on average.)

Huh? Why should these be equal? Why should they even be on the same order of magnitude? For example, an advertising spam post that gets deleted does orders of magnitude less harm than an average good post does good. And a post that contained designs for a UFAI would do orders of magnitude more harm.

Replies from: fortyeridania
comment by fortyeridania · 2010-12-11T03:48:22.259Z · LW(p) · GW(p)

You are right to say that it's possible to have extremely harmful blog posts, and it is also possible to have mostly harmless blog posts. I also agree that the examples you've cited are apt.

However, it is also possible to have extremely good blog posts (such as one containing designs for a tool to prevent the rise of UFAI or that changed many powerful people's minds for the better) and to have barely beneficial ones.

Do we have a reason to think that the big bads are more likely than big goods? Or that a few really big bads are more likely than many moderate goods? I think that's the kind of reason that would topple what I've said.

One of my assumptions here is that whether a post is good or bad does not change the magnitude of its impact. The magnitude of its positivity or negativity might change the magnitude of its impact, but why should the sign?

I'm sorry if I've misunderstood your criticism. If I have, please give me another chance.

comment by JamesAndrix · 2010-08-06T08:17:15.846Z · LW(p) · GW(p)

http://www.damninteresting.com/this-place-is-not-a-place-of-honor

Note to reader: This thread is curiosity inducing, this is affecting your judgement. You might think you can compensate for this bias but you probably won't in actuality. Stop reading anyway. Trust me on this. Edit: Me, and Larks, and ocr-fork, AND ROKO and I think [some but not all others]

I say for now because those who know about this are going to keep looking at it and determine it safe/rebut it/make it moot. Maybe it will stay dangerous for a long time, I don't know, but there seems to be a decent chance that you'll find out about it soon enough.

Don't assume it's Ok because you understand the need for friendliness and aren't writing code. There are no secrets to intelligence in hidden comments. (Though I didn't see the original thread, I think I figured it out and it's not giving me any insights.)

Don't feel left out or not smart for not 'getting it'; we only 'got it' because it was told to us. Try to compensate for your ego. If you fail, stop reading anyway.

Ab ernyyl fgbc ybbxvat. Phevbfvgl erfvfgnapr snvy.

http://www.damninteresting.com/this-place-is-not-a-place-of-honor

comment by katydee · 2010-07-29T20:01:28.015Z · LW(p) · GW(p)

Sorry, forgot that not everyone saw the thread in question. Eliezer replied to the original post and explicitly said that it was dangerous and should not be published. I am willing to take his word for it, as he knows far more about AI than I.

Replies from: murat
comment by murat · 2010-08-01T01:09:56.408Z · LW(p) · GW(p)

I have not seen the original post, but can't someone simply post it somewhere else? Is deleting it from here really a solution (assuming there's real danger)? BTW, I can't really see how a post on a board can be dangerous in the way implied here.

Replies from: AngryParsley, thomblake
comment by AngryParsley · 2010-08-01T04:08:11.993Z · LW(p) · GW(p)

The likely explanation is that people who read the article agreed it was dangerous. If some of them had decided the censorship was unjustified, LW might look like Digg after the AACS key controversy.

Replies from: JoshuaZ, timtyler
comment by JoshuaZ · 2010-08-01T04:46:39.556Z · LW(p) · GW(p)

I read the article, and it struck me as dangerous. I'm going to be somewhat vague and say that the article caused two potential forms of danger, one unlikely but extremely bad if it occurs and the other less damaging but having evidence (in the article itself) that the damage type had occurred. FWIW, Roko seemed to agree after some discussion that spreading the idea did pose a real danger.

comment by timtyler · 2010-09-09T20:25:46.606Z · LW(p) · GW(p)

If some of them had decided the censorship was unjustified, LW might look like Digg after the AACS key controversy.

In fact, there are hundreds of deleted articles on LW. The community is small enough for it to be manually policed - it seems.

comment by thomblake · 2010-08-13T16:35:33.729Z · LW(p) · GW(p)

I have not seen the original post, but can't someone simply post it somewhere else?

Sadly, those who saw the original post have declined to share.

comment by ocr-fork · 2010-07-29T16:43:10.668Z · LW(p) · GW(p)

But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.

I've read the post. That excuse is actually relevant.

comment by XiXiDu · 2010-08-05T09:56:27.329Z · LW(p) · GW(p)

Something really crazy is going on here.

You people have fabricated a fantastic argument for all kinds of wrongdoing and idiot decisions, "it could increase the chance of an AI going wrong...".

"I deleted my comment because it was maybe going to increase the chance of an AI going wrong..."

"Hey, I had to punch that guy in the face, he was going to increase the chance of an AI going wrong by uttering something stupid..."

"Sorry, I had to exterminate those people because they were going to increase the chance of an AI going wrong."

I'm beginning to wonder if not unfriendly AI but rather EY and this movement might be the bigger risk.

Why would I care about some feverish dream of a galactic civilization if we have to turn into our own oppressor and that of others? Screw you. That’s not what I want. Either I win like this, or I don’t care to win at all. What’s winning worth, what’s left of a victory, if you have to relinquish all that you value? That’s not winning, it’s worse than losing, it means to surrender to mere possibilities, preemptively.

Replies from: Blueberry, XiXiDu, katydee, Kevin
comment by Blueberry · 2010-08-05T10:46:58.981Z · LW(p) · GW(p)

This is why deleting the comments was the bigger risk: doing so makes people think (incorrectly) that EY and this movement are the bigger risk, instead of unfriendly AI.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-05T11:09:20.728Z · LW(p) · GW(p)

The problem is, are you people sure you want to take this route? If you are serious about all this, what would stop you from killing a million people if your probability estimates showed that there was a serious risk posed by those people?

If you read this comment thread you'll see what I mean and what danger there might be posed by this movement, 'follow Eliezer', 'donating as much as possible to SIAI', 'kill a whole planet', 'afford to leave one planet's worth', 'maybe we could even afford to leave their brains unmodified'...lesswrong.com sometimes makes me feel more than a bit uncomfortable, especially if you read between the lines.

Yes, you might be right about all the risks in question. But you might be wrong about the means of stopping the same.

Replies from: Blueberry
comment by Blueberry · 2010-08-06T09:59:19.537Z · LW(p) · GW(p)

I'm not sure if this was meant for me; I agree with you about free speech and not deleting the posts. I don't think it means EY and this movement are a great danger, though. Deleting the posts was the wrong decision, and hopefully it will be reversed soon, but I don't see that as indicating that anyone would go out and kill people to help the Singularity occur. If there really were a Langford Basilisk, say, a joke that made you die laughing, I would want it removed.

As to that comment thread: Peer is a very cool person and a good friend, but he is a little crazy and his beliefs and statements shouldn't be taken to reflect anything about anyone else.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-06T10:40:08.573Z · LW(p) · GW(p)

I know, it wasn't my intention to discredit Peer, I quite like his ideas. I'm probably more crazy than him anyway.

But if I can come up with such conclusions, who else will? Also, why isn't anyone out to kill people, or will be? I'm serious, why not? Just imagine EY found out that we could be reasonably sure that, for example, Google would let loose a rogue AI soon. Given how the LW audience is inclined to act upon 'mere' probability estimates, how wouldn't it be appropriate to bomb Google, given that was the only way to stop them in due time from turning the world into a living hell? And how isn't this meme, given the right people and circumstances, a great danger? Sure, me saying EY might be a greater danger was nonsense, just said to provoke some response. By definition, not much could be worse than uFAI.

This incident is simply a good situation to extrapolate from. If a thought-experiment can be deemed to be dangerous enough to be not just censored and deleted but for people to be told not even to seek any knowledge of it, much less discuss it, I'm wondering about the possible reaction to an imminent and tangible danger.

comment by XiXiDu · 2010-08-05T14:03:25.861Z · LW(p) · GW(p)

Before someone accuses me of it, I want to address the point of people suffering psychological malaise from related information.

Is active denial of information an appropriate handling of serious personal problems? If there are people who suffer from mental illness due to mere thought-experiments, I'm sorry but I think along the lines of the proponents that support the deletion of information for reasons of increasing the chance of an AI going wrong. Namely, as you abandon freedom of expression to an extent, I'm advocating to draw the line between the balance of freedom of information and protection of individual well-being at this point. That is not to say that, for example, I'd go all the way and advocate the depiction of cruelty to children.

A delicate issue indeed, but what one has to keep care of is not to slide into extremism that causes a relinquishment of values it is meant to serve and protect.

comment by katydee · 2010-08-06T02:08:17.911Z · LW(p) · GW(p)

Maybe you should read the comments in question before you make this sort of post?

comment by Kevin · 2010-08-05T11:03:35.268Z · LW(p) · GW(p)

This really isn't worth arguing and there isn't any reason to be angry...

Replies from: wedrifid, XiXiDu
comment by wedrifid · 2010-08-05T12:41:11.684Z · LW(p) · GW(p)

This really isn't worth arguing and there isn't any reason to be angry...

You are wrong on both. There is strong signalling going on that gives good evidence regarding both Eliezer's intent and his competence.

What Roko said matters little, what Eliezer said (and did) matters far more. He is the one trying to take over the world.

comment by XiXiDu · 2010-08-05T11:28:13.805Z · LW(p) · GW(p)

I don't consider frogs to be objects of moral worth. -- Eliezer Yudkowsky

Yeah ok, frogs...but wait! This is the person who's going to design the moral seed of our coming god-emperor. I'm not sure if everyone here is aware of the range of consequences while using the same as corroboration of the correctness of pursuing this route. That is, are we going to replace unfriendly AI with unknown EY? Are we yet at the point that we can tell EY is THE master who'll decide upon what's reasonable to say in public and what should be deleted?

Ask yourself if you really, seriously believe in the ideas posed on LW, enough to follow them into the realms of radical oppression in the name of good and evil.

Replies from: wedrifid, Kevin, jimrandomh
comment by wedrifid · 2010-08-05T12:42:02.416Z · LW(p) · GW(p)

There are some good questions buried in there that may be worth discussing in more detail at some point.

comment by Kevin · 2010-08-05T11:41:17.545Z · LW(p) · GW(p)

I am vaguely confused by your question and am going to stop having this discussion.

comment by jimrandomh · 2010-08-05T12:30:42.470Z · LW(p) · GW(p)

Before getting angry, it's always a good idea to check whether you're confused. And you are.

comment by XiXiDu · 2010-08-04T19:23:07.905Z · LW(p) · GW(p)

Something really crazy is going on here.

You people have fabricated a fantastic argument for all kinds of wrongdoing and idiot decisions, "it could increase the chance of an AI going wrong...".

"I deleted my comment because it was maybe going to increase the chance of an AI going wrong..."

"Hey, I had to punch that guy in the face, he was going to increase the chance of an AI going wrong by uttering something stupid..."

"Sorry, I had to exterminate those people because they were going to increase the chance of an AI going wrong."

I'm beginning to wonder if not unfriendly AI but rather EY and this movement might be the bigger risk.

Why would I care about some feverish dream of a galactic civilization if we have to turn into our own oppressor and that of others? Screw you. That’s not what I want. Either I win like this, or I don’t care to win at all. What’s winning worth, what’s left of a victory, if you have to relinquish all that you value? That’s not winning, it’s worse than losing, it means to surrender to mere possibilities, preemptively.

P.S. I haven't read Roko's post and comments yet but I got a backup of every single one of them. And not just Roko's, all of LW including EY deleted comments. Are you going to assassinate me now?

comment by [deleted] · 2010-07-29T05:37:07.177Z · LW(p) · GW(p)

It's also generally impolite (though completely within the TOS) to delete a person's contributions according to some arbitrary rules. Given that Roko is the seventh highest contributor to the site, I think he deserves some more respect. Since Roko was insulted, there doesn't seem to be a reason for him to act nicely to everyone else. If you really want the posts restored, it would probably be more effective to request an admin to do so.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-29T07:56:03.060Z · LW(p) · GW(p)

Since Roko was insulted, there doesn't seem to be a reason for him to act nicely to everyone else. If you really want the posts restored, it would probably be more effective to request an admin to do so.

I didn't insult Roko. The decision, and justification given, seem wholly irrational to me (which is separate from claiming a right to demand that decision altered).

comment by cousin_it · 2010-07-29T09:11:13.315Z · LW(p) · GW(p)

It's ironic that, from a timeless point of view, Roko has done well. Future copies of Roko on LessWrong will not receive the same treatment as this copy did, because this copy's actions constitute proof of what happens as a result.

(This comment is part of my ongoing experiment to explain anything at all with timeless/acausal reasoning.)

Replies from: bogus, wedrifid
comment by bogus · 2010-07-29T09:54:42.867Z · LW(p) · GW(p)

What "treatment" did you have in mind? At best, Roko made a honest mistake, and the deletion of a single post of his was necessary to avoid more severe consequences (such as FAI never being built). Roko's MindWipe was within his rights, but he can't help having this very public action judged by others.

What many people will infer from this is that he cares more about arguing for his position (about CEV and other issues) than honestly providing info, and now that he has "failed" to do that he's just picking up his toys and going home.

comment by wedrifid · 2010-09-25T07:17:57.358Z · LW(p) · GW(p)

This comment is part of my ongoing experiment to explain anything at all with timeless/acausal reasoning.

I just noticed this. A brilliant disclaimer!

comment by RHollerith (rhollerith_dot_com) · 2010-07-29T00:11:27.372Z · LW(p) · GW(p)

Parent is inaccurate: although Roko's comments are not, Roko's posts (i.e., top-level submissions) are still available, as are their comment sections minus Roko's comments (but Roko's name is no longer on them and they are no longer accessible via /user/Roko/ URLs).

Replies from: RobinZ
comment by RobinZ · 2010-07-29T03:30:50.042Z · LW(p) · GW(p)

Not via user/Roko or via /tag/ or via /new/ or via /top/ or via / - they are only accessible through direct links saved by previous users, and that makes them much harder to stumble upon. This remains a cost.

Replies from: None
comment by [deleted] · 2010-08-18T02:36:24.223Z · LW(p) · GW(p)

Could the people who have such links post them here?

comment by SforSingularity · 2010-08-03T11:08:35.300Z · LW(p) · GW(p)

I don't really see what the fuss is. His articles and comments were mediocre at best.

comment by Clippy · 2010-07-28T23:32:11.284Z · LW(p) · GW(p)

I understand. I've been thinking about quitting LessWrong so that I can devote more time to earning money for paperclips.

Replies from: jsalvatier
comment by jsalvatier · 2010-08-19T15:12:37.838Z · LW(p) · GW(p)

lol

comment by JoshuaZ · 2010-07-28T23:59:54.568Z · LW(p) · GW(p)

I'm deeply confused by this logic. There was one post where due to a potentially weird quirk of some small fraction of the population, reading that post could create harm. I fail to see how the vast majority of other posts are therefore harmful. This is all the more the case because this breaks the flow of a lot of posts and a lot of very interesting arguments and points you've made.

ETA: To be more clear, leaving LW doesn't mean you need to delete the posts.

Replies from: daedalus2u, EStokes
comment by daedalus2u · 2010-07-29T00:07:22.284Z · LW(p) · GW(p)

I am disappointed. I have just started on LW, and found many of Roko's posts and comments interesting, consilient with my current thinking, and a useful bridge between aspects of LW that are less consilient. :(

comment by EStokes · 2010-07-30T12:41:36.952Z · LW(p) · GW(p)

There was one post that could create harm.

FTFY

comment by RHollerith (rhollerith_dot_com) · 2010-07-29T00:38:10.507Z · LW(p) · GW(p)

Allow me to provide a little context by quoting from a comment, now deleted, Eliezer made this weekend in reply to Roko and clearly addressed to Roko:

I don't usually talk like this, but I'm going to make an exception for this case.

Listen to me very closely, you idiot.

[paragraph entirely in bolded caps.]

[four paragraphs of technical explanation.]

I am disheartened that people can be . . . not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.

This post was STUPID.

Although it does not IMHO make it praiseworthy, the above quote probably makes Roko's decision to mass delete his comments more understandable on an emotional level.

In defense of Eliezer, the occasion of Eliezer's comment was one in which IMHO strong emotion and strong language might reasonably be seen as appropriate.

If either Roko or Eliezer wants me to delete (part or all of) this comment, I will.

EDIT: added the "I don't usually talk like this" paragraph to my quote in response to criticism by Aleksei.

Replies from: cousin_it, Aleksei_Riikonen, SilasBarta, JoshuaZ
comment by cousin_it · 2010-07-29T09:06:32.219Z · LW(p) · GW(p)

I'm not them, but I'd very much like your comment to stay here and never be deleted.

Replies from: timtyler, None
comment by timtyler · 2010-09-09T20:27:16.832Z · LW(p) · GW(p)

I'd very much like your comment to stay here and never be deleted.

Your up-votes didn't help, it seems.

Replies from: cousin_it
comment by cousin_it · 2010-09-09T20:33:36.390Z · LW(p) · GW(p)

Woah.

Thanks for alerting me to this fact, Tim.

comment by [deleted] · 2010-07-29T14:05:23.246Z · LW(p) · GW(p)

Out of curiosity, what's the purpose of the banning? Is it really assumed that banning the post will mean it can't be found in the future via other means or is it effectively a punishment to discourage other people from taking similar actions in the future?

comment by Aleksei_Riikonen · 2010-07-30T13:18:15.423Z · LW(p) · GW(p)

Does not seem very nice to take such an out-of-context partial quote from Eliezer's comment. You could have included the first paragraph, where he commented on the unusual nature of the language he's going to use now (the comment indeed didn't start off as you here implied), and also the later parts where he again commented on why he thought such unusual language was appropriate.

comment by SilasBarta · 2010-07-29T01:19:02.466Z · LW(p) · GW(p)

I'm still having trouble seeing how so much global utility could be lost because of a short blog comment. If your plans are that brittle, with that much downside, I'm not sure security by obscurity is such a wise strategy either...

comment by JoshuaZ · 2010-07-29T01:23:11.709Z · LW(p) · GW(p)

The major issue as I understand it wasn't the global utility problem but the issue that when Roko posted the comment he knew that some people were having nightmares about the scenario in question. Presumably increasing the set of people who are nervous wrecks is not good.

Replies from: EchoingHorror
comment by EchoingHorror · 2010-07-29T02:06:12.089Z · LW(p) · GW(p)

I was told it was something that, if thought about too much, would cause post-epic level problems. The nightmare aspect wasn't part of my concept of whatever it is until now.

I also get the feeling Eliezer wouldn't react as dramatically as an above synopsis implies unless it was a big deal (or hilarious to do so). He seems pretty ... rational, I think is the word. Despite his denial of being Quirrell in a parent post, a non-deliberate explosive rant and topic banning seems unlikely.

He also mentions that only a certain inappropriate post was banned, and Roko said he deleted his own posts himself. And yet the implication going around is that it was all deleted as administrative action. A rumor started by Eliezer himself so he could deny being "evil," knowing some wouldn't believe him? Quirrell wouldn't do that, right? ;)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-28T19:40:45.520Z · LW(p) · GW(p)

I see. A side effect of banning one post, I think; only one post should've been banned, for certain. I'll try to undo it. There was a point when a prototype of LW had just gone up, someone somehow found it and posted using an obscene user name ("masterbater"), and code changes were quickly made to get that out of the system when their post was banned.

Holy Cthulhu, are you people paranoid about your evil administrator. Notice: I am not Professor Quirrell in real life.

EDIT: No, it wasn't a side effect, Roko did it on purpose.

Replies from: Unnamed, whpearson, DanielVarga, JoshuaZ, thomblake
comment by Unnamed · 2010-07-28T19:54:15.891Z · LW(p) · GW(p)

Notice: I am not Professor Quirrell in real life.

Indeed. You are open about your ambition to take over the world, rather than hiding behind the identity of an academic.

comment by whpearson · 2010-07-28T19:43:56.639Z · LW(p) · GW(p)

Notice: I am not Professor Quirrell in real life.

And that is exactly what Professor Quirrell would say!

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-28T20:31:10.984Z · LW(p) · GW(p)

Professor Quirrell wouldn't give himself away by writing about Professor Quirrell, even after taking into account that this is exactly what he wants you to think.

Replies from: RobinZ, wedrifid
comment by wedrifid · 2010-09-25T07:20:56.278Z · LW(p) · GW(p)

Professor Quirrell wouldn't give himself away by writing about Professor Quirrell, even after taking into account that this is exactly what he wants you to think.

Of course as you know very well. :)

comment by DanielVarga · 2010-07-30T04:48:55.729Z · LW(p) · GW(p)

A side effect of banning one post, I think;

In a certain sense, it is.

comment by JoshuaZ · 2010-07-29T00:22:54.534Z · LW(p) · GW(p)

Notice: I am not Professor Quirrell in real life.

Of course, we already established that you're Light Yagami.

comment by thomblake · 2010-08-02T14:36:19.393Z · LW(p) · GW(p)

I am not Professor Quirrell in real life.

I'm not sure we should believe you.

comment by JamesAndrix · 2010-08-06T07:55:06.037Z · LW(p) · GW(p)

http://www.damninteresting.com/this-place-is-not-a-place-of-honor

Note to reader: This thread is curiosity inducing, this is affecting your judgement. You might think you can compensate for this bias but you probably won't in actuality. Stop reading anyway. Trust me on this. Edit: Me, and Larks, and ocr-fork, AND ROKO and [some but not all others]

I say for now because those who know about this are going to keep looking at it and determine it safe/rebut it/make it moot. Maybe it will stay dangerous for a long time, I don't know, but there seems to be a decent chance that you'll find out about it soon enough.

Don't assume it's Ok because you understand the need for friendliness and aren't writing code. There are no secrets to intelligence in hidden comments. (Though I didn't see the original thread, I think I figured it out and it's not giving me any insights.)

Don't feel left out or not smart for not 'getting it'; we only 'got it' because it was told to us. Try to compensate for your ego. If you fail, stop reading anyway.

Ab ernyyl fgbc ybbxvat. Phevbfvgl erfvfgnapr snvy.

http://www.damninteresting.com/this-place-is-not-a-place-of-honor

Replies from: Document
comment by Document · 2010-09-24T08:07:40.167Z · LW(p) · GW(p)

I say for now because those who know about this are going to keep looking at it and determine it safe/rebut it/make it moot.

Technically, you didn't say "for now".

comment by Roko · 2010-07-14T11:58:31.022Z · LW(p) · GW(p)

Cryo-wives: A promising comment from the NYT Article:

As the spouse of someone who is planning on undergoing cryogenic preservation, I found this article to be relevant to my interests!

My first reactions when the topic of cryonics came up (early in our relationship) were shock, a bit of revulsion, and a lot of confusion. Like Peggy (I believe), I also felt a bit of disdain. The idea seemed icky, childish, outlandish, and self-aggrandizing. But I was deeply in love, and very interested in finding common ground with my then-boyfriend (now spouse). We talked, and talked, and argued, and talked some more, and then I went off and thought very hard about the whole thing.

Part of the strength of my negative response, I realized, had to do with the fact that my relationship with my own mortality was on shaky ground. I don't want to die. But I'm fairly certain I'm going to. Like many people, I've struggled to come to a place where I can accept the specter of my own death with some grace. Humbleness and acceptance in the face of death are valued very highly (albeit not always explicitly) in our culture. The companion, I think, to this humble acceptance of death is a humble (and painful) acceptance of our own personal lack of consequence. To rail against death; to grasp at the faintest of odds to avoid it; these behaviors seem to assert a brazen self interest, an arrogance, and fly in the face of the quiet, self-effacing acceptance the rest of us struggle for every day.

"I have worked so hard to abandon hope," my heart was saying. "Who are you to arrogantly seize it, as though that was even an option? Who are you to raise the terrible idea of hope, after I have worked so hard to convince myself there IS no hope?" There's this strange blend of fear and jealousy at work, I think, in the gut-punch reaction that many of us have to the idea of cryonics. And once you start unpacking this response and looking at it with clear eyes, it becomes obvious how selfish, how irrational, and unhelpful it is. So my husband has chosen to pursue an unlikely hope. How does that affect me? Can I seriously say to him, "you must abandon this hope, because for reasons that have everything to do with me and nothing to do with you, I find it icky"? If I did, I would be a terribly selfish person.

Ultimately, my struggle to come to terms with his decision has been more or less successful. Although I am not (and don't presently plan to be) enrolled in a cryonics program myself, although I still find the idea somewhat unsettling, I support his decision without question. If he dies before I do, I will do everything in my power to see that his wishes are complied with, as I expect him to see that mine are. Anything less than this, and I honestly don't think I could consider myself his partner.

Replies from: Blueberry
comment by Blueberry · 2010-07-14T16:26:44.313Z · LW(p) · GW(p)

That is really a beautiful comment.

It's a good point, and one I never would have thought of on my own: people find it painful to think they might have a chance to survive after they've struggled to give up hope.

One way to fight this is to reframe cryonics as similar to CPR: you'll still die eventually, but this is just a way of living a little longer. But people seem to find it emotionally different, perhaps because of the time delay, or the uncertainty.

Replies from: Eliezer_Yudkowsky, ata
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-14T20:46:27.959Z · LW(p) · GW(p)

I always figured that was a rather large sector of people's negative reaction to cryonics; I'm amazed to find someone self-aware enough to notice and work through it.

comment by ata · 2010-07-14T21:44:20.280Z · LW(p) · GW(p)

One way to fight this is to reframe cryonics as similar to CPR: you'll still die eventually, but this is just a way of living a little longer. But people seem to find it emotionally different, perhaps because of the time delay, or the uncertainty.

That's more comparable to being in a long coma with some uncertain possibility of waking up from it, so perhaps it could be reframed along those lines; some people probably do specify that they should be taken off of life support if they are found comatose, but to choose to be kept alive is not socially disapproved of, as far as I know.

comment by Bongo · 2010-07-10T21:30:05.155Z · LW(p) · GW(p)

Heard on #lesswrong:

BTW, I figured out why Eliezer looks like a cult leader to some people. It's because he has both social authority (he's a leader figure, solicits donations) and epistemological authority (he's the top expert, wrote the sequences which are considered canonical).

If, for example, Wei Dai kicked Eliezer's ass at FAI theory, LW would not appear cultish

This suggests that we should try to make someone else a social authority so that he doesn't have to be.

(I hope posting only a log is ok)

Replies from: Will_Newsome, apophenia
comment by Will_Newsome · 2010-07-16T00:13:42.912Z · LW(p) · GW(p)

Hopefully this provides incentive for people to kick Eliezer's ass at FAI theory. You don't want to look cultish, do you?

comment by apophenia · 2010-07-10T22:08:57.958Z · LW(p) · GW(p)

To me, the most appealing aspect of #lesswrong is that my comments will not be archived for posterity.

This is also an interesting quote.

Edit: I obviously missed the "only" in your note there.

comment by John-Henry · 2010-07-10T20:53:16.241Z · LW(p) · GW(p)

I thought Less Wrong might be interested to see a documentary I made about cognitive bias. It was made as part of a college project and a lot of the resources that the film uses are pulled directly from Overcoming Bias and Less Wrong. The subject of what role film can play in communicating the ideas of Less Wrong is one that I have heard brought up, but not discussed at length. Despite the film's student-quality shortcomings, hopefully this documentary can start a more thorough dialogue that I would love to be a part of.

The link to the video is Here: http://www.youtube.com/watch?v=FOYEJF7nmpE

Replies from: None, RobinZ
comment by [deleted] · 2010-07-14T08:00:48.475Z · LW(p) · GW(p)

del

Replies from: John-Henry
comment by John-Henry · 2010-07-16T18:16:04.119Z · LW(p) · GW(p)

Pen-and-paper interviews would almost certainly be more accurate. The problem is that images of people writing on paper are especially un-cinematic. The participants were encouraged to take as much time as they needed, and many of them took several minutes before responding to some questions. However, the majority of them were concerned with how much time the interview would take up, and their quick responses were self-imposed.

As to whether the evidence is too messy to draw firm conclusions from, I agree that it is. This is an inherent problem with documentaries. Omissions of fact are easily justified. Also, just as in fiction films, manipulation of the audience is often sought after more than accuracy.

Replies from: None
comment by [deleted] · 2010-07-16T19:20:31.461Z · LW(p) · GW(p)

del

comment by RobinZ · 2010-07-10T23:57:20.345Z · LW(p) · GW(p)

I just posted a comment over there noting that the last interviewee rediscovered anchoring and adjustment.

comment by Vladimir_Nesov · 2010-07-09T13:48:30.352Z · LW(p) · GW(p)

Geoff Greer published a post on how he got convinced to sign up for cryonics: Insert Frozen Food Joke Here.

Replies from: gimpf, AngryParsley
comment by gimpf · 2010-07-09T20:42:43.191Z · LW(p) · GW(p)

This is really an excellent, down-to-earth, one-minute teaser for going that route.

Excellent writing. I wish I had a follow-up move for those who get interested after that point but raise doubts, be they philosophical, religious, moral, or scientific (the last one probably the easiest).

I know these issues have been discussed already, but how could one react in a five-minute coffee break when the co-worker responds (standard phrases to go): "But death gives meaning to life. And if nobody died, there would be too many people around here. Only the rich ones could get the benefits. And ultimately, whatever end the universe takes, we will all die, you know science, don't ya?"

I know the sequence answers, but I utterly fail to give any non-embarrassing answer to such questions. It does not help to not be signed up for cryonics oneself.

Replies from: JoshuaZ, Blueberry
comment by JoshuaZ · 2010-07-09T23:36:58.479Z · LW(p) · GW(p)

I know these issues have been discussed already, but how could one react in a five-minute coffee break when the co-worker responds (standard phrases to go): "But death gives meaning to life. And if nobody died, there would be too many people around here. Only the rich ones could get the benefits. And ultimately, whatever end the universe takes, we will all die, you know science, don't ya?"

If they think that we'll all eventually die even with cryonics, and they think that death gives meaning to life, then they don't need to worry about cryonics removing meaning, since it just pushes back the time until death. (I wouldn't bother addressing the death-gives-meaning-to-life claim except to note that it seems to be a much more common meme among people who haven't actually lost loved ones.)

As to the problem of too many people, overpopulation is a massive problem whether or not a few people get cryonically preserved.

As to the problem of just the rich getting the benefits, patiently explain that there's no reason to think that the rich now will be treated substantially differently from the less rich who sign up for cryonics. And if society ever has the technology to easily revive people from cryonic suspension, then the likely standard of living will be so high compared to now that even if the rich have more, it won't matter.

comment by Blueberry · 2010-07-09T20:50:47.051Z · LW(p) · GW(p)

It does not help that I am not signed up for cryonics myself.

I talk about it as something I'm thinking about, and ask what they think. That way, it's not you trying to persuade someone, it's just a conversation.

"But death gives meaning to live. And if nobody died, there would be too many people around here. Only the rich ones could get the benefits. And ultimately, whatever end the universe takes, we will all die, you know science, don't ya?"

"Yeah, we'll all die eventually, but this is just a way of curing aging, just like trying to find a cure for heart disease or cancer. All those things are true of any medical treatment, but that doesn't mean we shouldn't save lives."

Replies from: Larks
comment by Larks · 2010-07-09T22:29:23.664Z · LW(p) · GW(p)

"Yeah, we'll all die eventually, but this is just a way of curing aging, just like trying to find a cure for heart disease or cancer. All those things are true of any medical treatment, but that doesn't mean we shouldn't save lives.

... "and like any medical treatment, initially only the rich will benefit, but they'll help bring down the price for everyone else. Infact, for just a small weakly payment..."

comment by AngryParsley · 2010-07-11T10:18:11.622Z · LW(p) · GW(p)

This is off-topic but I'm curious: How did you stumble on my blog?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-11T10:31:07.288Z · LW(p) · GW(p)

Google alert on "Eliezer Yudkowsky". (Usually brings up articles about Friendly AI, SIAI and Less Wrong.)

comment by Matt_Simpson · 2010-07-18T07:04:25.680Z · LW(p) · GW(p)

Are any LWers familiar with adversarial publishing? The basic idea is that two researchers who disagree on some empirically testable proposition come together with an arbiter to design an experiment to resolve their disagreement.

Here's a summary of the process from an article (pdf) I recently read (where Daniel Kahneman was one of the adversaries).

  1. When tempted to write a critique or to run an experimental refutation of a recent publication, consider the possibility of proposing joint research under an agreed protocol. We call the scholars engaged in such an effort participants. If theoretical differences are deep or if there are large differences in experimental routines between the laboratories, consider the possibility of asking a trusted colleague to coordinate the effort, referee disagreements, and collect the data. We call that person an arbiter.
  2. Agree on the details of an initial study, designed to subject the opposing claims to an informative empirical test. The participants should seek to identify results that would change their mind, at least to some extent, and should explicitly anticipate their interpretations of outcomes that would be inconsistent with their theoretical expectations. These predictions should be recorded by the arbiter to prevent future disagreements about remembered interpretations.
  3. If there are disagreements about unpublished data, a replication that is agreed to by both participants should be included in the initial study.
  4. Accept in advance that the initial study will be inconclusive. Allow each side to propose an additional experiment to exploit the fount of hindsight wisdom that commonly becomes available when disliked results are obtained. Additional studies should be planned jointly, with the arbiter resolving disagreements as they occur.
  5. Agree in advance to produce an article with all participants as authors. The arbiter can take responsibility for several parts of the article: an introduction to the debate, the report of experimental results, and a statement of agreed-upon conclusions. If significant disagreements remain, the participants should write individual discussions. The length of these discussions should be determined in advance and monitored by the arbiter. An author who has more to say than the arbiter allows should indicate this fact in a footnote and provide readers with a way to obtain the added material.
  6. The data should be under the control of the arbiter, who should be free to publish with only one of the original participants if the other refuses to cooperate. Naturally, the circumstances of such an event should be part of the report.
  7. All experimentation and writing should be done quickly, within deadlines agreed to in advance. Delay is likely to breed discord.
  8. The arbiter should have the casting vote in selecting a venue for publication, and editors should be informed that requests for major revisions are likely to create impossible problems for the participants in the exercise.

This seems like a great way to resolve academic disputes. Philip Tetlock appears to be an advocate. What do you think?

comment by Wei Dai (Wei_Dai) · 2010-09-23T18:32:01.371Z · LW(p) · GW(p)

Since I assume he doesn't want to have existential risk increase, a credible threat is all that's necessary.

Perhaps you weren't aware, but Eliezer has stated that it's rational to not respond to threats of blackmail. See this comment.

(EDIT: I deleted the rest of this comment since it's redundant given what you've written elsewhere in this thread.)

Replies from: wedrifid, timtyler, waitingforgodel
comment by wedrifid · 2010-09-23T19:41:56.914Z · LW(p) · GW(p)

This is true, and yes wfg did imply the threat.

(Now, analyzing not advocating and after upvoting the parent...)

I'll note that wfg was speculating about going ahead and doing it. After he did it (and given that EY doesn't respond to threats, the speculative wfg should act now, based on the Roko incident), it isn't a threat. It is then just a historical sequence of events. It wouldn't even be a particularly unique sequence of events.

Wfg is far from the only person who responded by punishing SIAI in a way EY would expect to increase existential risk, i.e. not donating to SIAI when they otherwise would have, or updating their p(EY (or SIAI) is a crackpot) and sharing that knowledge. The description on RationalWiki would be an example.

comment by timtyler · 2010-09-24T19:52:45.362Z · LW(p) · GW(p)

Perhaps you weren't aware, but Eliezer has stated that it's rational to not respond to threats of blackmail.

I don't think he was talking about human beings there. Obviously you don't want a reputation for being susceptible to being successfully blackmailed, but IMHO, maximising expected utility results in a strategy which is not as simple as never responding to blackmail threats.

Replies from: khafra
comment by khafra · 2010-09-24T20:29:46.047Z · LW(p) · GW(p)

I think this is correct. Eliezer has drawn on The Strategy of Conflict before; it goes into mathematical detail about the tradeoffs of precommitments against inconsistently rational players. The "no blackmail" thing was in regard to a rational UFAI.

comment by waitingforgodel · 2010-09-23T19:35:12.883Z · LW(p) · GW(p)

These are really interesting points. Just in case you haven't seen the developments on the thread, check out the whole thing here.

I'm not sure that blackmail is a good name to use when thinking about my commitment, as it has negative connotations and usually implies a non-public, selfish nature.

I'm also pretty sure it's irrational to ignore such things when making decisions. Perhaps not in a game theory sense, but absolutely in the practical life-theory sense.

As an example, our entire legal system is based on these sorts of credible threats.

If EY feels differently, I'm not sure what to say except that I think he's being foolish. I see the game theory he thinks exempts him from considering others' reactions to his actions; I just don't think it's rational to completely ignore new causal information.

But like I said earlier, I'm not saying he has to do anything, I'm just making sure we all know that an existential risk reduction of 0.0001% via LW censorship won't actually be a reduction of 0.0001%.

(and though you deleted the relevant part, I'd also be down to discuss what a sane moderation system should be like.)

Replies from: Wei_Dai, Alicorn, Nick_Tarleton, waitingforgodel
comment by Wei Dai (Wei_Dai) · 2010-09-24T08:18:21.559Z · LW(p) · GW(p)

Suppose I were to threaten to increase existential risk by 0.0001% unless SIAI agrees to program its FAI to give me twice the post-Singularity resource allocation (or whatever the unit of caring will be) that I would otherwise receive. Can you see why it might have a policy against responding to threats? If Eliezer does not agree with you that censorship increases existential risk, he might censor some future post just to prove the credibility of his precommitment.

If you really think censorship is bad even by Eliezer's values, I suggest withdrawing your threat and just try to convince him of that using rational arguments. I rather doubt that Eliezer has some sort of unfixable bug regarding censorship that has to be patched using such extreme measures. It's probably just that he got used to exercising strong moderation powers on SL4 (which never blew up like this, at least to my knowledge), and I'd guess that he has already updated on the new evidence and will be much more careful next time.

Replies from: wedrifid, waitingforgodel
comment by wedrifid · 2010-09-24T08:33:28.757Z · LW(p) · GW(p)

If you really think censorship is bad even by Eliezer's values, I suggest withdrawing your threat and just try to convince him of that using rational arguments.

I do not expect that (non-costly signalling by someone who does not have significant status) to work any more than threats would. A better suggestion would be to forget raw threats and consider what other alternatives wfg has available by which he could deploy an equivalent amount of power that would have the desired influence. Eliezer moved the game from one of persuasion (you should not talk about this) to one about power and enforcement (public humiliation, censorship and threats). You don't take a pen to a gun fight.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-09-24T22:01:57.665Z · LW(p) · GW(p)

I don't understand why, just because Eliezer chose to move the game from one of persuasion to one about power and enforcement, you have to keep playing it that way.

If Eliezer is really so irrational that once he has exercised power on some issue, he is no longer open to any rational arguments on that topic, then what are we all doing here? Shouldn't we be trying to hinder his efforts (to "not take over the world") instead of (however indirectly) helping him?

comment by waitingforgodel · 2010-09-24T09:30:30.285Z · LW(p) · GW(p)

Good questions, these were really fun to think about / write up :)

First off let's kill a background assumption that's been messing up this discussion: that EY/SIAI/anyone needs a known policy toward credible threats.

It seems to me that stated policies toward credible threats are irrational unless a large number of the people you encounter will change their behavior based on those policies. To put it simply: policies are posturing.

If an AI credibly threatened to destroy the world unless EY became a vegetarian for the rest of the day, and he was already driving to a BBQ, is eating meat the only rational thing for him to do? (It sure would prevent future credible threats!)

If EY planned on parking in what looked like an empty space near the entrance to his local supermarket, only to discover that on closer inspection it was a handicapped-only parking space (with a tow truck only 20 feet away), is getting his car towed the only rational thing to do? (If he didn't, an AI might find out his policy isn't ironclad!)

This is ridiculous. It's posturing. It's clearly not optimal.

In answer to your question: Do the thing that's actually best. The answer might be to give you 2x the resources. It depends on the situation: what SIAI/EY knows about you, about the likely effect of cooperating with you or not, and about the cost vs benefits of cooperating with you.

Maybe there's a good chance that knowing you'll get more resources makes you impatient for SIAI to make a FAI, causing you to donate more. Who knows. Depends on the situation.

(If the above doesn't work when an AI is involved, how about EY makes a policy that only applies to AIs?)

In answer to your second paragraph I could withdraw my threat, but that would lessen my posturing power for future credible threats.

(har har...)

The real reason is I'm worried about what happens while I'm trying to convince him.

I'd love to discuss what sort of moderation is correct for a community like less wrong -- it sounds amazing. Let's do it.

But no way I'm taking the risk of undoing my fix until I'm sure EY's (and LW's) bugs are gone.

comment by Alicorn · 2010-09-23T19:41:02.054Z · LW(p) · GW(p)

I'm not sure that blackmail is a good name to use when thinking about my commitment, as it has negative connotations and usually implies a non-public, selfish nature.

More importantly, you aren't threatening to publicize something embarrassing to Eliezer if he doesn't comply, so it's technically extortion.

Replies from: Wei_Dai, waitingforgodel
comment by Wei Dai (Wei_Dai) · 2010-09-23T19:47:59.973Z · LW(p) · GW(p)

I think by "blackmail" Eliezer meant to include extortion since the scenario that triggered that comment was also technically extortion.

comment by waitingforgodel · 2010-09-23T19:43:58.249Z · LW(p) · GW(p)

That one also has negative connotation, but it's your thinking to bias as you please :p

Replies from: wedrifid
comment by wedrifid · 2010-09-24T04:49:39.356Z · LW(p) · GW(p)

Technical analysis does not imply bias either way. Just curiosity. ;)

comment by Nick_Tarleton · 2010-09-23T19:47:47.005Z · LW(p) · GW(p)

Perhaps you weren't aware, but Eliezer has stated that it's rational to not respond to threats of blackmail.

I'm also pretty sure it's irrational to ignore such things when making decisions. Perhaps not in a game theory sense, but absolutely in the practical life-theory sense.

As an example, our entire legal system is based on these sorts of credible threats.

To be precise, not respond when whether or not one is 'blackmailed' is counterfactually dependent on whether one would respond, which isn't the case with the law. (Of course, there are unresolved problems with who 'moves first', etc.)

Replies from: waitingforgodel
comment by waitingforgodel · 2010-09-23T19:50:17.560Z · LW(p) · GW(p)

Fair enough, so you're saying he only responds to credible threats from people who don't consider if he'll respond to credible threats?

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-09-23T19:52:53.019Z · LW(p) · GW(p)

Yes, again modulo not knowing how to analyze questions of who moves first (e.g. others who consider this and then make themselves not consider if he'll respond).

comment by waitingforgodel · 2010-09-23T19:47:18.146Z · LW(p) · GW(p)

To put that bit about the legal system more forcefully:

If EY really doesn't include these sorts of things in his thinking (he disregards US laws for reasons of game theory?), we have much bigger things to worry about right now than 0.0001% censorship.

comment by Risto_Saarelma · 2010-07-16T18:55:30.797Z · LW(p) · GW(p)

There's a course "Street Fighting Mathematics" on MIT OCW, with an associated free Creative Commons textbook (PDF). It's about estimation tricks and heuristics that can be used when working with math problems. Despite the pop-sounding title, it appears to be written for people who are actually expected to be doing nontrivial math.

Might be relevant to the simple math of everything stuff.

Replies from: JamesPfeiffer
comment by JamesPfeiffer · 2010-07-23T03:35:44.129Z · LW(p) · GW(p)

For a teaser, the part about singing logarithms looks cool.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-14T20:43:05.236Z · LW(p) · GW(p)

From a recent newspaper story:

The odds that Joan Ginther would hit four Texas Lottery jackpots for a combined $21 million are astronomical. Mathematicians say the chances are as slim as 1 in 18 septillion — that's 18 and 24 zeros.

I haven't checked this calculation at all, but I'm confident that it's wrong, for the simple reason that it is far more likely that some "mathematician" gave them the wrong numbers than that any compactly describable event with odds of 1 in 18 septillion against it has actually been reported on, in writing, in the history of intelligent life on my Everett branch of Earth. Discuss?

Replies from: Blueberry, whpearson, mchouza, Tyrrell_McAllister, Will_Newsome
comment by Blueberry · 2010-07-14T21:40:10.829Z · LW(p) · GW(p)

It seems right to me. If the chance of one ticket winning is one in 10^6, the chance of four specified tickets winning four drawings is one in 10^24.

Of course, the chances of "Person X winning the lottery week 1 AND Person Y winning the lottery week 2 AND Person Z winning the lottery week 3 AND Person W winning the lottery week 4" are also 1 in 10^24, and this happens every four weeks.
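(A quick sketch of that distinction in Python; the one-in-a-million odds and the two-million-tickets-per-drawing figure are illustrative assumptions, not the real Texas Lottery numbers.)

p_win = 1.0 / 10**6   # illustrative odds for one particular ticket in one drawing

# Four pre-specified tickets winning four pre-specified drawings:
print p_win ** 4                  # 1e-24, i.e. one in 10^24

# But "somebody or other wins each of four drawings" is a different event.
# If each drawing sells enough tickets that it almost surely has a winner,
# the compound event has probability close to 1:
p_some_winner = 1 - (1 - p_win) ** (2 * 10**6)   # assume ~2 million tickets sold
print p_some_winner ** 4          # roughly 0.56, not one in 10^24

The newspaper's number answers the first question; the reason somebody ends up winning four jackpots somewhere is closer to the second.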

comment by whpearson · 2010-07-14T20:53:46.830Z · LW(p) · GW(p)

From the article (there is a nearly invisible "more text" button):

Calculating the actual odds of Ginther hitting four multimillion-dollar lottery jackpots is tricky. If Ginther's winning tickets were the only four she ever bought, the odds would be one in 18 septillion, according to Sandy Norman and Eduardo Duenez, math professors at the University of Texas at San Antonio.

And she was the only person ever to have bought 4 tickets (birthday paradoxes and all)...

I did see an analysis of this somewhere; I'll try and dig it up. Here it is. There is Hacker News commentary here.

I find this, from the original msnbc article, depressing

After all, the only way to win is to keep playing. Ginther is smart enough to know that's how you beat the odds: she earned her doctorate from Stanford University in 1976, then spent a decade on faculty at several colleges in California.

Teaching math.

Replies from: nhamann
comment by nhamann · 2010-07-15T19:34:28.266Z · LW(p) · GW(p)

I find this, from the original msnbc article, depressing

Is it depressing because someone with a Ph.D. in math is playing the lottery, or depressing because she must have figured out something we don't know, given that she's won four times?

Replies from: whpearson, Blueberry
comment by whpearson · 2010-07-16T09:39:49.058Z · LW(p) · GW(p)

The former. It is also depressing because it can be used in articles on the lottery in the following way: "See, look at this person who is good at maths playing the lottery; that must mean it is a smart thing to play the lottery."

comment by Blueberry · 2010-07-16T09:23:48.212Z · LW(p) · GW(p)

Depressing because someone with a Ph.D. in math is playing the lottery. I don't see any reason to think she figured out some way of beating the lottery.

comment by mchouza · 2010-07-15T17:43:26.637Z · LW(p) · GW(p)

It's also far more likely that she cheated. Or that there is a conspiracy in the Lottery to make her win four times.

comment by Tyrrell_McAllister · 2010-07-14T20:50:39.348Z · LW(p) · GW(p)

The most eyebrow-raising part of that article:

After all, the only way to win is to keep playing. Ginther is smart enough to know that's how you beat the odds: she earned her doctorate from Stanford University in 1976, then spent a decade on faculty at several colleges in California.

Teaching math.

comment by Will_Newsome · 2010-07-15T23:32:00.088Z · LW(p) · GW(p)

I haven't checked this calculation at all, but I'm confident that it's wrong, for the simple reason that it is far more likely that some "mathematician" gave them the wrong numbers than that any compactly describable event with odds of 1 in 18 septillion against it has actually been reported on, in writing, in the history of intelligent life on my Everett branch of Earth.

Hm. Have you looked at the multiverse lately? It's pretty apparent that something has gone horribly weird somewhere along the way. Your confidence should be limited by that dissonance.

It's the same with MWI, and cryonics, and moral cognitivism, and any other belief where your structural uncertainty hasn't been explicitly conditioned on your anthropic surprise. I'm not sure to what extent your implied confidence in these matters is pedagogical rather than indicative of your true beliefs. I expect mostly pedagogical? That's probably fine and good, but I doubt such subtle epistemic manipulation for the public good is much better than the Dark Arts.

(Added: In this particular case, something less metaphysical is probably amiss, like a math error.)

Replies from: Will_Newsome, Eliezer_Yudkowsky
comment by Will_Newsome · 2010-07-16T02:48:44.946Z · LW(p) · GW(p)

So let me try to rewrite that (and don't be afraid to call this word salad):

(Note: the following comment is based on premises which are very probably completely unsound and unusually prone to bias. Read at your own caution and remember the distinction between impressions and beliefs. These are my impressions.)

You're Eliezer Yudkowsky. You live in a not-too-far-from-a-Singularity world, and a Singularity is a BIG event, decision theoretically and fun theoretically speaking. Isn't it odd that you find yourself at this time and place given all the people you could have found yourself as in your reference class? Isn't that unsettling? Now, if you look out at the stars and galaxies and seemingly infinite space (though you can't see that far), it looks as if the universe has been assigned measure via a universal prior (and not a speed prior) as it is algorithmically about as simple as you can get while still having life and yet seemingly very computationally expensive. And yet you find yourself as Eliezer Yudkowsky (staring at a personal computer, no less) in a close-to-Singularity world: surely some extra parameters must have been thrown into the description of this universe; surely your experience is not best described with a universal prior alone, instead of a universal prior plus some mixture of agents computing things according to their preference. In other words, this universe looks conspicuously like it has been optimized around Eliezer-does-something-multiversally-important. (I suppose this should also up your probability that you're a delusional narcissist, but there's not much to do about that.)

Now, if such optimization pressures exist, then one has to question some reductionist assumptions: if this universe gets at least some of its measure from the preferences of simulator-agents, then what features of the universe would be affected by those preferences? Computational cost is one. MWI implies a really big universe, and what are the chances that you would find yourself where you are in a really big universe as well as finding yourself in a conspicuously-optimized-seeming universe? Seemingly the two hypotheses are at odds. And what about cryonics? Do you really expect to die in a universe that seems to be optimized for having you around doing interesting things? (The answer to that could very well be yes, especially if your name is Light.) And when you have simulators in the picture, with explicit values, perhaps they have encoded rightness and wrongness into the fabric of reality via selectively pruning multiverse branches or something. Heaven knows what the gods do for fun.

These are of course ridiculous ideas, but ridiculous ideas that I am nonetheless hesitant to assign negligible probability to.

Maybe you're a lot less surprised to find yourself in this universe than I am, in which case none of my arguments apply. But I get the feeling that something is awfully odd is going on, and this makes me hesitant to be confident about some seemingly basic reductionist conclusions. Thus I advise you to buy a lottery ticket. It's the rational thing to do.

(Note: Although I personalized this for Eliezer, it applies to pretty much everyone to a greater or lesser degree. I remember (perhaps a secondhand and false memory, though, so don't take it too seriously) at some point Michael Vassar was really confused about why he didn't find himself as Eliezer Yudkowsky. I think the answer I would have thought up if I was him is that Michael Vassar is more decision theoretically multiversally important than Eliezer. Any other answer makes the question appear silly. Which it might be.)

(Alert to potential bias: I kinda like to be the contrarian-contrarian. Cryonics is dumb, MWI is wrong, buying a lottery ticket is a good idea, moral realism is a decent hypothesis, anthropic reasoning is more important than reductionist reasoning, CEV-like things won't ever work and are ridiculously easy to hack, TDT is unlikely to lead to any sort of game theoretic advantage and precommitments not to negotiate with blackmailers are fundamentally doomed, winning timeless war is more important than facilitating timeless trade, the Singularity is really near, religion is currently instrumentally rational for almost everyone, most altruists are actually egoists with relatively loose boundaries around identity, et cetera, et cetera.)

Replies from: Kevin, Douglas_Knight
comment by Kevin · 2010-07-16T03:27:53.617Z · LW(p) · GW(p)

It all adds up to normality, damn it!

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-16T03:29:55.096Z · LW(p) · GW(p)

What whats to what?

More seriously, that aphorism begs the question. Yes, your hypothesis and your evidence have to be in perfectly balanced alignment. That is, from a Bayesian perspective, tautological. However, it doesn't help you figure out how it is exactly that the adding gets done. It doesn't help distinguish between hypotheses. For that we need Solomonoff's lightsaber. I don't see how saying "it (whatever 'it' is) adds up to (whatever 'adding up to' means) normality (which I think should be 'reality')" is at all helpful. Reality is reality? Evidence shouldn't contradict itself? Cool story bro, but how does that help me?

Replies from: Kevin
comment by Douglas_Knight · 2010-07-16T05:35:59.570Z · LW(p) · GW(p)

it looks as if the universe has been assigned measure via a universal prior (and not a speed prior) as it is algorithmically about as simple as you can get while still having life and yet seemingly very computationally expensive.

This is rather tangential to your point, but the universe looks very computationally cheap to me. In terms of the whole ensemble, quantum mechanics is quite cheap. It only looks expensive to us because we measure by a classical slice, which is much smaller. But even if we call it exponential, that is very quick by the standards of the Solomonoff prior.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-16T17:26:24.321Z · LW(p) · GW(p)

Hm, I'm not sure I follow: both a classical and quantum universe are cheap, yes, but if you're using a speed prior or any prior that takes into account computational expense, then it's the cost of the universes relative to each other that helps us distinguish between which universe we expect to find ourselves in, not their cost relative to all possible universes.

I could very, very well just be confused.

Added: Ah, sorry, I think I missed your point. You're saying that even infinitely large universes seem computationally cheap in the scheme of things? I mean, compared to all possible programs in which you would expect life to evolve, the universe looks hugeeeeeee to me. It looks infinite, and there are tons of finite computations... when you compare anything to the multiverse of all things, that computation looks cheap. I guess we're just using different scales of comparison: I'm comparing to finite computations, you're comparing to a multiverse.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-07-17T00:55:50.888Z · LW(p) · GW(p)

No, that's not what I meant; I probably meant something silly in the details, but I think the main point still applies. I think you're saying that the size of the universe is large compared to the laws of physics. To which I still reply: not large by the standards of computable functions.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-16T01:46:05.350Z · LW(p) · GW(p)

Whowha?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-16T02:03:38.457Z · LW(p) · GW(p)

Er, sorry, I'm guessing my comment came across as word salad?

Added: Rephrased and expanded and polemicized my original comment in a reply to my original comment.

Replies from: Kevin, Vladimir_Nesov
comment by Kevin · 2010-07-16T02:06:19.073Z · LW(p) · GW(p)

Yeah I didn't get it either.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-16T02:08:23.915Z · LW(p) · GW(p)

Hm. It's unfortunate that I need to pass all of my ideas through a Nick Tarleton or a Steve Rayhawk before they're fit for general consumption. I'll try to rewrite that whole comment when I'm less tired.

Replies from: Vladimir_Nesov, Kevin
comment by Vladimir_Nesov · 2010-07-16T08:47:18.206Z · LW(p) · GW(p)

It's unfortunate that I need to pass all of my ideas through a Nick Tarleton or a Steve Rayhawk before they're fit for general consumption.

Illusion of transparency: they can probably generate sense in response to anything, but it's not necessarily a faithful translation of what you say.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-16T17:19:17.740Z · LW(p) · GW(p)

Consider that one of my two posts, Abnormal Cryonics, was simply a narrower version of what I wrote above (structural uncertainty is highly underestimated) and that Nick Tarleton wrote about a third of that post. He understood what I meant and was able to convey it better than I could. Also, Nick Tarleton is quick to call bullshit if something I'm saying doesn't seem to be meaningful, which is a wonderful trait.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-16T21:47:43.258Z · LW(p) · GW(p)

Well, that was me calling bullshit.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-17T16:14:26.507Z · LW(p) · GW(p)

Thanks! But it seems you're being needlessly abrasive about it. Perhaps it's a cultural thing? Anyway, did you read the expanded version of my comment? I tried to be clearer in my explanation there, but it's hard to convey philosophical intuitions.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-17T17:59:35.087Z · LW(p) · GW(p)

I find myself unable to clearly articulate what's wrong with your idea, but in my own words, it reads as follows:

"One should believe certain things to be probable because those are the kinds of things that people believe through magical thinking."

comment by Vladimir_Nesov · 2010-07-17T17:18:40.032Z · LW(p) · GW(p)

The problem with that idea is that there is no default level of belief. You are not allowed to say

These are of course ridiculous ideas, but ridiculous ideas that I am nonetheless hesitant to assign negligible probability to.

What is the difference between hesitating to assign negligible probability and hesitating to assign non-negligible probability? Which way is the certainty, which way is the doubt? If you don't have a good understanding of why you should believe one way or the other, you can't just pick a direction in which the safe level of credence supposedly lies and stay there pending enlightenment.

Your argument is not strong enough to shift a belief of one in a septillion up to something believable, but it must be that strong to do so. You can't appeal to being hesitant to believe otherwise; that's not a strong argument, but a statement about not having one.

comment by Kevin · 2010-07-16T02:18:27.315Z · LW(p) · GW(p)

Was your point that Eliezer's Everett Branch is weird enough already that it shouldn't be that surprising if universally improbable things have occurred?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-16T02:57:25.084Z · LW(p) · GW(p)

Erm, uh, kinda, in a more general sense. See my reply to my own comment where I try to be more expository.

comment by Vladimir_Nesov · 2010-07-16T08:41:58.183Z · LW(p) · GW(p)

Er, sorry, I'm guessing my comment came across as word salad?

I'm afraid it is word salad.

comment by whpearson · 2010-07-10T13:14:01.218Z · LW(p) · GW(p)

Would people be interested in a description of someone with high social skills failing in a social situation (getting kicked out of a house)? I can't guarantee an unbiased account, as I was a player. But I think it might be interesting, purely as an example of how social situations, and what should be done in them, are not as simple as sometimes portrayed.

Replies from: Emile, rhollerith_dot_com, katydee
comment by Emile · 2010-07-10T15:23:49.014Z · LW(p) · GW(p)

Would people be interested in a description of someone with high social skills failing in a social situation (getting kicked out of a house)?

I'm not sure it's that relevant to rationality, but I think most humans (myself included!) are interested in hearing juicy gossip, especially if it features a popular trope such as "high status (but mildly disliked by the audience) person meets downfall".

How about this division of labor: you tell us the story and we come up with some explanation for how it relates to rationality, probably involving evolutionary psychology.

Replies from: whpearson
comment by whpearson · 2010-07-10T23:26:46.671Z · LW(p) · GW(p)

He is not high status as such, although he possibly could be if he didn't waste time being drunk.

Okay here goes the broad brush description of characters. Feel free to ask more questions to fill in details that you want.

Dramatis Personae

Mr G: Me. Tall scruffy geek. Takes little care of appearance. Tidy in social areas. Chats to everyone, remembers details of people's lives, although forgets people's names. Not particularly close (not Facebook friends with any of the others). Doesn't bring girls/friends home. Can tell a joke or make a humorous observation but not a master, can hold his own in banter though. Little evidence of social circle apart from occasional visits to friends far away. Accommodating to people's niggles and competent at fixing stuff that needs fixing. Does a fair amount of housework, because it needs doing. Has never suggested going out with the others, but has gone out by himself to random things. Is often out doing work at uni when others are at home. Shares some food with others, occasionally.

Miss C: Assertive, short, fairly plump Canadian supply teacher. Is mocked by Mr S for canadianisms, especially when teaching the children that the British idiom is wrong. For example saying that learnt is not a word. Young, not very knowledgeable about current affairs/world. Boyfriend back home. Has smoked pot. Drinks and parties on the weekend, generally going out with friends from home. Facebook friends with the other 2 (I think). Fairly liberal. Came into the house a week before Mr G. Watches a lot of TV in the shared area. Has family and friends visit occasionally.

Miss B: Works in digital marketing (does stuff on managing virals). Dry sense of humour. Boyfriend occasionally comes to visit, boyfriend is teacher who wants to be a stand up comedian. Is away most weekends, visiting family or boyfriend. Gets on with everyone on a surface level. Fairly pretty although not a stunner. Can banter a bit, but not much. Plays up to the "ditzy" personae sometimes.

Favourite Family Guy character is Brian. Scared of spiders/insects, to the extent that she dreamt a giant spider was on her pillow and didn't know it was a dream and shrieked (was investigated by Mr G to make sure there was nothing). Newest house mate by a couple of months. Probably a bit more conservative than the rest of the house.

Mr S: Self described cocky ex-skater. Well travelled. Older than the others. Takes care in his dress although not uber smart, has expensive trainers. Has had quite a few dates and 3 girlfriends of various lengths of time, in the 10 months I have been here. Fairly high quality girls, from the evidence I have seen. Talks to everyone. Witty, urbane. Generous with his food and drink.

Does some house work, makes sure everyone knows about it. Fairly emotional. He complains about Miss C not doing housework to Mr G, upon occasion. On one occasion when Mr G does not reciprocate in the complaints, he gets angry, but not in a serious way. Apologises the day after.

Self-identifies as a geek of sorts to Mr G to try and sway Mr G on various points. Mr G less than enthused. Asks people how stuff makes them feel.

His main problem is his drink and pot. He drinks himself into a stupor with almost clockwork regularity (he can be reliably zonked out on the living room sofa on a Sunday) and gets the munchies and steals food. He is always apologetic and replaces it when he does so and is confronted. Mr G doesn't confront him when he suspects food has gone missing, although Miss C seems to have most of her food stolen and is the most confrontational.

Often forgets conversations he has had when drunk. Gets angry upon occasion and crashes around slamming doors. He doesn't feel dangerous to Mr G at these points. Leaves the oven on with food in it while asleep on said sofa. Miss B is worried by this behaviour and tells Mr G. Mr G not overly worried for himself, but can see her point.

The final straw that led to his being kicked out was when he was found walking around naked by Miss B in the kitchen. Miss C had tried to get him kicked out previously for eating food. Mr G was away.

He has lived in this flat for a year with other people and I don't think his behaviour has changed, so why did this set of people get him kicked out, when others hadn't? I'm guessing the moral of the story is don't be an alcoholic in general. But some people put up with worse behaviour.

Replies from: Blueberry, RobinZ
comment by Blueberry · 2010-07-10T23:39:39.855Z · LW(p) · GW(p)

This description seems very British and I'm not quite clear on some of it. For instance, I had no idea what a strop is. Urban Dictionary defines it as sulking, being angry, or being in a bad mood.

Some of the other things seem like they would only make sense with more cultural context, specifically the emphasis on bantering and making witty remarks.

I wouldn't say that this guy has great social skills, given his getting drunk and stealing food, slamming doors and walking around naked, and so forth. Pretty much the opposite, in fact.

As to why he got kicked out, I guess people finally got tired of the way he acted, or this group of people was less tolerant of it.

Replies from: whpearson
comment by whpearson · 2010-07-10T23:49:46.275Z · LW(p) · GW(p)

By social skills I meant what people with Aspergers lack naturally. Magnetism/charisma, etc. It is hard to get that across in a textual description. People with poor social skills here know not to get drunk and wander around naked, but can't charm the pants off a pretty girl. The point of the story is that having charisma is not in itself the get-out-of-jail-free card it is sometimes described as here.

Sorry for the british-ness. It is hard to talk about social situations without thinking in my native idiom. I'll try and translate it tomorrow.

Replies from: Blueberry
comment by Blueberry · 2010-07-11T09:41:23.675Z · LW(p) · GW(p)

By social skills I meant what people with Aspergers lack naturally. Magnetism/charisma, etc. It is hard to get that across in a textual description. People with poor social skills here know not to get drunk and wander around naked, but can't charm the pants off a pretty girl.

You're conflating a few different things here. There's seduction ability, which is its own unique set of skills (it's very possible to be good at seduction but poor at social skills; some of the PUA gurus fall in this category). There's the ability to pick up social nuances in real-time, which is what people with Aspergers tend to learn slower than others (though no one has this "naturally"; it has to be learned through experience). There's knowledge of specific rules, like "don't wander around naked". And charisma or magnetism is essentially confidence and acting ability. These skillsets are all independent: you can be good at some and poor at others.

The point of the story is that having charisma is in itself not a get out of jail free card that is sometimes described here.

Well, of course not. For instance, if you punch someone in the face, they'll get upset regardless of your social skills in other situations. What this guy did was similar (though perhaps less extreme).

It is hard to talk about social situations without thinking in my native idiom.

Understood, and thanks for writing that story; it was really interesting. The whole British way of thinking is foreign to this clueless American, and I'm curious about it. (I'm also confused by the suggestion that being Facebook friends is a measure of intimacy.)

Replies from: whpearson, wedrifid
comment by whpearson · 2010-07-11T11:19:06.163Z · LW(p) · GW(p)

You're conflating a few different things here. There's seduction ability, which is its own unique set of skills (it's very possible to be good at seduction but poor at social skills; some of the PUA gurus fall in this category). There's the ability to pick up social nuances in real-time, which is what people with Aspergers tend to learn slower than others (though no one has this "naturally"; it has to be learned through experience). There's knowledge of specific rules, like "don't wander around naked". And charisma or magnetism is essentially confidence and acting ability. These skillsets are all independent: you can be good at some and poor at others.

Interesting; I wouldn't have said that they were as independent as you make out. I'd say it is unusual to be confident with good acting ability and not be able to read social nuances (how do you know how you should act?). And confidence is definitely part of the PUA skillset. Apart from that I'd agree, there are different levels of skill.

When sober he was fairly good at everything. He would steer the conversations where he wanted, generally organise the flat to his liking and not do anything stupid like going around naked. If you looked at our interactions as a group, he would have appeared the Alpha.

His excuse for wandering around naked was that he thought he was alone and that he should have the right to go into the kitchen naked if he wanted to. I.e. he tried to brazen it out. That might give you some idea of his attitude, what he expected to get away with and that he had probably gotten away with it in the past.

Apart from the lack of common sense (when very drunk), I think his main problem was underestimating people or at least not being able to read them. He was too reliant on his feeling of being the Alpha to realise his position was tenuous. No one was relying upon the flat as their main social group, so no one cared about him being Alpha of that group.

Well, of course not. For instance, if you punch someone in the face, they'll get upset regardless of your social skills in other situations. What this guy did was similar (though perhaps less extreme).

You might get upset but still not be able to do anything against the guy. See high school.

(I'm also confused by the suggestion that being Facebook friends is a measure of intimacy.)

People use Facebook in a myriad of different ways. Some people friend everyone they come across, which means their friends lists give little information. I use mine to keep an eye on the doings of people I care about; people I don't care about just add noise. So mine is more informative than most. Mr S is very promiscuous with it, with over 700 friends; I'm not sure about the other two.

comment by wedrifid · 2010-07-11T11:31:03.504Z · LW(p) · GW(p)

You're conflating a few different things here.

I just assumed that for the sake of brevity he covered the other aspects under "etc". I would add in "intuitive aptitude for Machiavellian social politics".

comment by RobinZ · 2010-07-11T00:06:47.368Z · LW(p) · GW(p)

Drinks and parties on the weekend, generally going out not with Miss B.

Do I correctly interpret this to say that both Miss C and Miss B go out (drinking?) on the weekends, but not together?

Replies from: whpearson
comment by whpearson · 2010-07-11T06:44:21.783Z · LW(p) · GW(p)

Yup. Sorry, that wasn't clear.

comment by RHollerith (rhollerith_dot_com) · 2010-07-10T18:53:09.161Z · LW(p) · GW(p)

Yes. And do not hesitate to use many many words.

comment by katydee · 2010-07-10T19:00:10.008Z · LW(p) · GW(p)

Definitely.

comment by Blueberry · 2010-08-06T12:32:53.133Z · LW(p) · GW(p)

Heh, that makes Roko's scenario similar to the Missionary Paradox: if only those who know about God but don't believe go to hell, it's harmful to spread the idea of God. (As I understand it, this doesn't come up because most missionaries think you'll go to hell even if you don't know about the idea of God.)

But I don't think any God is supposed to follow a human CEV; most religions seem to think it's the other way around.

comment by bogus · 2010-07-26T15:37:58.609Z · LW(p) · GW(p)

Daniel Dennett and Linda LaScola on Preachers who are not believers:

There are systemic features of contemporary Christianity that create an almost invisible class of non-believing clergy, ensnared in their ministries by a web of obligations, constraints, comforts, and community. ... The authors anticipate that the discussion generated on the Web (at On Faith, the Newsweek/Washington Post website on religion, link) and on other websites will facilitate a larger study that will enable the insights of this pilot study to be clarified, modified, and expanded.

comment by Vladimir_Nesov · 2010-07-22T13:14:45.160Z · LW(p) · GW(p)

Paul Graham on guarding your creative productivity:

I'd noticed startups got way less done when they started raising money, but it was not till we ourselves raised money that I understood why. The problem is not the actual time it takes to meet with investors. The problem is that once you start raising money, raising money becomes the top idea in your mind. That becomes what you think about when you take a shower in the morning. And that means other questions aren't. [...]

You can't directly control where your thoughts drift. If you're controlling them, they're not drifting. But you can control them indirectly, by controlling what situations you let yourself get into. That has been the lesson for me: be careful what you let become critical to you. Try to get yourself into situations where the most urgent problems are ones you want to think about.

comment by MBlume · 2010-07-17T00:50:50.184Z · LW(p) · GW(p)

So my brother was watching Bullshit, and saw an exorcist claim that whenever a kid mentions having an invisible friend, they (the exorcist) tell the kid that the friend is a demon that needs exorcising.

Now, being a professional exorcist does not give a high prior for rationality.

But still, even given that background, that's a really uncritically stupid thing to say. And it occurred to me that in general, humans say some really uncritically stupid things to children.

I wonder if this uncriticality has anything to do with, well, not expecting to be criticized. If most of the hacks that humans use in place of rationality are socially motivated, we can safely turn them off when speaking to a child who doesn't know any better.

I wonder how much benefit we'd get, then, by imagining ourselves in all our internal dialogues to be speaking to someone very critical, and far smarter than us?

Replies from: LucasSloan, jimmy
comment by LucasSloan · 2010-07-17T02:52:44.498Z · LW(p) · GW(p)

I wonder how much benefit we'd get, then, by imagining ourselves in all our internal dialogues to be speaking to someone very critical, and far smarter than us?

Probably not very, because we can't actually imagine what that hypothetical person would say to us. It'd probably end up used as a way to affirm your positions by only testing strong points.

Replies from: rwallace
comment by rwallace · 2010-07-27T16:01:52.562Z · LW(p) · GW(p)

While I have difficulty imagining what someone far smarter than myself would say, what I can do is imagine explaining myself to a smart person who doesn't have my particular set of biases and hangups; and I find that does sometimes help.

comment by jimmy · 2010-07-17T05:35:03.874Z · LW(p) · GW(p)

I wonder how much benefit we'd get, then, by imagining ourselves in all our internal dialogues to be speaking to someone very critical, and far smarter than us?

I do it sometimes, and I think it helps.

Replies from: None
comment by [deleted] · 2010-07-26T14:44:23.860Z · LW(p) · GW(p)

I do it too - using some of the smarter and more critical posters on LW, actually - and I also think it helps. I think this defuses some of LucasSloan's criticisms below - if it's a real person, you can to a reasonable extent imagine how they might reply.

I think it works because placing yourself in a conflict (even an imaginary one) narrows and sharpens your focus as the subconscious processes get activated that try to 'win' it.

The risk, though, is that like any opinion formed or argued under the influence of an emotion, you become unreasonably certain of it.

Replies from: jimmy
comment by jimmy · 2010-07-26T18:10:17.849Z · LW(p) · GW(p)

I don't get the 'conflict' feeling when I do it. It feels more like 'betting mode', but with more specific counterarguments. Since it's all imaginary anyway, I don't feel committed enough to one side to activate conflict mode.

comment by mindviews · 2010-07-11T09:07:23.643Z · LW(p) · GW(p)

An akrasia-fighting tool via Hacker News via Scientific American, based on this paper. Read the Scientific American article for the short version. My super-short summary is that in self-talk, asking "Will I?" rather than telling yourself "I will" can be more effective at achieving success in goal-directed behavior. Looks like a useful tool to me.

Replies from: Vladimir_Golovin, Nisan
comment by Vladimir_Golovin · 2010-07-11T10:30:14.712Z · LW(p) · GW(p)

This implies that the mantra "Will I become a syndicated cartoonist?" could be more effective than the original affirmative version, "I will become a syndicated cartoonist".

comment by Nisan · 2010-07-17T20:01:35.427Z · LW(p) · GW(p)

It might also be a useful tool for attaining self-knowledge outside of goal-directed behavior. Consider this passage from The Aleph:

Turning the corner of Bernardo de Irigoyen, I reviewed as impartially as possible the alternatives before me. They were: a) to speak to Álvaro, telling him the first cousin of Beatriz' (the explanatory euphemism would allow me to mention her name) had concocted a poem that seemed to draw out into infinity the possibilities of cacophony and chaos: b) not to say a word to Álvaro. I clearly foresaw that my indolence would opt for b.

comment by Barry_Cotter · 2010-07-09T11:19:52.521Z · LW(p) · GW(p)

What's the deal with programming as a career? It seems like the lower levels at least should be readily accessible even to people of thoroughly average intelligence, but I've read a lot that leads me to believe the average professional programmer is borderline incompetent.

E.g., FizzBuzz. Apparently most people who come into an interview won't be able to do it. Now, I can't code or anything, but computers do only and exactly what you tell them (assuming you're not dealing with a thicket of code so dense it has emergent properties), so here's what I'd tell the computer to do:

# Proceed from 0 to x, in increments of 1, (where x =whatever) If divisible by 3, remainder 0, associate fizz with number If divisible by 5, remainder 0, associate buzz with number, Make ordered list from o to x, of numbers associated with fizz OR buzz For numbers associated with fizz NOT buzz, append fizz For numbers associated with buzz NOT fizz, append fizz For numbers associated with fizz AND buzz, append fizzbuzz #

I ask out of interest in acquiring money, on elance, rentacoder, odesk etc. I'm starting from a position of total ignorance, but y'know, it doesn't seem like learning C and understanding Concrete Mathematics and TAOCP in a useful or even deep way would be the work of more than a year, while it would place one well above average in some domains of this activity.

Or have I missed something really obvious and important?

Replies from: twanvl, wedrifid, JRMayne, cousin_it, gwern, Daniel_Burfoot, Morendil, MartinB, None
comment by twanvl · 2010-07-09T12:20:20.595Z · LW(p) · GW(p)

I have no numbers for this, but the idea is that after interviewing for a job, competent people get hired, while incompetent people do not. These incompetents then have to interview for other jobs, so they will be seen more often, and complained about a lot. So perhaps the perceived prevalence of incompetent programmers is a result of availability bias (?).

This theory does not explain why this problem occurs in programming but not in other fields. I don't even know whether that is true. Maybe the situation is the same elsewhere, and I am biased here because I am a programmer.

Replies from: Emile
comment by Emile · 2010-07-09T20:13:31.005Z · LW(p) · GW(p)

Joel Spolsky gave a similar explanation.

That means, in this horribly simplified universe, that the entire world could consist of 1,000,000 programmers, of whom the worst 199 keep applying for every job and never getting them, but the best 999,801 always get jobs as soon as they apply for one. So every time a job is listed the 199 losers apply, as usual, and one guy from the pool of 999,801 applies, and he gets the job, of course, because he's the best, and now, in this contrived example, every employer thinks they're getting the top 0.5% when they're actually getting the top 99.9801%.

Makes sense.
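(A toy version of that arithmetic in Python, using the made-up numbers from the quote rather than any real data:)

programmers = 1000000
weak = 199                        # keep applying for every job, never hired
strong = programmers - weak       # apply once, get hired, leave the pool

# Each job opening sees the same 199 weak applicants plus one strong one.
applicants_per_job = weak + 1
print float(weak) / applicants_per_job    # ~0.995: share of weak applicants interviewers see
print float(weak) / programmers           # ~0.0002: actual share of weak programmers

Interviewers end up sampling from the applicant pool rather than from the programmer pool, which is the availability bias you described.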

I'm a programmer, and haven't noticed that many horribly incompetent programmers (which could count as evidence that I'm one myself!).

Replies from: sketerpot
comment by sketerpot · 2010-07-10T20:36:52.914Z · LW(p) · GW(p)

Do you consider fizzbuzz trivial? Could you write an interpreter for a simple Forth-like language, if you wanted to? If the answers to these questions are "yes", then that's strong evidence that you're not a horribly incompetent programmer.

Is this reassuring?
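For reference, here's roughly what I have in mind by "a simple Forth-like language": a stack, a token loop, and a handful of built-in words. This is just one minimal sketch in Python; a real Forth would also let you define new words.

def run_forth(source):
    # A tiny Forth-like interpreter: numbers are pushed onto a stack,
    # and the words + - * dup swap . operate on that stack.
    stack = []
    for token in source.split():
        if token == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == '-':
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        elif token == '*':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif token == 'dup':
            stack.append(stack[-1])
        elif token == 'swap':
            b, a = stack.pop(), stack.pop()
            stack.extend([b, a])
        elif token == '.':
            print stack.pop()
        else:
            stack.append(int(token))    # everything else is a number literal
    return stack

run_forth("2 3 + 4 * .")    # prints 20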

Replies from: Emile, cousin_it
comment by Emile · 2010-07-10T21:05:33.335Z · LW(p) · GW(p)

Do you consider fizzbuzz trivial?

Yes

Could you write an interpreter for a simple Forth-like language, if you wanted to?

Probably; I made a simple lambda-calculus interpreter once and started working on a Lisp parser (I don't think I got much further than the 'parsing' bit). Forth looks relatively simple, though correctly parsing quotes and comments is always a bit tricky.

Of course, I don't think I'm a horribly incompetent programmer -- like most humans, I have a high opinion of myself :D

comment by cousin_it · 2010-07-10T20:57:09.145Z · LW(p) · GW(p)

I'm probably not horribly incompetent (evidence: this and this), but there exist people who are miles above me, e.g. John Carmack (read his .plan files for a quick fix of inferiority) or Inigo Quilez who wrote the 4kb Elevated demo. Thinking you're "good enough" is dangerous.

comment by wedrifid · 2010-07-09T15:27:34.343Z · LW(p) · GW(p)

What's the deal with programming, as a careeer? It seems like the lower levels at least should be readily accessible even to people of thoroughly average intelligence but I've read a lot that leads me to believe the average professional programmer is borderline incompetent.

From what I can tell the average person is borderline incompetent when it comes to the 'actually getting work done' part of a job. It is perhaps slightly more obvious with a role such as programming where output is somewhat closer to the level of objective physical reality.

comment by JRMayne · 2010-07-10T03:06:37.221Z · LW(p) · GW(p)

    Proceed from 0 to x, in increments of 1 (where x = whatever)
    If divisible by 3, remainder 0, associate fizz with number
    If divisible by 5, remainder 0, associate buzz with number
    Make ordered list from 0 to x, of numbers associated with fizz OR buzz
    For numbers associated with fizz NOT buzz, append fizz
    For numbers associated with buzz NOT fizz, append fizz
    For numbers associated with fizz AND buzz, append fizzbuzz

I don't know anything about FizzBuzz, but your program generates no buzzes and lots of fizzes (appending fizz to numbers associated only with fizz or buzz.) This is not a particularly compelling demonstration of your point that it should be easy.

(I'm not a programmer, at least not professionally. The last serious program I wrote was 23 years ago in Fortran.)

Replies from: sketerpot
comment by sketerpot · 2010-07-10T20:34:38.390Z · LW(p) · GW(p)

The bug would have been obvious if the pseudocode had been indented. I'm convinced that a large fraction of beginner programming bugs arise from poor code formatting. (I got this idea from watching beginners make mistakes, over and over again, which would have been obvious if they had heeded my dire warnings and just frickin' indented their code.)

Actually, maybe this is a sign of a bigger conceptual problem: a lot of people see programs as sequences of instructions, rather than a tree structure. Indentation seems natural if you hold the latter view, and pointless if you can only perceive programs as serial streams of tokens.
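To make the tree-structure point concrete, here's a rough sketch of my own (not anyone's actual submission) where the nesting is visible at a glance:

    # Each level of indentation is a level in the program's tree.
    for n in range(1, 101):
        if n % 3 == 0:
            if n % 5 == 0:
                print("FizzBuzz")   # nested: child of both tests
            else:
                print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)

Read it flat and it's just a stream of tokens; read it as a tree and it's obvious which print belongs to which condition.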

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-07-10T22:20:30.360Z · LW(p) · GW(p)

I got this idea from watching beginners make mistakes, over and over again, which would have been obvious if they had heeded my dire warnings and just frickin' indented their code.

This seems to predict that python solves this problem. Do you have any experience watching beginners with python? (Your second paragraph suggests that indentation is just the symptom and python won't help.)

comment by cousin_it · 2010-07-09T12:07:48.044Z · LW(p) · GW(p)

Your general point is right. Ever since I started programming, it always felt like money for free. As long as you have the right mindset and never let yourself get intimidated.

Your solution to FizzBuzz is too complex and uses data structures ("associate whatever with whatever", then ordered lists) that it could've done without. Instead, do this:

for x in range(1, 101):
    fizz = (x%3 == 0)
    buzz = (x%5 == 0)
    if fizz and buzz:
        print "FizzBuzz"
    elif fizz:
        print "Fizz"
    elif buzz:
        print "Buzz"
    else:
        print x

This is runnable Python code. (NB: to write code in comments, indent each line by four spaces.) Python is a simple language, maybe the best for beginners among all mainstream languages. Download the interpreter and use it to solve some Project Euler problems for finger exercises, because most actual programming tasks are a wee bit harder than FizzBuzz.
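For instance, the very first Project Euler problem (find the sum of all multiples of 3 or 5 below 1000, if I remember the statement right) is only a small step up from FizzBuzz:

    # Sum the multiples of 3 or 5 below 1000.
    print sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)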

Replies from: SilasBarta, Liron
comment by SilasBarta · 2010-07-09T12:41:43.341Z · LW(p) · GW(p)

How did you first find work? How do you usually find work, and what would you recommend competent programmers do to get started in a career?

Replies from: jimrandomh, cousin_it, xamdam
comment by jimrandomh · 2010-07-09T15:02:25.838Z · LW(p) · GW(p)

The least-effort strategy, and the one I used for my current job, is to talk to recruiting firms. They have access to job openings that are not announced publicly, and they have strong financial incentives to get you hired. The usual structure, at least for those I've worked with, is that the prospective employee pays nothing, while the employer pays some fraction of a year's salary for a successful hire, where success is defined by lasting longer than some duration.

(I've been involved in hiring at the company I work for, and most of the candidates fail the first interview on a question of comparable difficulty to fizzbuzz. I think the problem is that there are some unteachable intrinsic talents necessary for programming, and many people irrevocably commit to getting comp sci degrees before discovering that they can't be taught to program.)

Replies from: Vladimir_Nesov, SilasBarta
comment by Vladimir_Nesov · 2010-07-09T15:12:45.128Z · LW(p) · GW(p)

I think the problem is that there are some unteachable intrinsic talents necessary for programming, and many people irrevocably commit to getting comp sci degrees before discovering that they can't be taught to program.

I think there are failure modes from the curiosity-stopping anti-epistemology cluster that allow you to fail to learn indefinitely, because you don't recognize what you need to learn, and so never manage to actually learn it. With the right approach, anyone who is not seriously stupid could be taught (but it might take lots of time and effort, so it's often not worth it).

comment by SilasBarta · 2010-07-09T17:06:43.846Z · LW(p) · GW(p)

Do recruiting firms require that you have formal programming credentials?

Replies from: jimrandomh
comment by jimrandomh · 2010-07-09T17:24:49.540Z · LW(p) · GW(p)

Formal credentials certainly help, but I wouldn't say they're required, as long as you have something (such as a completed project) to prove you have skills.

comment by cousin_it · 2010-07-09T13:05:12.028Z · LW(p) · GW(p)

My first paying job was webmaster for a Quake clan that was administered by some friends of my parents. I was something like 14 or 15 then, and never stopped working since (I'm 27 now). Many people around me are aware of my skills, so work usually comes to me; I had about 20 employers (taking different positions on the spectrum from client to full-time employer) but I don't think I ever got hired the "traditional" way with a resume and an interview.

Right now my primary job is a fun project we started some years ago with my classmates from school, and it's grown quite a bit since then. My immediate boss is a former classmate of mine, and our CEO is the father of another of my classmates; moreover, I've known him since I was 12 or so when he went on hiking trips with us. In the past I've worked for friends of my parents, friends of my friends, friends of my own, people who rented a room at one of my schools, people who found me on the Internet, people I knew from previous jobs... Basically, if you need something done yesterday and your previous contractor was stupid, contact me and I'll try to help :-)

ETA: I just noticed that I didn't answer your last question. Not sure what to recommend to competent programmers because I've never needed to ask others for recommendations of this sort (hah, that pattern again). Maybe it's about networking: back when I had a steady girlfriend, I spent about three years supporting our "family" alone by random freelance work, so naturally I learned to present a good face to people. Maybe it's about location: Moscow has a chronic shortage of programmers, and I never stop searching for talented junior people myself.

Replies from: Blueberry
comment by Blueberry · 2010-07-09T16:03:42.196Z · LW(p) · GW(p)

I was very surprised by this until I read the word "Moscow."

Replies from: cousin_it, gwern
comment by cousin_it · 2010-07-09T16:20:07.538Z · LW(p) · GW(p)

Is it different in the US? I imagined it was even easier to find a job in the Valley than in Moscow.

comment by gwern · 2010-07-10T10:08:53.574Z · LW(p) · GW(p)

I was unsurprised by this until I read the word "Moscow". (Russian programmers & mathematicians seem to always be heading west for jobs.)

comment by xamdam · 2010-07-09T16:29:27.929Z · LW(p) · GW(p)

I took an internship after college. Professors can always use (exploit) programming labor. That gives you semi-real experience (might be very real if the professor is good) and allows you to build credibility and confidence.

comment by Liron · 2010-07-09T16:52:09.756Z · LW(p) · GW(p)

Python tip: Using "range" creates a big list in memory, which is a waste of space. If you use xrange, you get an iterable object that only uses a single counter variable.
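A rough way to see the difference in Python 2 (exact numbers vary by build, but the orders of magnitude don't):

    import sys
    r = range(10 ** 6)       # builds a million-element list up front
    x = xrange(10 ** 6)      # lazy object that produces values on demand
    print sys.getsizeof(r)   # several megabytes just for the list's pointer array
    print sys.getsizeof(x)   # a few dozen bytes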

Replies from: cousin_it, Emile
comment by cousin_it · 2010-07-09T16:57:44.076Z · LW(p) · GW(p)

Hah. I first wrote the example using xrange, then changed it to range to make it less confusing to someone who doesn't know Python :-)

comment by Emile · 2010-07-09T20:22:21.283Z · LW(p) · GW(p)

Not in Python 3! range in Python 3 works like xrange in the previous versions (and xrange doesn't exist any more).

(but the print functions would use a different syntax)

Replies from: apophenia, billswift, Liron
comment by apophenia · 2010-07-10T19:30:40.685Z · LW(p) · GW(p)

In fact, range in Python 2.5ish and above works the same, which is why they removed xrange in 3.0.

comment by billswift · 2010-07-10T11:18:24.169Z · LW(p) · GW(p)

There was a discussion of transitioning to Python 3 on HN a week or two ago; apparently there are going to be a lot of programmers, and even more shops, holding off on transitioning, because it will break too many existing programs. (I haven't tried Python since version 1, so I don't know anything about it myself.)

Replies from: Emile
comment by Emile · 2010-07-10T15:13:22.601Z · LW(p) · GW(p)

A big problem with transitioning to Python 3 is that there are quite a few third-party libraries that don't support it (including two I use regularly - SciPy and Pygame). Some bits of the syntax are different, but that shouldn't be a huge issue except for big codebases, since there's a script to convert Python 2.6 to 3.0.

I've used Python 3 but had to switch back to 2.6 so I could keep using those libraries :P
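(For what it's worth, the conversion script is, I believe, the 2to3 tool that ships with Python; a made-up example of the kind of rewrite it does:)

    # Hypothetical foo.py written for Python 2:
    #     print "hello", xrange(3)
    # Rewriting it in place (the -w flag writes the changes back to the file):
    #     2to3 -w foo.py
    # yields Python 3 syntax:
    #     print("hello", range(3))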

comment by Liron · 2010-07-10T01:49:22.333Z · LW(p) · GW(p)

Cool

comment by gwern · 2010-07-10T10:04:28.828Z · LW(p) · GW(p)

Most people find the concept of programming obvious, but the doing impossible.

--"Epigrams in Programming", by Alan J. Perlis; ACM's SIGPLAN publication, September, 1982

comment by Daniel_Burfoot · 2010-07-09T13:52:39.512Z · LW(p) · GW(p)

I've read a lot that leads me to believe the average professional programmer is borderline incompetent.

Programming as a field exhibits a weird bimodal distribution of talent. Some people are just in it for the paycheck, but others think of it as a true intellectual and creative challenge. Not only does the latter group spend extra hours perfecting their art, they also tend to be higher-IQ. Most of them could make better money in the law/medicine/MBA path. So obviously the "programming is an art" group is going to have a low opinion of the "programming is a paycheck" group.

Replies from: gwern, Blueberry, apophenia
comment by gwern · 2010-07-10T10:06:18.692Z · LW(p) · GW(p)

Programming as a field exhibits a weird bimodal distribution of talent.

Do we have any refs for this? I know there's "The Camel Has Two Humps" (Alan Kay on it, the PDF), but anything else?

Replies from: Sniffnoy, Daniel_Burfoot
comment by Sniffnoy · 2010-07-10T18:19:23.033Z · LW(p) · GW(p)

Going by his other papers, though, it looks like the effect isn't nearly so strong as was originally claimed. (Though that's with respect to whether his "consistency test" works; I didn't check whether the bimodality still holds.)

comment by Daniel_Burfoot · 2010-07-10T15:19:22.401Z · LW(p) · GW(p)

No, just personal experience and observation backed up by stories and blog posts from other people. See also Joel Spolsky on Hitting the High Notes. Spolsky's line is that some people are just never going to be that good at programming. I'd rephrase it as: some people are just never going to be motivated to spend long hours programming for the sheer fun and challenge of it, and so they're never going to be that good at programming.

Replies from: RobinZ
comment by RobinZ · 2010-07-10T16:05:08.274Z · LW(p) · GW(p)

I'd rephrase it as: some people are just never going to be motivated to spend long hours programming for the sheer fun and challenge of it, and so they're never going to be that good at programming.

This is a good null hypothesis for skill variation in many cases, but not one supported by the research in the paper gwern linked.

comment by Blueberry · 2010-07-09T16:01:02.347Z · LW(p) · GW(p)

Most of them could make better money in the I-Bank/medicine/MBA path.

Fixed that for you. :) (I'm a current law student.)

comment by apophenia · 2010-07-10T19:29:15.461Z · LW(p) · GW(p)

In addition to this, if you're a good bricklayer, you might do, at most, twice the work of a bad bricklayer. It's quite common for an excellent programmer (a hacker) to do more work than ten average programmers--and that's conservative. The difference is more apparent. My guess might be that you hear this complaint from good programmers, Barry?

Although, I can guarantee that everyone I've met can do at least FizzBuzz. We have average programmers, not downright bad ones.

comment by Morendil · 2010-07-09T12:46:22.220Z · LW(p) · GW(p)

I'll second the suggestion that you try your hand at some actual programming tasks, relatively easy ones to start with, and see where that gets you.

The deal with programming is that some people grok it readily and some don't. There seems to be some measure of talent involved that conscientious hard work can't replace.

Still, it seems to me (I have had a post about this in the works for ages) that anyone keen on improving their thinking can benefit from giving programming a try. It's like math in that respect.

comment by MartinB · 2010-07-09T11:39:54.755Z · LW(p) · GW(p)

For one, I think you overestimate human curiosity: not everyone implements prime searching or Conway's game of life for fun. For two, even those that implement their own fun projects are not necessarily great programmers; it seems there are those that get pointers, and the others. For three, where does a company advertise? There is a lot of mass mailing going on by incompetent folks. I recently read Joel Spolsky's book on how to hire great talent, and he makes the point that the really great programmers just never appear on the market anyway.

http://abstrusegoose.com/strips/ars_longa_vita_brevis.PNG

Replies from: sketerpot
comment by sketerpot · 2010-07-10T20:46:31.853Z · LW(p) · GW(p)

It seems there are those that get pointers, and the others.

Are there really people who don't get pointers? I'm having a hard time even imagining this. Pointers really aren't that hard, if you take a few hours to learn what they do and how they're used.

Alternately, is my reaction a sign that there really is a profoundly bimodal distribution of programming aptitudes?

Replies from: wedrifid, cata, apophenia, Kingreaper, rwallace
comment by wedrifid · 2010-07-11T01:35:47.177Z · LW(p) · GW(p)

Are there really people who don't get pointers? I'm having a hard time even imagining this. Pointers really aren't that hard, if you take a few hours to learn what they do and how they're used.

There really are people who would not take that few hours.

comment by cata · 2010-07-10T21:16:58.148Z · LW(p) · GW(p)

I don't know if this counts, but when I was about 9 or 10 and learning C (my first exposure to programming) I understood input/output, loops, functions, variables, but I really didn't get pointers. I distinctly remember my dad trying to explain the relationship between the * and & operators with box-and-pointer diagrams and I just absolutely could not figure out what was going on. I don't know whether it was the notation or the concept that eluded me. I sort of gave up on it and stopped programming C for a while, but a few years later (after some Common Lisp in between), when I revisited C and C++ in high school programming classes, it seemed completely trivial.

So there might be some kind of level of abstract-thinking-ability which is a prerequisite to understanding such things. No comment on whether everyone can develop it eventually or not.

comment by apophenia · 2010-07-10T22:18:52.843Z · LW(p) · GW(p)

There are really people who don't get pointers.

Replies from: Morendil
comment by Morendil · 2010-07-10T22:30:20.423Z · LW(p) · GW(p)

One of the epiphanies of my programming career was when I grokked function pointers. For a while prior to that I really struggled to even make sense of that idea, but when it clicked it was beautiful. (By analogy I can sort of understand what it might be like not to understand pointers themselves.)

Then I hit on the idea of embedding a function pointer in a data structure, so that I could change the function pointed to depending on some environmental parameters. Usually, of course, the first parameter of that function was the data structure itself...

Replies from: apophenia
comment by apophenia · 2010-07-10T22:59:46.892Z · LW(p) · GW(p)

Usually, of course, the first parameter of that function was the data structure itself...

Cute. Sad, but that's already more powerful than straight OO. Python and Ruby support adding/rebinding methods at runtime (one reason duck typing is more popular these days). You might want to look at functional programming if you haven't yet, since you've no doubt progressed since your epiphany. I've heard nice things about statically typed languages such as Haskell and O'Caml, and my personal favorite is Scheme.
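For example, rebinding a method at runtime looks something like this (class and function names made up):

    class Door(object):
        def open(self):
            return "creak"

    def silent_open(self):
        return "whoosh"

    d = Door()
    print(d.open())          # creak
    Door.open = silent_open  # rebind the method on the class at runtime
    print(d.open())          # whoosh -- existing instances pick up the new behavior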

Replies from: sketerpot
comment by sketerpot · 2010-07-11T17:50:09.048Z · LW(p) · GW(p)

Oddly enough, I think Morendil would get a real kick out of JavaScript. So much in JS involves passing functions around, usually carrying around some variables from their enclosing scope. That's how the OO works; it's how you make callbacks seem natural; it even lets you define new control-flow structures like jQuery's each() function, which lets you pass in a function which iterates over every element in a collection.

The clearest, most concise book on this is Doug Crockford's Javascript: The Good Parts. Highly recommended.

Replies from: apophenia
comment by apophenia · 2010-07-13T04:09:58.641Z · LW(p) · GW(p)

The technical term for this is a closure. A closure is a first-class* function with some associated state. For example, in Scheme, here is a function which returns counters, each with its own internal ticker:

    (define (make-counter)
      (let ([internal-variable 0])
        (lambda ()
          (begin (set! internal-variable (+ internal-variable 1))
                 internal-variable))))

To create a counter, you'd do something like

    (define counter1 (make-counter))
    (define counter2 (make-counter))

Then, to get values from the counter, you could call something like

    (display (counter1))
    (display (counter2))
    (display (counter2))
    (display (counter2))
    (display (counter1))
    (display (counter2))

which prints: 1, 1, 2, 3, 2, 4.

Here is the same example in Python, since that's what most people seem to be posting in:

    def make_counter():
        internal_variable = [0]   # the list cell plays the role of the Scheme let-bound variable
        def counter():
            internal_variable[0] += 1
            return internal_variable[0]
        return counter

    counter1 = make_counter()
    counter2 = make_counter()

    print(counter1())
    print(counter2())
    print(counter2())
    print(counter2())
    print(counter1())
    print(counter2())

*That is, a function which you can pass around like a value.

Replies from: sketerpot
comment by sketerpot · 2010-07-13T05:40:21.560Z · LW(p) · GW(p)

While we're sharing fun information, I'd like to point out a little-used feature of Markdown syntax: if you put four spaces before a line, it's treated as code. Behold:

(define (make-counter)
  (let ([internal-variable 0])
    (lambda ()
      (begin (set! internal-variable (+ internal-variable 1))
         internal-variable))))

Also, the emacs rectangle editing functions are good for this. C-x r t is a godsend.

comment by Kingreaper · 2010-07-11T02:29:03.641Z · LW(p) · GW(p)

I suspect it's like how my brain reacts to negative numbers, or decimals; I have no idea how anyone could fail to understand them. But some people do.

And, due to my tendency to analyse mistakes I make (especially factual errors) I remember the times when I got each one of those wrong. I even remember the logic I used.

But they've become so ingrained in my brain now that failure to understand them is nigh inconceivable.

comment by rwallace · 2010-07-10T23:48:06.316Z · LW(p) · GW(p)

There is a difference in aptitude, but part of the problem is that pointers are almost never explained correctly. Many texts try to explain in abstract terms, which doesn't work; a few try to explain graphically, which doesn't work terribly well. I've met professional C programmers who therefore never understood pointers, but who did understand them after I gave them the right explanation.

The right explanation is in terms of numbers: the key is that char *x actually means the same thing as int x (on a 32-bit machine, and modulo some superficial convenience). A pointer is just an integer that gets used to store a memory address. Then you write out a series of numbered boxes starting at e.g. 1000, to represent memory locations. People get pointers when you put it like that.
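Here's that same picture acted out in Python, if it helps (the addresses and names are of course made up; the point is just that the pointer is nothing but a box number):

    # Pretend memory: numbered boxes starting at 1000.
    memory = {1000: 0, 1001: 0, 1002: 0, 1003: 0}

    x_addr = 1001          # "int x" lives in box 1001
    memory[x_addr] = 42    # x = 42

    p = x_addr             # "int *p = &x": the pointer is just the box number
    memory[p] = 7          # "*p = 7": write through the pointer
    print(memory[x_addr])  # prints 7 -- x changed, because p held x's address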

comment by [deleted] · 2010-07-11T03:25:38.928Z · LW(p) · GW(p)

Yeah, pretty much anyone who isn't appallingly stupid can become a reasonably good programmer in about a year. Be warned though, the kinds of people who make good programmers are also the kind of people who spontaneously find themselves recompiling their Linux kernel in order to get their patched wifi drivers to work...

Replies from: RobinZ
comment by RobinZ · 2010-07-11T03:28:29.255Z · LW(p) · GW(p)

xkcd reference!

Replies from: None
comment by [deleted] · 2010-07-11T04:44:45.525Z · LW(p) · GW(p)

Dammit! That'll be shouted at my funeral!

comment by jimmy · 2010-07-15T22:21:30.995Z · LW(p) · GW(p)

http://www.slate.com/blogs/blogs/thewrongstuff/archive/2010/06/28/risky-business-james-bagian-nasa-astronaut-turned-patient-safety-expert-on-being-wrong.aspx

This article is pretty cool, since it describes someone running quality control on a hospital from an engineering perspective. He seems to have a good understanding of how stuff works, and it reads like something one might see on lesswrong.

comment by Yoreth · 2010-07-11T04:25:06.855Z · LW(p) · GW(p)

Is there any philosophy worth reading?

As far as I can tell, a great deal of "philosophy" (basically the intellectuals' wastebasket taxon) consists of wordplay, apologetics, or outright nonsense. Consequently, for any given philosophical work, my prior strongly favors not reading it because the expected benefit won't outweigh the cost. It takes a great deal of evidence to tip the balance.

For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence. I've also heard that Descartes "proves" that God exists. Now, whether or not Hegel or Descartes may have had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.

However, at the same time I'm concerned that this leads me to read things that only reinforce the beliefs I already have. And there's little point in seeking information if it doesn't change your beliefs.

It's a complicated question what purpose philosophy serves, but I wouldn't be posting here if I thought it served none. So my question is: What philosophical works and authors have you found especially valuable, for whatever reason? Perhaps the recommendations of such esteemed individuals as yourselves will carry enough evidentiary weight that I'll actually read the darned things.

Replies from: Tyrrell_McAllister, orthonormal, Vladimir_M, JoshuaZ, mindviews, Larks, Emile, wedrifid, zero_call, Bongo
comment by Tyrrell_McAllister · 2010-07-11T23:37:18.647Z · LW(p) · GW(p)

So my question is: What philosophical works and authors have you found especially valuable, for whatever reason?

You might find it more helpful to come at the matter from a topic-centric direction, instead of an author-centric direction. Are there topics that interest you, but which seem to be discussed mostly by philosophers? If so, which community of philosophers looks like it is exploring (or has explored) the most productive avenues for understanding that topic?

comment by orthonormal · 2010-07-11T23:19:26.269Z · LW(p) · GW(p)

Remember that philosophers, like everyone else, lived before the idea of motivated cognition was fully developed; it was commonplace to have theories of epistemology which didn't lead you to be suspicious enough of your own conclusions. You may be holding them to too high a standard by pointing to some of their conclusions, when some of their intermediate ideas and methods are still of interest and value today.

However, you should be selective of who you read. Unless you're an academic philosopher, for instance, reading a modern synopsis of Kantian thought is vastly preferable to trying to read Kant yourself. For similar reasons, I've steered clear of Hegel's original texts.

Unfortunately for the present purpose, I myself went the long way (I went to a college with a strong Great Books core in several subjects), so I don't have a good digest to recommend. Anyone else have one?

comment by Vladimir_M · 2010-07-11T07:13:00.139Z · LW(p) · GW(p)

Yoreth:

For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence. I've also heard that Descartes "proves" that God exists. Now, whether or not Hegel or Descartes may have had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.

That's an extremely bad way to draw conclusions. If you were living 300 years ago, you could have similarly heard that some English dude named Isaac Newton is spending enormous amounts of time scribbling obsessive speculations about Biblical apocalypse and other occult subjects -- and concluded that even if he had some valid insights about physics, it wouldn't be worth your time to go looking for them.

Replies from: Emile, wedrifid
comment by Emile · 2010-07-11T12:55:18.589Z · LW(p) · GW(p)

The value of Newton's theories themselves can quite easily be checked, independently of the quality of his epistemology.

For a philosopher like Hegel, it's much harder to dissociate the different bits of what he wrote, and if one part looks rotten, there's no obvious place to cut.

(What's more, Newton's obsession with alchemy would discourage me from reading whatever Newton had to say about science in general)

comment by wedrifid · 2010-07-11T08:19:19.728Z · LW(p) · GW(p)

That's an extremely bad way to draw conclusions.

A bad way to draw conclusions. A good way to make significant updates based on inference.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-07-11T09:18:28.074Z · LW(p) · GW(p)

Would you be so kind as to spell out the exact sort of "update based on inference" that applies here?

Replies from: wedrifid
comment by wedrifid · 2010-07-11T11:13:00.310Z · LW(p) · GW(p)

???

"People who say stupid things are, all else being equal, more likely to say other stupid things in related areas".

Replies from: Vladimir_M, Bongo
comment by Vladimir_M · 2010-07-11T18:12:59.633Z · LW(p) · GW(p)

That's a very vague statement, however. How exactly should one identify those expressions of stupid opinions that are relevant enough to imply that the rest of the author's work is not worth one's time?

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2010-07-12T08:59:43.431Z · LW(p) · GW(p)

How exactly

Nobody knows (obviously), but you can try to train your intuition to do that well. You'd expect this correlation to be there.

comment by wedrifid · 2010-07-12T03:50:59.449Z · LW(p) · GW(p)

That's a very vague statement

In the context of LessWrong it should be considered trivial to the point of outright patronising if not explicitly prompted. Bayesian inference is quite possibly the core premise of the community.

How exactly should one identify those expressions of stupid opinions that are relevant enough to imply that the rest of the author's work is not worth one's time?

In the process of redacting my reply I coined the term "Freudian Double-Entendre". Given my love of irony I hope the reader appreciates my restraint! <-- Example of a very vague statement. In fact if anyone correctly follows that I expect I would thoroughly enjoy reading their other comments.

comment by Bongo · 2010-07-11T15:36:43.972Z · LW(p) · GW(p)

People who say stupid things are, all else being equal, more likely to say other stupid things in related areas

Yep, and note that Hegel's philosophy is related to states more than Newton's physics is related to the occult.

comment by JoshuaZ · 2010-07-11T15:56:18.788Z · LW(p) · GW(p)

It's a complicated question what purpose philosophy serves, but I wouldn't be posting here if I thought it served none. So my question is: What philosophical works and authors have you found especially valuable, for whatever reason? Perhaps the recommendations of such esteemed individuals as yourselves will carry enough evidentiary weight that I'll actually read the darned things.

Lakatos, Quine and Kuhn are all worth reading. Recommended works from each follow:

Lakatos: "Proofs and Refutations"
Quine: "Two Dogmas of Empiricism"
Kuhn: "The Copernican Revolution" and "The Structure of Scientific Revolutions"

All of these have things which are wrong but they make arguments that need to be grappled with and understood (Copernican Revolution is more of a history book than a philosophy book but it helps present a case of Kuhn's approach to the history and philosophy of science in great detail). Kuhn is a particularly interesting case- I think that his general thesis about how science operates and what science is is wrong, but he makes a strong enough case such that I find weaker versions of his claims to be highly plausible. Kuhn also is just an excellent writer full of interesting factual tidbits.

I've also heard that Descartes "proves" that God exists. Now, whether or not Hegel or Descartes may have had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.

This seems like in general not a great attitude. The Descartes case is especially relevant in that Descartes did a lot of stuff not just philosophy. And some of his philosophy is worth understanding simply due to the fact that later authors react to him and discuss things in his context. And although he's often wrong, he's often wrong in a very precise fashion. His dualism is much more well-defined than people before him. Hegel however is a complete muddle. I'd label a lot of Hegel as not even wrong. ETA: And if I'm going to be bashing Hegel a bit, what kind of arrogant individual does it take to write a book entitled "The Encyclopedia of the Philosophical Sciences" that is just one's magnum opus about one's own philosophical views and doesn't discuss any others?

comment by mindviews · 2010-07-11T09:52:23.922Z · LW(p) · GW(p)

Is there any philosophy worth reading?

Yes. I agree with your criticisms - "philosophy" in academia seems to be essentially professional arguing, but there are plenty of well-reasoned and useful ideas that come of it, too. There is a lot of non-rational work out there (i.e. lots of valid arguments based on irrational premises) but since you're asking the question in this forum I am assuming you're looking for something of use/interest to a rationalist.

So my question is: What philosophical works and authors have you found especially valuable, for whatever reason?

I've developed quite a respect for Hilary Putnam and have read many of his books. Much of his work covers philosophy of the mind with a strong eye towards computational theories of the mind. Beyond just his insights, my respect also stems from his intellectual honesty. In the Introduction to "Representation and Reality" he takes a moment to note, "I am, thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced." In short, as a rationalist I find reading his work very worthwhile.

I also liked "Objectivity: The Obligations of Impersonal Reason" by Nicholas Rescher quite a lot, but that's probably partly colored by having already come to similar conclusions going in.

PS - There was this thread over at Hacker News that just came up yesterday if you're looking to cast a wider net.

comment by Larks · 2010-07-18T19:50:32.295Z · LW(p) · GW(p)

I've always been told that Hegel basically affixed the section about Prussia due to political pressures, and that modern philosophers totally ignore it. Having said that, I wouldn’t read Hegel.

I recommend avoiding reading original texts, and instead reading modern commentaries and compilations. 'Contemporary Readings in Epistemology' was the favoured first-year text at Oxford. Bertrand Russell's "History of Western Philosophy" is quite a good read too.

The Stanford Encyclopaedia of Philosophy is also very good.

comment by Emile · 2010-07-11T12:48:20.524Z · LW(p) · GW(p)

I've enjoyed Nietzsche, he's an entertaining and thought-provoking writer. He offers some interesting perspectives on morality, history, etc.

comment by wedrifid · 2010-07-11T04:29:40.790Z · LW(p) · GW(p)

So my question is: What philosophical works and authors have you found especially valuable, for whatever reason?

None that actively affiliate themselves with the label 'philosophy'.

comment by zero_call · 2010-07-11T22:23:12.257Z · LW(p) · GW(p)

For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence. I've also heard that Descartes "proves" that God exists. Now, whether or not Hegel or Descartes may have had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.

This is an understandable sentiment, but it's pretty harsh. Everybody makes mistakes -- there is no such thing as a perfect scholar, or perfect author. And I think that when Descartes is studied, there is usually a good deal of critique and rejection of his ideas. But there's still a lot of good stuff there, in the end.

What philosophical works and authors have you found especially valuable, for whatever reason?

I have found Foucault to be a very interesting modern philosopher/historian. His book, I believe entitled "Madness and Civilization" (translated from French), strikes me as a highly impressive analysis on many different levels. His writing style is striking, and his concentration on motivation and purpose goes very, very deep.

comment by Bongo · 2010-07-11T11:41:25.609Z · LW(p) · GW(p)

Maybe LW should have resident intellectual historians who read philosophy. They could distill any actual insights from dubious, old or badly written philosophy, and tell if a work is worthy reading for rationalists.

comment by SilasBarta · 2010-07-11T01:32:09.348Z · LW(p) · GW(p)

More on the coming economic crisis for young people, and let me say, wow, just wow: the essay is a much more rigorous exposition of the things I talked about in my rant.

In particular, the author had similar problems to me in getting a mortgage, such as how I get told on one side, "you have a great credit score and qualify for a good rate!" and on another, "but you're not good enough for a loan". And he didn't even make the mistake of not getting a credit card early on!

Plus, he gives a lot of information from his personal experience.

Be warned, though: it's mixed with a lot of blame-the-government themes and certainty about future hyperinflation, and the preservation of real estate's value therein, if that kind of thing turns you off.

Edit: Okay, I've edited this comment about eight times now, but I left this out: from a rationality perspective, this essay shows the worst parts of Goodhart's Law: apparently, the old, functional criteria that would correctly identify some mortgage applicants is going to be mandated as the standard on all future mortgages. Yikes!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-11T11:33:20.923Z · LW(p) · GW(p)

I've seen discussion of Goodhart's Law + Conservation of Thought playing out nastily in investment. For example, junk bonds started out as finding some undervalued bonds among junk bonds. Fine, that's how the market is supposed to work. Then people jumped to the conclusion that everything which was called a junk bond was undervalued. Oops.

comment by Tyrrell_McAllister · 2010-07-09T16:35:10.457Z · LW(p) · GW(p)

I have a question about prediction markets. I expect that it has a standard answer.

It seems like the existence of casinos presents a kind of problem for prediction markets. Casinos are a sort of prediction market where people go to try to cash out on their ability to predict which card will be drawn, or where the ball will land on a roulette wheel. They are enticed to bet when the casino sets the odds at certain levels. But casinos reliably make money, so people are reliably wrong when they try to make these predictions.

Casinos don't invalidate prediction markets, but casinos do seem to show that prediction markets will be predictably inefficient in some way. How is this fact dealt with in futarchy proposals?

Replies from: Unnamed, Vladimir_Nesov, orthonormal, Dagon
comment by Unnamed · 2010-07-09T17:45:12.650Z · LW(p) · GW(p)

One way to think of it is that decisions to gamble are based on both information and an error term which reflects things like irrationality or just the fact that people enjoy gambling. Prediction markets are designed to get rid of the error and have prices reflect the information: errors cancel out as people who err in opposite directions bet on opposite sides, and errors in one direction create +EV opportunities which attract savvy, informed gamblers to bet on the other side. But casinos are designed to drive gambling based solely on the error term - people are betting on events that are inherently unpredictable (so they have little or no useful information) against the house at fixed prices, not against each other (so the errors don't cancel out), and the prices are set so that bets are -EV for everyone regardless of how many errors other people make (so there aren't incentives for savvy informed people to come wager).

Sports gambling is structured more similarly to prediction markets - people can bet on both sides, and it's possible for a smart gambler to have relevant information and to profit from it, if the lines aren't set properly - and sports betting lines tend to be pretty accurate.

Replies from: Strange7
comment by Strange7 · 2010-07-12T12:42:22.095Z · LW(p) · GW(p)

I have also heard of at least one professional gambler who makes his living by identifying and confronting other peoples' superstitious gambling strategies. For example, if someone claims that 30 hasn't come up in a while, and thus is 'due,' he would make a separate bet with them (to which the house is not a party), claiming simply that they're wrong.

Often, this is an even-money bet which he has upwards of a 97% chance of winning; when he loses, the relatively small payoff to the other party is supplemented by both the warm fuzzies associated with rampant confirmation bias, and the status kick from defeating a professional gambler in single combat.

comment by Vladimir_Nesov · 2010-07-09T18:43:46.185Z · LW(p) · GW(p)

The money brought in by stupid gamblers creates additional incentive for smart players to clear it out with correct predictions. The crazier the prediction market, the more reason for rational players to make it rational.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-07-09T20:59:40.982Z · LW(p) · GW(p)

The money brought in by stupid gamblers creates additional incentive for smart players to clear it out with correct predictions. The crazier the prediction market, the more reason for rational players to make it rational.

Right. Maybe I shouldn't have said that a prediction market would be "predictably inefficient". I can see that rational players can swoop in and profit from irrational players.

But that's not what I was trying to get at with "predictably inefficient". What I meant was this:

Suppose that you know next to nothing about the construction of roulette wheels. You have no "expert knowledge" about whether a particular roulette ball will land in a particular spot. However, for some reason, you want to make an accurate prediction. So you decide to treat the casino (or, better, all casinos taken together) as a prediction market, and to use the odds at which people buy roulette bets to determine your prediction about whether the ball will land in that spot.

Won't you be consistently wrong if you try that strategy? If so, how Is this consistent wrongness accounted for in futarchy theory?

I understand that, in a casino, players are making bets with the house, not with each other. But no casino has a monopoly on roulette. Players can go to the casino that they think is offering the best odds. Wouldn't this make the gambling market enough like a prediction market for the issue I raise to be a problem?

I may just have a very basic misunderstanding of how futarchy would work. I figured that it worked like this: The market settles on a certain probability that something will happen by settling on an equilibrium for the odds at which people are willing to buy bets. Then policy makers look at the market's settled probability and craft their policy accordingly.

Replies from: Unnamed, orthonormal
comment by Unnamed · 2010-07-09T23:04:10.998Z · LW(p) · GW(p)

Roulette odds are actually very close to representing probabilities, although you'd consistently overestimate the probability if you just translated directly. Each $1 bet on a specific number pays out a $35 profit, suggesting p=1/36, but in reality p=1/38. Relative odds get you even closer to accurate probabilities; for instance, 7 & 32 have the same payout, from which we could conclude (correctly, in this case) that they are equally likely. With a little reasoning - 38 possible outcomes with identical payouts - you can find the correct probability of 1/38.

This table shows that every possible roulette bet except for one has the same EV, which means that you'd only be wrong about relative probabilities if you were considering that one particular bet. Other casino games have more variability in EV, but you'd still usually get pretty close to correct probabilities. The biggest errors would probably be for low probability-high payout games like lotteries or raffles.
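To put numbers on it (standard American double-zero wheel, 35-to-1 straight-up payout):

    p_win = 1.0 / 38                     # 38 pockets on an American wheel
    ev = 35 * p_win - 1 * (1 - p_win)    # about -0.053: a ~5.3% house edge
    implied_p = 1.0 / 36                 # what the 35-to-1 payout alone would suggest
    print(ev, implied_p, p_win)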

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-07-11T23:16:30.652Z · LW(p) · GW(p)

Roulette odds are actually very close to representing probabilities, although you'd consistently overestimate the probability if you just translated directly. Each $1 bet on a specific number pays out a $35 profit, suggesting p=1/36, but in reality p=1/38.

It's interesting that the market drives the odds so close to reality, but doesn't quite close the gap. Do you know if there are regulations that keep some rogue casino from selling roulette bets as though the odds were 1/37, instead of 1/36?

I'm thinking now that the entire answer to my question is contained in Dagon's reply. Perhaps the gambling market is distorted by regulation, and its failure as a prediction market is entirely due to these regulations. Without such regulations, maybe the gambling business would function much more like an accurate prediction market, which I suppose would make it seem like a much less enticing business to go into.

This would imply that, if you don't like casinos, you should want regulation on gambling to focus entirely on making sure that casinos don't use violence to keep other casinos from operating. Then maybe we'd see the casinos compete by bringing their odds closer to reality, which would, of course, make the casinos less profitable, so that they might close down of their own accord.

(Of course, I'm ignoring games that aren't entirely games of chance.)

Replies from: Blueberry
comment by Blueberry · 2010-07-12T12:00:03.002Z · LW(p) · GW(p)

It's interesting that the market drives the odds so close to reality, but doesn't quite close the gap. Do you know if there are regulations that keep some rogue casino from selling roulette bets as though the odds were 1/37, instead of 1/36?

This really doesn't have much to do with the market. While I don't know the details of gambling laws in all the US states and Indian nations, I would be very surprised if there were regulations on roulette odds. Many casinos have roulette wheels with only one 0 (paid as if 1/36, actual odds 1/37), and with other casino games, such as blackjack, casinos frequently change the rules as part of a promotion or to try to get better odds.

There is no "gambling market": casinos are places where people pay for entertainment, not to make money. While casinos do offer promotions and advertise favorable rules and odds, most people go for the entertainment, and no one who's serious about math and probability goes to make money (with exceptions for card-counting and poker tournaments, as orthonormal notes).

Also see Unnamed's comment. Essentially, the answer is that a casino is not a market.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-07-12T16:37:38.489Z · LW(p) · GW(p)

Also see Unnamed's comment. Essentially, the answer is that a casino is not a market.

A single casino is not a market, but don't all casinos and gamblers together form a market for something? Maybe it's a market for entertainment instead of prediction ability, but it's a market for something, isn't it? Moreover, it seems, at least naïvely, to be a market in which a casino would attract more customers by offering more realistic odds.

Replies from: mattnewport
comment by mattnewport · 2010-07-12T18:07:03.306Z · LW(p) · GW(p)

Some casinos in Vegas have European roulette with a smaller house edge. I know this from a Vegas guidebook which listed where you could find the best odds at various games suggesting that at least some gamblers seek out the best odds. The Wikipedia link also states:

Today most casino odds are set by law, and they have to be either 34 to 1 or 35 to 1.

comment by orthonormal · 2010-07-09T22:28:39.373Z · LW(p) · GW(p)

In the stock market, as in a prediction market, the smart money is what actually sets the price, taking others' irrationalities as their profit margin. There's no such mechanism in casinos, since the "smart money" doesn't gamble in casinos for profit (excepting card-counting, cheating, and poker tournaments hosted by casinos, etc).

comment by orthonormal · 2010-07-09T22:18:41.320Z · LW(p) · GW(p)

The most obvious thing: customers are only allowed to take one side of a bet, whose terms are dictated by the house.

If you had a general-topic prediction market with one agent who chose the odds for everything, and only allowed people to bet in one chosen direction on each topic, that agent (if they were at all clever) could make a lot of money, but the odds wouldn't be any "smarter" than that agent (and in fact would be dumber so as to make a profit margin).

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-07-11T20:11:04.623Z · LW(p) · GW(p)

The most obvious thing: customers are only allowed to take one side of a bet, whose terms are dictated by the house.

But no casino has a monopoly on roulette. Yet the market doesn't seem to drive the odds to their correct values. Dagon notes above that regulations make it hard to enter the market as a casino. Maybe that explains why my naive expectations don't happen.

Actually this raises another question for me. If I start a casino in Vegas, am I required to sell roulette bets as though the odds were p = 1/36, instead of, say, p = 1/37 ?

[Edited for lack of clarity.]

comment by Dagon · 2010-07-09T21:03:10.377Z · LW(p) · GW(p)

Casinos have an assymetry: creation of new casinos is heavily regulated, so there's no way for people with good information to bet on their beliefs, and no mechanism for the true odds to be reached as the market price for a wager.

Replies from: orthonormal
comment by orthonormal · 2010-07-09T22:14:33.137Z · LW(p) · GW(p)

Normally I wouldn't comment on a typo, but I can't read "assymetry" without chuckling.

comment by Emile · 2010-09-23T16:02:18.117Z · LW(p) · GW(p)

I think you're overestimating your ability to see what exactly is wrong and how to fix it. Humans (westerners?) are biased towards thinking that improvements they propose would indeed make things better. This tendency is particularly visible in politics, where it causes the most damage.

More generally, humans are probably biased towards thinking their own ideas are particularly good, hence the "not invented here" syndrome, etc. Outside of politics, the level of confidence rarely reaches the level of threatening death and destruction if one's ideas are not accepted.

Replies from: waitingforgodel
comment by waitingforgodel · 2010-09-23T16:14:35.976Z · LW(p) · GW(p)

Fair point. Have you read the whole thread, especially the Wei Dai bit?

It could be I'm wrong. It could also be that EY (who is also human) will be wrong if he decides to censor another LW post/comment because my commitment wasn't in place.

Unless you're offering a solution, I'd rather put my money on the table than stick to FUDing in the corner :p

Replies from: Emile
comment by Emile · 2010-09-24T15:25:54.977Z · LW(p) · GW(p)

Yes, I read the whole thread (and the banned doubleplus ungood post by Roko).

I wouldn't mind if you were putting "your money" on the table. What I mind is threatening to take action with the goal of reducing mankind's chances of survival. That's not "your money".

If you had just threatened to stop donating money to SIAI (do you donate?), no problemo. Whether that action has an impact on existential risk is unclear; my problem isn't doing actions that might increase existential risk, it's doing actions whose purpose is to increase existential risk. Or even brainstorming about those.

Imagine that Omega guy came down from the sky and gave each human on earth a device with a button. The button can only be pushed once, and has a one-in-a-million chance of making the sun go Nova (and the button only works for the rightful owner).

What would you think of someone who publicly threatened to push his device's button if the US elected the wrong president? If the government didn't give him $500 a month? If some website didn't adopt the moderation policy he preferred? Or someone brainstorming about how to build a device like that, with the above uses in mind?

Replies from: kodos96
comment by kodos96 · 2010-09-24T18:44:35.894Z · LW(p) · GW(p)

What if somebody else had a similar button, but with 1 in 100,000 probability. Would it be ok to threaten to push your 1 in a million button if the other guy pressed his 1 in 100,000 button? If you had reason to believe that the other guy, for some reason, would take your threat seriously, but wasn't taking the threat of his own button seriously?

OK, that got kind of convoluted, but do you see what I'm saying?

Replies from: Emile
comment by Emile · 2010-09-24T19:22:33.567Z · LW(p) · GW(p)

If you were sufficiently certain that the situation is as you describe (least convenient world), yes, it would be OK to threaten (and carry out the threat). If, however, you obtained the information through a bug-ridden device that is known to be biased towards overconfidence in this kind of situation, then such threats would be immoral. And I think most imaginable real-world situations fall in the second category.

Replies from: kodos96
comment by kodos96 · 2010-09-24T19:32:18.815Z · LW(p) · GW(p)

I think the only thing we disagree on then is the word "immoral". I would say that it may very well be incorrect, but not immoral, so long as he is being sincere in his motivations.

ETA: ok after thinking about it some more, I guess I could see how it might be considered immoral (in the least convenient world to the point of view I'm arguing). I guess it kind of depends on the specifics of what's going on inside his head, which I'm of course not privy to.

Replies from: Emile
comment by Emile · 2010-09-24T20:57:09.937Z · LW(p) · GW(p)

I'm not sure it would count as "immoral", guess it also depends of how you define the terms.

I see this as a case of the more general "does the end justify the means?". In principle, the ends do justify the means, if you're sufficiently confident that those means will indeed result in that end, and that the end is really valuable. In practice, the human brain is biased towards finding ends to justify means that just happen to bring power or prestige to the human in question. In fact, most people doing anything widely considered "bad" can come up with a good story about how it's kinda justified.

So, part of what makes a human moral is willingness to correct this, to listen to the voice of doubt, or at least to consider that one may be wrong, especially when taking action that might harm others.

comment by Will_Newsome · 2010-07-20T23:15:09.462Z · LW(p) · GW(p)

Is there a bias, maybe called the 'compensation bias', that causes one to think that any person with many obvious positive traits or circumstances (really attractive, rich, intelligent, seemingly happy, et cetera) must have at least one huge compensating flaw or a tragic history or something? I looked through Wiki's list of cognitive biases and didn't see it, but I thought I'd heard of something like this. Maybe it's not a real bias?

If not, I'd be surprised. Whenever I talk to my non-rationalist friends about how amazing persons X Y or Z are, they invariably (out of 5 or so occasions when I brought it up) replied with something along the lines of 'Well I bet he/she is secretly horribly depressed / a horrible person / full of ennui / not well-liked by friends and family". This is kind of the opposite of the halo effect. It could be that this bias only occurs when someone is asked to evaluate the overall goodness of someone who they themselves have not gotten the chance to respect or see as high status.

Anyway, I know Eliezer had a post called 'competent elites' or summat along these lines, but I'm not sure if this effect is a previously researched bias I'm half-remembering or if it's just a natural consequence of some other biases (e.g. just world bias).

Added: Alternative hypothesis that is more consistent with the halo effect and physical attractiveness stereotype data: my friends are themselves exceptionally physically attractive and competent but have compensatory personal flaws or depression or whatever, and are thus generalizing from one or two examples when assuming that others that share similar traits as themselves would also have such problems. I think this is the more likely of my two current hypotheses, as my friends are exceptionally awesome as well as exceptionally angsty. Aspiring rationalists! Empiricists and theorists needed! Do you have data or alternative hypotheses?

Replies from: None, NancyLebovitz, JamesPfeiffer, Document, SilasBarta
comment by [deleted] · 2010-07-26T14:33:07.921Z · LW(p) · GW(p)

It may have to do with the manner you bring it up - it's not hard to see how saying something like "X is amazing" could be interpreted "X is amazing...and you're not" (after all, how often do you tell your friends how amazing they are?), in which case the bias is some combination of status jockeying, cognitive dissonance and ego protection.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-27T07:41:07.565Z · LW(p) · GW(p)

Wow, that's seems like a very likely hypothesis that I completely missed. Is there some piece of knowledge you came in with or heuristic you used that I could have used to think up your hypothesis?

Replies from: None
comment by [deleted] · 2010-07-27T16:51:20.380Z · LW(p) · GW(p)

I've spent some time thinking about this, and the best answer I can give is that I spend enough time thinking about the origins and motivations of my own behavior that, if it's something I might conceivably do right now, or (more importantly) at some point in the past, I can offer up a possible motivation behind it.

Apparently this is becoming more and more subconscious, as it took quite a bit of thinking before I realized that that's what I had done.

comment by NancyLebovitz · 2010-07-21T09:24:46.213Z · LW(p) · GW(p)

Could it be a matter of being excessively influenced by fiction? It's more convenient for stories if a character has some flaws and suffering.

comment by JamesPfeiffer · 2010-07-23T03:33:29.336Z · LW(p) · GW(p)

Is this actually incorrect, though? As far as I know, people have problems and inadequacies. When they solve them, they move on to worrying about other things. It's probably a safe bet that the awesome people you're describing do as well.

What probably is wrong is the assumption that general awesomeness makes hidden bad stuff more likely.

comment by Document · 2010-09-24T08:12:52.814Z · LW(p) · GW(p)

Possibly a form of the just-world fallacy.

comment by SilasBarta · 2010-07-20T23:34:03.459Z · LW(p) · GW(p)

Given that there's the halo effect (that you mention) plus the affect heuristic, it seems that if there's a bias, it goes the other way - people tend to think all positive attributes clump together.

If both effects exist, that would cast doubt on whether it counts as a bias at all, as the direction of the error is not consistently one way. (Right?)

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-20T23:38:14.417Z · LW(p) · GW(p)

If both effects exist, that would cast doubt on whether it counts as a bias at all, as the direction of the error is not consistently one way. (Right?)

Will's remark suggests that the biases exist in different circumstances. If I'm following Will, then the halo effect occurs when people have already interacted with impressive individuals, whereas Will's reported effect occurs only when people are hearing about an impressive individual in a second-hand or third-hand way.

comment by Rain · 2010-07-14T22:53:01.833Z · LW(p) · GW(p)

Day-to-day question:

I live in a ground floor apartment with a sunken entryway. Behind my fairly large apartment building is a small wooded area including a pond and a park. During the spring and summer, oftentimes (~1 per 2 weeks) a frog will hop down the entryway at night and hop around on the dusty concrete until dying of dehydration. I occasionally notice them in the morning as I'm leaving for work, and have taken various actions depending on my feelings at the time and the circumstances of the moment.

  1. Action: Capture the frog and put it in the woods out back. Cost: ~10 seconds to capture, ~2 minutes to put into the woods, getting slimy frog on my hands and dew on my shoes. Benefit: frog potentially survives.
  2. Action: Capture the frog and put it in the dew-covered grass out front. Cost: ~10 seconds to capture, ~20 seconds to put into the grass, getting slimy frog on my hands. Benefit: no frog corpses in the stairwell after I get home from work, and it has a possibility of surviving.
  3. Action: Either of the above, but also taking a glass of warm water and pouring it over the frog to clean off the dust and cobwebs from hopping around the stairwell. Cost: ~1 minute to get a glass of water, consumption of resources to ensure it's warm enough not to cause harm, ~10 seconds of cleaning the frog. Benefit: makes frog look less deathly, potentially increases chances at survival.
  4. Action: Leave the frog in the stairwell. Cost: slight emotional guilt at not helping the frog, slight advance of the current human-caused mass extinction event. Benefit: no action required.
  5. Action: As above, but once the frog is dead, position it in the stairwell in such a way as to be aesthetically pleasing, as small ceramic animals sometimes are. Cost: touching a dead frog, being seen as obviously creepy or weird. Benefit: cute little squatting frog in the shade under the stairwell every morning.

What would you do, why, and how long would you keep doing it?

Replies from: Eliezer_Yudkowsky, Nisan, NancyLebovitz, cousin_it, mattnewport
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-21T09:50:27.565Z · LW(p) · GW(p)

I don't consider frogs to be objects of moral worth.

Replies from: XiXiDu, VNKKET, CarlShulman, multifoliaterose, Richard_Kennaway, CronoDAS, cousin_it
comment by XiXiDu · 2010-07-22T10:35:46.582Z · LW(p) · GW(p)

Questions of priority - and the relative intensity of suffering between members of different species - need to be distinguished from the question of whether other sentient beings have moral status at all. I guess that was what shocked me about Eliezer's bald assertion that frogs have no moral status. After all, humans may be less sentient than frogs compared to our posthuman successors. So it's unsettling to think that posthumans might give simple-minded humans the same level of moral consideration that Eliezer accords frogs.

-- David Pearce via Facebook

comment by VNKKET · 2010-07-21T21:28:45.490Z · LW(p) · GW(p)

Are there any possible facts that would make you consider frogs objects of moral worth if you found out they were true?

(Edited for clarity.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-22T21:04:11.838Z · LW(p) · GW(p)

"Frogs have subjective experience" is the biggy, there's a number of other things I already know myself to be confused about which impact on that, and so I don't know exactly what I should be looking for in the frog that would make me think it had a sense of its own existence. Certainly there are any number of news items I could receive about the frog's mental abilities, brain complexity, type of algorithmic processing, ability to reflect on its own thought processes, etcetera, which would make me think it was more likely that the frog was what a non-confused person of myself would regard as fulfilling the predicate I currently call "capable of experiencing pain", as opposed to being a more complicated version of neural network reinforcement-learning algorithms that I have no qualms about running on a computer.

A simple example would be if frogs could recognize dots painted on them when seeing themselves in mirrors, or if frogs showed signs of being able to learn very simple grammar like "jump blue box". (If all human beings were being cryonically suspended I would start agitating for the chimpanzees.)

Replies from: DanielVarga, Utilitarian, XiXiDu
comment by DanielVarga · 2010-07-25T07:14:12.427Z · LW(p) · GW(p)

I am very surprised that you suggest that "having subjective experience" is a yes/no thing. I thought it was the consensus opinion here that it is not. I am not sure about others on LW, but I would even go three steps further: it is not even a strict ordering of things. It is not even a partial ordering of things. I believe it can only be defined in the context of an Observer and an Object, where the Observer gives some amount of weight to the theory that the Object's subjective experience is similar to the Observer's own.

Replies from: Blueberry
comment by Blueberry · 2010-07-29T00:15:57.611Z · LW(p) · GW(p)

I thought it was the consensus opinion here that it is not.

Links? I'd be interested in seeing what people on LW thought about this, if it's been discussed before. I can understand the yes/no position, or the idea that there's a blurry line somewhere between thermostats and humans, but I don't understand what you mean about the Observer and Object. The Observer in your example has subjective experience?

comment by Utilitarian · 2010-07-23T08:19:37.056Z · LW(p) · GW(p)

I like the way you phrased your concern for "subjective experience" -- those are the types of characteristics I care about as well.

But I'm curious: What does ability to learn simple grammar have to do with subjective experience?

comment by XiXiDu · 2010-07-23T10:50:39.291Z · LW(p) · GW(p)

We're not looking for objective experience, thus we're simply looking for experience. If we now define 'a sense of one's own existence' as the experience of self-awareness, i.e. consciousness, and if we also regard unconscious experience as unworthy, we're left with consciousness.

Now since we cannot define consciousness, we need a working definition. What are some important aspects of consciousness? 'Thinking', which requires 'knowledge' (data), is not the operative difference between being an iPhone and being human. It's information processing after all. So what do we mean by unconscious, as opposed to conscious, decision making? It's about deliberate, purposeful (goal-oriented) adaptation. Thus to be conscious is to be able to shape your environment in a way that suits your volition.

  • The ability to define a system within the environment in which it is embedded to be yourself.
  • To be goal-oriented
  • The specific effectiveness and order of transformation by which the self-defined system (you) shapes the outside environment in which it is embedded outweighs the environmental influence on that system. (more)

How could this help with the frog dilemma? Are frogs conscious?

  • Are there signs of active environmental adaption by the frog-society as indicated by behavioral variability?
  • To what extent is frog behavior predictable?

That is, we have to fathom the extent of active adaptation of the environment by frogs as opposed to passive adaptation of frogs by the environment. Further, one needs to test the frog's capacity for deliberate, spontaneous behavior given environmental (experimental) stimuli and see if frogs can evade, i.e. action vs. reaction.

P.S. No attempt at a solution, just some quick thoughts I wanted to write down for clarity and possible feedback.

comment by CarlShulman · 2010-07-21T18:34:51.216Z · LW(p) · GW(p)

I'm surprised. Do you mean you wouldn't trade off a dust speck in your eye (in some post-singularity future where x-risk is settled one way or another) to avert the torture of a billion frogs, or of some noticeable portion of all frogs? If we plotted your attitudes to progressively more intelligent entities, where's the discontinuity or discontinuities?

Replies from: Vladimir_Nesov, Bongo, Blueberry, steven0461
comment by Vladimir_Nesov · 2010-07-21T19:25:54.910Z · LW(p) · GW(p)

You'd need to change that to 10^6 specks and 10^15 frogs or something, because emotional reaction to choosing to kill the frogs is also part of the consequences of the decision, and this particular consequence might have moral value that outweighs one speck.

Your emotional reaction to a decision about human lives is irrelevant, as the lives in question hold most of the moral worth, while with a decision to kill billions of cockroaches (to be safe from the question of the moral worth of frogs), the lives of the cockroaches are irrelevant, while your emotional reaction holds most of the moral worth.

Replies from: Utilitarian
comment by Utilitarian · 2010-07-21T23:26:32.369Z · LW(p) · GW(p)

the lives of the cockroaches are irrelevant

I'm not so sure. I'm no expert on the subject, but I suspect cockroaches may have moderately rich emotional lives.

comment by Bongo · 2010-07-24T11:50:35.768Z · LW(p) · GW(p)

Hopefully he still thinks there's a small probability of frogs being able to experience pain, so that the expected suffering of frog torture would be hugely greater than a dust speck.

comment by Blueberry · 2010-07-21T19:36:19.006Z · LW(p) · GW(p)

Do you mean you wouldn't trade off a dust speck in your eye (in some post-singularity future where x-risk is settled one way or another) to avert the torture of a billion frogs, or of some noticeable portion of all frogs?

Depends. Would that make it harder to get frog legs?

comment by steven0461 · 2010-07-21T18:44:40.576Z · LW(p) · GW(p)

Same questions to you, but with "rocks" for "frogs".

Eliezer didn't say he was 100% sure frogs weren't objects of moral worth, nor is it a priori unreasonable to believe there exists a sharp cutoff without knowing where it is.

comment by multifoliaterose · 2010-07-21T10:19:39.675Z · LW(p) · GW(p)

Why not?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-07-21T11:40:38.752Z · LW(p) · GW(p)

Seconded, and how do you (Eliezer) rate other creatures on the Great Chain of Being?

comment by Richard_Kennaway · 2010-07-21T13:13:56.104Z · LW(p) · GW(p)

Would you save a stranded frog, though?

comment by CronoDAS · 2010-07-22T16:25:38.828Z · LW(p) · GW(p)

What about dogs?

comment by cousin_it · 2010-07-21T12:36:07.774Z · LW(p) · GW(p)

Yeah, trying to save the world does that to you.

ETA (May 2012): wow, I can't understand what prompted me to write a comment like this. Sorry.

Replies from: Rain
comment by Rain · 2010-07-21T12:48:15.715Z · LW(p) · GW(p)

Axiom: The world is worth saving.
Fact: Frogs are part of the world.
Inference: Frogs are worth saving in proportion to their measure and effect on the world.
Query: Is life worth living if all you do is save more of it?

Replies from: cousin_it, Richard_Kennaway, JoshuaZ
comment by cousin_it · 2010-07-21T12:51:48.225Z · LW(p) · GW(p)

I don't know. I'm not Eliezer. I'd save the frogs because it's fun, not because of some theory.

comment by Richard_Kennaway · 2010-07-21T13:13:34.376Z · LW(p) · GW(p)

Is life worth living if all you do is save more of it?

As a matter of practical human psychology, no. People cannot just give and give and get nothing back from it but self-generated warm fuzzies, a score kept in your head by rules of your own that no-one else knows or cares about. You can do some of that, but if that's all you do, you just get drained and burned out.

comment by JoshuaZ · 2010-07-21T13:27:29.318Z · LW(p) · GW(p)

Axiom: The world is worth saving. Fact: Frogs are part of the world. Inference: Frogs are worth saving in proportion to their measure and effect on the world.

Three does not follow from 1. It doesn't follow that the world is more likely to be saved if I save frogs. It also doesn't follow that saving frogs is the most efficient use of my time if I'm going to spend time saving the world. I could for example use that time to help reduce existential risk factors for everyone, which would happen to incidentally reduce the risk to frogs.

Replies from: Rain
comment by Rain · 2010-07-21T13:53:03.376Z · LW(p) · GW(p)

I find it difficult to explain, but know that I disagree with you. The world is worth saving precisely because of the components that make it up, including frogs. Three does follow from 1, unless you have a (fairly large) list of properties or objects in the world that you've deemed out of scope (not worth saving independently of the entire world). Do you have such a list, even implicitly? I might agree that frogs are out of scope, as that was one component of my motivation for posting this thread.

And stating that there are "more efficient" ways of saving frogs than directly saving frogs does not refute the initial inference that frogs are worth saving in proportion to their measure and effect on the world. Perhaps you are really saying "their proportion and measure is so low as to make it not worth the time to stoop and pick them up"? Which I might also agree with.

But in my latest query, I was trying to point out that "a safe Singularity is a more efficient means of achieving goal X" or "a well thought out existential risk reduction project is a more efficient means of saving Y" can be used as a fully general counterargument, and I was wondering if people really believe they trump all other actions one might take.

Replies from: Utilitarian
comment by Utilitarian · 2010-07-21T17:56:53.653Z · LW(p) · GW(p)

I'm surprised by Eliezer's stance. At the very least, it seems the pain endured by the frogs is terrible, no? For just one reference on the subject, see, e.g., KL Machin, "Amphibian pain and analgesia," Journal of Zoo and Wildlife Medicine, 1999.

Rain, your dilemma reminds me of my own struggles regarding saving worms in the rain. While stepping on individual worms to put them out of their misery is arguably not the most efficient means to prevent worm suffering, as a practical matter, I think it's probably an activity worth doing, because it builds the psychological habit of exerting effort to break from one's routine of personal comfort and self-maintenance in order to reduce the pain of other creatures. It's easy to say, "Oh, that's not the most cost-effective use of my time," but it can become too easy to say that all the time to the extent that one never ends up doing anything. Once you start doing something to help, and get in the habit of expending some effort to reduce suffering, it may actually be easier psychologically to take the efficiency of your work to the next level. ("If saving worms is good, then working toward technology to help all kinds of suffering wild animals is even better. So let me do that instead.")

The above point applies primarily to those who find themselves devoting less effort to charitable projects than they could. For people who already come close to burning themselves out by their dedication to efficient causes, taking on additional burdens to reduce just a bit more suffering is probably not a good idea.

Replies from: Blueberry
comment by Blueberry · 2010-07-21T19:40:47.069Z · LW(p) · GW(p)

At the very least, it seems the pain endured by the frogs is terrible, no?

Maybe so, but the question is why we should care.

While stepping on individual worms to put them out of their misery is arguably not the most efficient means to prevent worm suffering, as a practical matter, I think it's probably an activity worth doing

If only for the cheap signaling value.

Replies from: Utilitarian
comment by Utilitarian · 2010-07-21T23:17:07.297Z · LW(p) · GW(p)

If only for the cheap signaling value.

My point was that the action may have psychological value for oneself, as a way of getting in the habit of taking concrete steps to reduce suffering -- habits that can grow into more efficient strategies later on. One could call this "signaling to oneself," I suppose, but my point was that it might have value in the absence of being seen by others. (This is over and above the value to the worm itself, which is surely not unimportant.)

comment by Nisan · 2010-07-17T17:53:00.556Z · LW(p) · GW(p)

2: I would put the frog in the grass. Warm fuzzies are a great way to start the day, and it only costs 30 seconds.

If you're truly concerned about the well-being of frogs, you might want to do more. You'd also want to ask yourself what you're doing to help frogs everywhere. The fact that the frog ended up on your doorstep doesn't make you extra responsible for the frog; it merely provides you with an opportunity to help.

Also, wash your hands before eating.

Replies from: jimrandomh
comment by jimrandomh · 2010-07-17T18:14:09.653Z · LW(p) · GW(p)

If you're truly concerned about the well-being of frogs, you might want to do more. You'd also want to ask yourself what you're doing to help frogs everywhere. The fact that the frog ended up on your doorstep doesn't make you extra responsible for the frog; it merely provides you with an opportunity to help.

The goal of helping frogs is to gain fuzzies, not utilons. Thinking about all the frogs that you don't have the opportunity to help would mean losing those fuzzies.

Replies from: Rain
comment by Rain · 2010-07-19T14:48:47.114Z · LW(p) · GW(p)

There's no utility in saving (animal) life? Or is that only for this particular instance?

Edit 20-Jun-2014: Frogs saved since my original post: 21.5. Frogs I've failed to save: 23.5.

comment by NancyLebovitz · 2010-07-21T08:40:24.382Z · LW(p) · GW(p)

How often do you find frogs in the stairwell? Could it make sense to carry something (a plastic bag?) to pick up the frog with so that you don't get slime on your hands?

If it were me, I think I'd go with plastic bag or other hand cover, possibly have room temperature water with me (probably good enough for frogs, and I'm willing to drink the stuff), and put the frog on the lawn unless I'm in the mood for a bit of a walk and seeing the woods.

I have no doubt that I would habitually wonder whether there are weird events in people's lives which are the result of interventions by incomprehensibly powerful beings.

comment by cousin_it · 2010-07-21T09:20:43.574Z · LW(p) · GW(p)

Once per two weeks? I would go with 1+3 for maximum fuzzies. If the frog is alive, that is.

comment by mattnewport · 2010-07-14T22:58:54.422Z · LW(p) · GW(p)

Have you collected any data on how often the frog would find its own way out if left alone? Setting up an experiment that could reliably distinguish this from it being eaten by a bird or moved by another passing human might be tricky.

Replies from: Rain
comment by Rain · 2010-07-14T23:02:38.656Z · LW(p) · GW(p)

It is impossible for the frogs to escape from the stairwell without human intervention. The stairs are fairly high and only slabs of concrete with air below them. The most I've seen a frog succeed at is making it halfway underneath the door to the maintenance area also located in the stairwell. I have never observed another human helping a frog.

From my memory (not the best experimental apparatus, but it is what I have), the ratio of frog corpses after work to live, unrescued frogs in the morning has thus far been 1:1.

Replies from: mattnewport
comment by mattnewport · 2010-07-14T23:09:00.782Z · LW(p) · GW(p)

Is there any possibility of constructing some kind of frog barrier at the top of the stairwell or amphibian escape ramp (PVC pipe?) or does the layout of the public space make that impractical? My preference would be for an engineering solution if I hypothetically valued frog survival highly. A web-cam activated frog elevator would be entertaining but probably overkill.

Of course this may not be optimal if the warm-fuzzies from individual frog-assisting episodes are of greater expected utility than automated out-of-sight out-of-mind frog moving machinery.

Replies from: byrnema, Rain
comment by byrnema · 2010-07-21T12:03:32.339Z · LW(p) · GW(p)

I think the warm-fuzzies from an engineering solution could be quite significant.

Throughout the day, if I had encountered a live or dead frog in my stairwell, my mind might return to the subject of frogs caught in stairwells several times. If I had saved the frog by hand, I would feel some satisfaction (tinged with some cynicism, see here) but also anxiety that I could not always be there for every frog. With the engineering solution, I would feel proud about human ingenuity and happy about all the potential frogs I could be saving any moment. Lots and lots of warm fuzzies.

comment by Rain · 2010-07-14T23:14:05.615Z · LW(p) · GW(p)

I would not be able to alter the stairwell in such a fashion. This is a commercial apartment complex with many other people living in the building. The mailboxes are also in this bottom-level stairwell, meaning it gets quite a bit of traffic aside from myself. I reiterate I have never seen another person help a frog, and the ratio has always been 1:1.

Other buildings in the apartment complex apparently also have this problem, as I asked someone who lived in another building if they'd ever seen a dead frog, and they said yes, on occasion, they see them when getting the mail. I did not ask if they saw any live frogs. There are at least 4 identical stairwells per building, and at least 6 buildings adjacent to a pond. This adds to my feelings of "this problem is too big for me."

Replies from: mattnewport, whpearson
comment by mattnewport · 2010-07-14T23:30:31.594Z · LW(p) · GW(p)

When I was a young child my dad was building an extension on our house. He dug a deep trench for the foundations and one morning I came down to find what my hazy memory suggests were thousands of baby frogs from a nearby lake. I spent some time ferrying buckets of tiny frogs from the trench to the top of our driveway in a bid to save them from their fate.

The following morning on the way to school I passed thousands of flat baby frogs on the road. I believe this early lesson may have inured me to the starkly beautiful viciousness of nature.

comment by whpearson · 2010-07-14T23:49:51.334Z · LW(p) · GW(p)

How wide is the entrance to the stairwell? Could you add a ramp at the sides so the frogs have a chance of getting up without inconveniencing the other people? It would also enable people with bikes to wheel them in more easily (if any of the residents store bikes in their flats).

Is there any group/person who represents the views of the residents of the building(s)? Perhaps write a letter to them with the best solution to the problem you think of and let them sort it out?

Otherwise I would probably devise some form of frog capture device, so I could quickly and cleanly remove frogs (alive or dead). Some form of hinged box controlled by a line on a pole.

Edit: Err, I just read the frequency. Once every 2 weeks doesn't justify taking a tool with you; probably just take some gloves and rescue the frogs.

comment by whpearson · 2010-07-11T18:27:58.034Z · LW(p) · GW(p)

How Facts Backfire

Mankind may be crooked timber, as Kant put it, uniquely susceptible to ignorance and misinformation, but it’s an article of faith that knowledge is the best remedy. If people are furnished with the facts, they will be clearer thinkers and better citizens. If they are ignorant, facts will enlighten them. If they are mistaken, facts will set them straight.

In the end, truth will out. Won’t it?

Maybe not. Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of information.

There are a number of ways you can run with this article. It is interesting to see it in the major press. It is also a little ironic that it presents facts in an attempt to change our opinions (specifically, to convince us that presenting facts is not a good way to change opinions).

In terms of existential risk and thinking better in general: obviously facts can sometimes overturn opinions, but it makes me wonder, where is the organisation that uses non-fact-based methods to sway opinion about existential risk? It would make sense for them to be separate; the fact-based organisations (SIAI, FHI) need to be honest so that people who are fact-philic to their message will trust them. I tend to ignore the fact-phobic (with respect to existential risk) people. But if it became sufficiently clear that foom-style AI was possible, engineering society would become necessary.

Replies from: Kaj_Sotala, JoshuaZ
comment by Kaj_Sotala · 2010-07-12T00:05:31.494Z · LW(p) · GW(p)

Interesting tidbit from the article:

One avenue may involve self-esteem. Nyhan worked on one study in which he showed that people who were given a self-affirmation exercise were more likely to consider new information than people who had not. In other words, if you feel good about yourself, you’ll listen — and if you feel insecure or threatened, you won’t.

I have long been thinking that the openly aggressive approach some display in promoting atheism / political ideas / whatever seems counterproductive, and more likely to make the other people not listen than it is to make them listen. These results seem to support that, though there have also been contradictory reports from people saying that the very aggressiveness was what made them actually think.

Replies from: whpearson, MBlume, cupholder, twanvl, Christian_Szegedy
comment by whpearson · 2010-07-12T23:41:51.847Z · LW(p) · GW(p)

I'd guess aggression would have a polarising effect, depending upon ingroup or outgroup affiliation.

Aggression from a member of your own group is directed at something important that you ought to take note of. Aggression from an outsider is possibly directed at you, so something to be ignored (if not credible) or countered.

We really need some students to do some tests upon, or a better way of searching psych research than Google.

comment by MBlume · 2010-07-12T23:31:21.453Z · LW(p) · GW(p)

Data point: After I'd spent years having the correct arguments in my hand, having indeed generated many of them myself, and simply refusing to update, Eliezer, Cectic, and Dan Meissler ganged up on me and got the job done.

I think Jesus and Mo helped too, now I think of it. That period's already getting murky in my head =/

Anyhow, point is, none of the above are what you'd call gentle.

ETA: I really do think humor is incredibly corrosive to religion. Years before this, the closest I ever came to deconversion was right after I read "Kissing Hank's Ass"

comment by cupholder · 2010-07-12T23:52:11.551Z · LW(p) · GW(p)

These results seem to support that, though there have also been contradictory reports from people saying that the very aggressiveness was what made them actually think.

Presumably there's heterogeneity in people's reactions to aggressiveness and to soft approaches. Most likely a minority of people react better to aggressive approaches and most people react better to being fed opposing arguments in a sandwich with self-affirmation bread.

comment by twanvl · 2010-07-13T17:11:21.554Z · LW(p) · GW(p)

I have long been thinking that the openly aggressive approach some display in promoting atheism / political ideas / whatever seems counterproductive, and more likely to make the other people not listen than it is to make them listen.

I believe aggressive debates are not about convincing the people you are debating with; that is likely to be impossible. Instead, they are about convincing third parties who have not yet made up their minds. For that purpose it might be better to take an overly extreme position and to attack your opponents as much as possible.

comment by Christian_Szegedy · 2010-07-13T00:30:37.311Z · LW(p) · GW(p)

I think one of the reasons this self-esteem seeding works is that identifying your core values makes other issues look less important.

On the other hand, if you e.g. independently expressed that God is an important element of your identity and belief in him is one of your treasured values, then it may backfire and it will be even harder to move you away from that. (Of course I am not sure: I have never seen any scientific data on that. This is purely a wild guess.)

comment by JoshuaZ · 2010-07-14T13:58:04.273Z · LW(p) · GW(p)

The primary study in question is here. I haven't been able to locate online a copy of the study about self-esteem and corrections.

comment by NancyLebovitz · 2010-08-06T14:00:27.785Z · LW(p) · GW(p)

The more recent analysis I've read says that people pretty much become suicide bombers for nationalist reasons, not religious reasons.

I suppose that "There should not be American bases on the sacred soil of Saudi Arabia" is a hybrid of the two, and so might be "I wanted to kill because Muslims were being hurt"-- it's a matter of group identity more than "Allah wants it".

I don't have specifics for the 9/11 bombers.

comment by XiXiDu · 2010-08-05T15:25:45.027Z · LW(p) · GW(p)

Thanks, I often commit that mistake. I just write without thinking much, not recognizing the potential importance of the given issues. I guess the reason is that I mainly write out of an urge for feedback and to alleviate mental load.

It's not really meant as an excuse but rather as an exposure of how one can use the same arguments to support a different purpose while criticizing others for those arguments and not for their differing purpose. And the bottom line is that there will have to be a tradeoff between protecting values and violating those same values in order to guarantee their preservation. You just have to decide where to draw the line. But I still think that, given extreme possibilities which cause extreme emotions, the question about potential extreme measures is a valid one.

I also note that when discussing potentially contentious subjects you need to be triply careful to be clear and impossible to credibly misunderstand.

That is true, thanks. Although I like to assume that this is a forum where people will ask what you meant before condemning you. I guess my own comments disprove this. But I take it lightly, as I'm not writing a dissertation here but merely a comment on a comment in an open thread.

Paraphrase of the middle: I value freedom of ideas above the unintentional suffering of a few people reading said ideas. Some people on LW value the potential suffering of people at the hands of a rogue AI above the freedom of ideas which might cause said suffering.

Maybe I do too; I'll tell you once I've made up my mind. My intention in starting this whole discussion was, as stated several times, the potential danger posed by people trying to avoid the potential dangers of unfriendly AI (which might be a rationalization).

P.S. Your comment absolutely hits the nail on the head. I'm an idiot sometimes. And really tired today too...argh, excuses again!

comment by [deleted] · 2010-07-29T18:03:09.186Z · LW(p) · GW(p)

Reading Michael Vassar's comments on WrongBot's article (http://lesswrong.com/lw/2i6/forager_anthropology/2c7s?c=1&context=1#2c7s) made me feel that the current technique of learning how to write a LW post isn't very efficient (read lots of LW, write a post, wait for lots of comments, try to figure out how their issues could be resolved, write another post, etc. - it uses up lots of the writer's time and lots of the commenters' time).

I was wondering whether there might be a more focused way of doing this. I.e. a short-term workshop: a few writers who have been promoted offer to give feedback to a few writers who are struggling to develop the necessary rigour, providing a faster feedback cycle, the ability to redraft an article rather than having to start totally afresh, and just general advice.

Some people may not feel that this is very beneficial - there's no need for writing to LW to be made easier (in fact, possibly the opposite) - but first off, I'm not talking about making writing for LW easier, I'm talking about making more of the writing of a higher quality. And secondly, I certainly learn a lot better given a chance to interact on that extra level. I think learning to write at an LW level is an excellent way of achieving LW's aim of helping people to think at that level.

I'm a long time lurker but I haven't even really commented before because I find it hard to jump to that next level of understanding that enables me to communicate anything of value. I wonder if there are others who feel the same or a similar way.

Good idea? Bad idea?

Replies from: None, WrongBot, cupholder
comment by [deleted] · 2010-07-29T19:52:42.793Z · LW(p) · GW(p)

We could use a more structured system, perhaps. At this point, there's nothing to stop you from writing a post before you're ready, except your own modesty. Raise the threshold, and nobody will have to yell at people for writing posts that don't quite work.

Possibilities:

  1. Significantly raise the minimum karma level.

  2. An editorial system: a more "advanced" member has to read your post before it becomes top-level.

  3. A wiki page about instructions for posting. It should include: a description of appropriate subject matter, formatting instructions, common errors in reasoning or etiquette.

  4. A social norm that encourages editing (including totally reworking an essay). The convention for blog posts on the internet in general discourages editing -- a post is supposed to be an honest record of one's thoughts at the time. But LessWrong is different, and we're supposed to be updating as we learn from each other. We could make "Please edit this" more explicit.

A related thought on karma -- I have the suspicion that we upvote more than we downvote. It would be possible to adjust the site to keep track of each person's upvote/downvote stats. That is, some people are generous with karma, and some people give more negative feedback. We could calibrate ourselves better if we had a running tally.
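
As a rough sketch of what such a running tally might look like (the vote-record format and field names here are made up for illustration, not LW's actual schema):

    from collections import defaultdict

    def vote_tally(votes):
        """votes: iterable of (voter_id, direction) pairs, direction +1 or -1."""
        counts = defaultdict(lambda: {"up": 0, "down": 0})
        for voter_id, direction in votes:
            counts[voter_id]["up" if direction > 0 else "down"] += 1
        return dict(counts)

    # Example: alice is a generous voter, bob mostly downvotes.
    print(vote_tally([("alice", +1), ("alice", +1), ("alice", -1), ("bob", -1)]))
    # {'alice': {'up': 2, 'down': 1}, 'bob': {'up': 0, 'down': 1}}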

Replies from: jimrandomh, xamdam
comment by jimrandomh · 2010-07-29T21:03:23.415Z · LW(p) · GW(p)

Kuro5hin had an editorial system, where all posts started out in a special section where they were separate and only visible to logged in users. Commenters would label their comments as either "topical" or "editorial", and all editorial comments would be deleted when the post left editing; and votes cast during editing would determine where the post went (front page, less prominent section, or deleted).

Unfortunately, most of the busy smart people only looked at the posts after editing, while the trolls and people with too much free time managed the edit queue, eventually destroying the quality of the site and driving the good users away. It might be possible to salvage that model somehow, though.
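
If it helps to picture the mechanics, here is a minimal sketch of that kind of edit-queue rule; the thresholds and section names are placeholders I made up, not Kuro5hin's actual values:

    # Sketch of a Kuro5hin-style edit queue: a post sits in editing until a time
    # limit or a vote threshold is reached, then its score decides where it goes.
    def resolve_queued_post(hours_in_queue, score, votes_cast,
                            max_hours=24, early_exit_votes=50,
                            front_page_score=20):
        if hours_in_queue < max_hours and votes_cast < early_exit_votes:
            return "still in editing (editorial comments allowed)"
        if score >= front_page_score:
            return "front page"
        if score > 0:
            return "less prominent section"
        return "deleted"

    print(resolve_queued_post(hours_in_queue=6, score=5, votes_cast=12))
    # still in editing (editorial comments allowed)
    print(resolve_queued_post(hours_in_queue=25, score=30, votes_cast=80))
    # front page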

We upvote much more than we downvote - just look at the mean comment and post scores. Also, the number of downvotes a user can make is capped at their karma.

Replies from: cupholder
comment by cupholder · 2010-07-30T00:43:57.754Z · LW(p) · GW(p)

Enthusiastically seconded.

The only change I'd make is to hide editorial comments when the post leaves editing (instead of deleting them), with a toggle option for logged-in users to carry on viewing them.

Unfortunately, most of the busy smart people only looked at the posts after editing, while the trolls and people with too much free time managed the edit queue, eventually destroying the quality of the site and driving the good users away. It might be possible to salvage that model somehow, though.

I think it is. There are several tricks we could use to give busy-smart people more of a chance to edit posts.

On Kuro5hin, if I remember right, posts left the editing queue automatically after 24 hours, either getting posted or kicked into the bit bucket. Also, users could vote to push the story out of the queue early. If Less Wrong reimplemented this system, we could raise the threshold for voting a story out of editing early, or remove the option entirely. We could even lengthen the period it spends in the editing stage. (This would also have the advantage of filtering out impatient people who couldn't wait 3 days or whatever for their story to post.)

LW's also just got a much smaller troll ratio than Kuro5hin did, which would help a lot.

Replies from: None
comment by [deleted] · 2010-07-30T08:09:25.265Z · LW(p) · GW(p)

It seems like there's at least some interest in doing something to deal with helping people to develop posting skills through a means other than simply writing lots of articles and bombarding the community with them. The editorial system seems like it has a lot of promising aspects.

The main thing is, it seems more valuable to implement a weak system than to simply talk about implementing a stronger system, so whether the editorial system is the best that can be done depends on whether the people in charge of the community are interested in implementing it.

If they turn out not to be, I still wonder whether there are a few people out there who can volunteer to help make posts better and a few people who can volunteer not to bombard LW but instead to develop their skills in a quieter way (nb: that doesn't refer to anyone in particular except, potentially, myself). Personally, I still think that would be useful, even if suboptimal.

Does the lack of a response from EY imply that he's not interested in that sort of change and, if so, is it EY who would be the one to make the decision?

Replies from: rhollerith_dot_com, NancyLebovitz, cupholder
comment by RHollerith (rhollerith_dot_com) · 2010-07-30T12:50:39.901Z · LW(p) · GW(p)

EY has stated in the past that the reason most suggestions do not result in a change in the web site is that no programmer (or no programmer that EY and EY's agents trust) is available to make the change.

Also, I think he reads only a fraction of LW these months.

comment by NancyLebovitz · 2010-07-30T10:09:15.603Z · LW(p) · GW(p)

Meanwhile, it would probably be worthwhile if people would write about any improvement they've made in their ability to think and to convey their ideas, whether it's deliberate or the result of being in useful communities.

I'm not sure that I've made improvements myself -- I think my strategy (which it took a while to make conscious) of writing for my past self who didn't have the current insight has served me pretty well -- that and General Semantics (a basic understanding that the map isn't the territory).

If I were writing for a general audience, I think I'd need to learn about appropriate levels of redundancy.

comment by cupholder · 2010-07-30T10:19:57.004Z · LW(p) · GW(p)

Does the lack of a response from EY imply that he's not interested in that sort of change and, if so, is it EY who would be the one to make the decision?

I wouldn't read anything into the lack of response, EY often doesn't comment on meta-discussion. In fact I'd guess there's a good chance he hasn't even seen this thread!

I guess it might be worth raising this in the Spring 2010 meta-thread? Come to think of it, it's been 4+ months since that meta thread was started - it may even be worth someone posting a Summer 2010 meta-thread with this as a topic starter.

Replies from: None
comment by [deleted] · 2010-07-30T11:01:52.373Z · LW(p) · GW(p)

Okay then. Well, I don't have the karma to start a thread, so I'll leave it to someone who does, if they think it's worthwhile.

If nothing else, I wondered about the possibility of doing a top-level post expressly for this purpose, so people could post an article with the idea being that comments in response would be aimed at improving it, rather than just being general comments, and with the further understanding that the original article would then be edited and people could comment on the new version. If the post got a good enough response after a few drafts, it could then be posted at the top level. Otherwise, it would be a good lesson anyway. It would also be less cluttered because it would all be within that purpose-made top-level post.

Replies from: Blueberry
comment by Blueberry · 2010-07-31T00:57:02.057Z · LW(p) · GW(p)

Sounds like a good idea. The Open Thread could be (and has been) used for this, but it may be worthwhile to set up a thread specifically for constructive criticism on draft articles.

comment by xamdam · 2010-07-30T18:21:24.823Z · LW(p) · GW(p)

Significantly raise the minimum karma level.

Another technical solution. Not trivial to implement, but also contains significant side benefits.

  • Find some subset of sequences and other highly ranked posts that are "super-core" and have large consensus not just in karma, but also in agreement by high-karma members (say the top ten).
  • Create a multiple-choice test and implement it online, which I am sure existing external tools can already handle.

Some karma + passing test gets top posting privileges.

I have to confess I abused my newly acquired posting privileges and probably diluted the site's value with a couple of posts. Thank goodness they were rather short :). I took the hint though and took to participating in the comment discussion and reading sequences until I am ready to contribute at a higher level.

comment by WrongBot · 2010-07-29T19:07:25.245Z · LW(p) · GW(p)

Is there any consensus about the "right" way to write a LW post? I see a lot of diversity in style, topic, and level of rigor in highly-voted posts. I certainly have no good way to tell if I'm doing it right; Michael Vassar doesn't think so, but he's never had a post voted as highly as my first one was. (Voting is not solely determined by post quality; this is a big part of the problem.)

I would certainly love to have a better way to get feedback than the current mechanisms; it's indisputable that my writing could be better. Being able to workshop posts would be great, but I think it would be hard to find the right people to do the workshopping; off the top of my head I can really only think of a handful of posters I'd want to have doing that, and I get the impression that they're all too busy. Maybe not, though.

(I think this is a great idea.)

Replies from: Larks, None
comment by Larks · 2010-07-30T22:35:34.704Z · LW(p) · GW(p)

Michael Vassar doesn't think so, but he's never had a post voted as highly as my first one was.

I didn't think there was anything particularly wrong with your post, but newer posts get a much higher level of karma than old ones, which must be taken into account. Some of the core sequence posts have only 2 karma, for example.

Replies from: WrongBot
comment by WrongBot · 2010-07-31T00:22:34.854Z · LW(p) · GW(p)

Agreed, and that is exactly the sort of factor I was alluding to in my parenthetical.

comment by [deleted] · 2010-07-29T19:22:07.172Z · LW(p) · GW(p)

I suppose there's a few options including: See who's willing to run workshops and then once that's known, people can choose whether to join or not. If none of the top contributors could be convinced to run them then they may still be useful for people of a lower level of post writing ability (which I suspect is where I am, at the moment). The other thing is, even regardless of who ran the workshops, the ability to get faster feedback and to redraft gives a chance to develop an article more thoroughly before posting it properly and may give a sense of where improvements can be made and where the gaps in thinking and writing are.

But I guess that questions like that are secondary to the question of whether enough people think it's a good enough idea and whether anyone would be willing to run workshops at all.

comment by cupholder · 2010-07-30T00:15:33.734Z · LW(p) · GW(p)

Upvoted for raising the topic, but the approach I'd prefer is jimrandomh's suggestion of having all posts pass through an editorial stage before being posted 'for real.'

comment by NancyLebovitz · 2010-07-25T16:16:48.020Z · LW(p) · GW(p)

Rationality applied to swimming

The author was a lousy swimmer for a long time, but got respect because he put in so much effort. Eventually he became a swim coach, and he quickly noticed that the bad swimmers looked the way he did, and the good swimmers looked very different, so he started teaching the bad swimmers to look like the good swimmers, and began becoming a better swimmer himself.

Later, he got into the physics of good swimming. For example, it's more important to minimize drag than to put out more effort.

I'm posting this partly because it's always a pleasure to see rationality, partly because the most recent chapter of Methods of Rationality reminded me of it, and mostly because it's a fine example of clue acquisition.

comment by NancyLebovitz · 2010-07-23T21:15:55.671Z · LW(p) · GW(p)

Thought without Language: a discussion of adults who've grown up profoundly deaf without having been exposed to sign language or lip-reading.

Edited because I labeled the link as "Language without Thought" -- this counts as an example of itself.

Replies from: RobinZ
comment by RobinZ · 2010-07-24T00:25:48.137Z · LW(p) · GW(p)

That is amazingly interesting.

comment by JoshuaZ · 2010-07-23T03:33:27.599Z · LW(p) · GW(p)

Two things of interest to Less Wrong:

First, there's an article about intelligence and religiosity. I don't have access to the papers in question right now, but the upshot is apparently that the correlation between intelligence (as measured by IQ and other tests) and irreligiosity can be explained with minimal emphasis on intelligence itself, and more emphasis on the ability to process information and to estimate one's own knowledge base. They found, for example, that people who were overconfident about their knowledge level were much more likely to be religious. There may still be correlation vs. causation issues, but tentatively it looks like having fewer cognitive biases and having better default rationality actually makes one less religious.

The second matter of interest to LW: Today's featured article on the English Wikipedia is the article on confirmation bias.

comment by Richard_Kennaway · 2010-07-12T11:02:04.864Z · LW(p) · GW(p)

The selective attention test (YouTube video link) is quite well-known. If you haven't heard of it, watch it now.

Now try the sequel (another YouTube video).

Even when you're expecting the tbevyyn, you still miss other things. Attention doesn't help in noticing what you aren't looking for.

More here.

comment by ata · 2010-07-24T04:53:43.696Z · LW(p) · GW(p)

Has anyone been doing, or thinking of doing, a documentary (preferably feature-length and targeted at popular audiences) about existential risk? People seem to love things that tell them the world is about to end, whether it's worth believing or not (2012 prophecies, apocalyptic religion, etc., and on the more respectable side: climate change, and... anything else?), so it may be worthwhile to have a well-researched, rational, honest look at the things that are actually most likely to destroy us in the next century, while still being emotionally compelling enough to get people to really comprehend it, care about it, and do what they can about it. (Geniuses viewing it might decide to go into existential risk reduction when they might otherwise have turned to string theory; it could raise awareness so that existential risk reduction is seen more widely as an important and respectable area of research; it could attract donors to organizations like FHI, SIAI, Foresight, and Lifeboat; etc.)

Replies from: Kevin
comment by Kevin · 2010-07-24T05:58:11.415Z · LW(p) · GW(p)

Sure, I've been thinking about it, I need $10MM to produce it though.

comment by cerebus · 2010-07-17T14:11:24.218Z · LW(p) · GW(p)

Nobel Laureate Jean-Marie Lehn is a transhumanist.

We are still apes and are fighting all around the world. We are in the prisons of dogmatism, fundamentalism and religion. Let me say that clearly. We must learn to be rational ... The pace at which science has progressed has been too fast for human behaviour to adapt to it. As I said we are still apes. A part of our brain is still a paleo-brain and many of reactions come from our fight or flight instinct. As long as this part of the brain can take over control the rational part of the brain (we will face these problems). Some people will jump up at what I am going to say now but I think at some point of time we will have to change our brains.

comment by Roko · 2010-07-11T12:38:39.241Z · LW(p) · GW(p)

"Therefore, “Hostile Wife Phenomenon” is actually “Distant, socially dysfunctional Husband Syndrome” which manifests frequently among cryonics proponents. As a coping mechanism, they project (!) their disorder onto their wives and women in general to justify their continued obsessions and emotional failings."

Assorted hilarious anti-cryonics comments on the accelerating future thread

Replies from: nhamann
comment by nhamann · 2010-07-12T20:17:45.293Z · LW(p) · GW(p)

If anyone is interested in seeing comments that are more representative of a mainstream response than what can be found from an Accelerating Future thread, Metafilter recently had a post on the NY Times article.

The comments aren't hilarious and insane, they're more of a casually dismissive nature. In this thread, cryonics is called an "afterlife scam", a pseudoscience, science fiction (technically true at this stage, but there's definitely an implied negative connotation on the "fiction" part, as if you shouldn't invest in cryonics because it's just nerd fantasy), and Pascal's Wager for atheists (The comparison is fallacious, and I thought the original Pascal's Wager was for atheists anyways...). There are a few criticisms that it's selfish, more than a few jokes sprinkled throughout the thread (as if the whole idea is silly), and even your classic death apologist.

All in all, a delightful cornucopia of irrationality.

ETA: I should probably point out that there were a few defenses. The most highly received defense of cryonics appears to be this post. There was also a comment from someone registered with Alcor that was very good, I thought. I attempted a couple of rebuttals, but I don't think they were well-received.

Also, check out this hilarious description of Robin Hanson from a commenter there:

The husband in that article sounded like an annoying nerd. Would I want to be frozen and wake up in a world run by these annoying douchebags? His 'futurecracy' idea seems idiotic (and also unworkable)

I guess that the fatal problem with cryonics is all the freaking nerds interested in it.

Replies from: RobinZ, Roko
comment by RobinZ · 2010-07-12T20:43:38.743Z · LW(p) · GW(p)

The responses are interesting. I think this is the most helpful to my understanding:

I'm getting sort of tired arguing about the futility of current cryogenics, so I won't.

I will state that, if my spouse fell for some sort of afterlife scam that cost tens of thousands of dollars, I WOULD be angry.

"This is not a hobby or conversation piece,” he wrote in 1968, adding, “it is the struggle for survival. Drive a used car if the cost of a new one interferes. Divorce your wife if she will not cooperate.

Scientology urges the exact same thing.
posted by muddgirl at 5:52 PM on July 11

I think this is the biggest PR hurdle for cryonics: it resembles (superficially) a transparent scam selling the hope of immortality for thousands of dollars.

Replies from: None
comment by [deleted] · 2010-07-13T00:22:13.821Z · LW(p) · GW(p)

um... why isn't it? There's a logically possible chance of revival someday, yeah. But with no way to estimate how likely it is, you're blowing money on mere possibility.

We don't normally make bets that depend on the future development of currently unknown technologies. We aren't all investing in cold fusion just because it would be really awesome if it panned out.

Sorry, I know this is a cryonics-friendly site, but somebody's got to say it.

Replies from: Christian_Szegedy, lsparrish, EStokes, RobinZ, jimrandomh, JoshuaZ
comment by Christian_Szegedy · 2010-07-13T00:40:58.408Z · LW(p) · GW(p)

There are a lot of alternatives to fusion energy, and since energy production is a widely recognized societal issue, making individual bets on it is not an immediate matter of life and death on a personal level.

I agree with you, though, that a sufficiently high probability estimate on the workability of cryonics is necessary to rationally spend money on it.

However, if you give 1% chance for both fusion and cryonics to work, it could still make sense to bet on the latter but not on the first.

Replies from: None
comment by [deleted] · 2010-07-13T00:50:42.186Z · LW(p) · GW(p)

Don't read too much into my fusion analogy; you're right that cryonics is different than fusion.

Replies from: JoshuaZ, Christian_Szegedy
comment by JoshuaZ · 2010-07-13T01:04:40.065Z · LW(p) · GW(p)

May I suggest also that we be careful to distinguish cold fusion from fusion in general? Cold fusion is extremely unlikely. Hot fusion reactors whether laser confinement or magnetic confinement already exist, the only issue is getting them to produce more useful energy than you put in. This is very different than cold fusion where the scientific consensus is that there's nothing fusing.

comment by Christian_Szegedy · 2010-07-13T01:01:51.569Z · LW(p) · GW(p)

... and different to almost any other unproven technology (for the exact same reason).

comment by lsparrish · 2010-07-13T00:33:32.755Z · LW(p) · GW(p)

That's ok, it's a skepticism friendly site as well.

I don't see a mechanism whereby I get a benefit within my lifetime by investing in cold fusion, in the off chance that it is eventually invented and implemented.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-13T00:39:12.234Z · LW(p) · GW(p)

I don't see a mechanism whereby I get a benefit within my lifetime by investing in cold fusion, in the off chance that it is eventually invented and implemented.

Well, if you think there's a decent probability for cryonics to turn out then investing in pretty much anything long-term becomes much more likely to be personally beneficial. Indeed, research in general increases the probability that cryonics will end up working (since it reduces the chance of catastrophic events or social problems and the like occurring before the revival technology is reached). The problem with cold fusion is that it is extremely unlikely to work given the data we have. I'd estimate that it is orders of magnitude more likely that, say, Etale cohomology turns out to have a practical application than it is that cold fusion will turn out to function. (I'm picking Etale cohomology as an example because it is pretty but very abstract math that as far as I am aware has no applications and seems very unlikely to have any applications for the foreseeable future.)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-07-13T02:56:14.706Z · LW(p) · GW(p)

You don't think it likely that etale cohomology will be applied to cryptography? I'm sure there are papers already claiming to apply it, but I wouldn't want to evaluate them. Some people describe it as part of Schoof's algorithm, but I'm not sure that's fair. (Or maybe you count elliptic curve cryptography as whimsy - it won't survive quantum computers any longer than RSA.)

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-13T03:20:31.644Z · LW(p) · GW(p)

Yeah, ok. That may have been a bad example, or it may be an indication that everything gets some application. I don't know how it relates to Schoof's algorithm. It isn't as far as I'm aware used in the algorithm or in the correctness proof but this is stretching my knowledge base. I don't have enough expertise to evaluate any claims about applying Etale cohomology to cryptography.

I'm not sure what to replace that example with. Stupid cryptographers going and making my field actually useful to people.

comment by EStokes · 2010-07-17T00:08:08.496Z · LW(p) · GW(p)

There's always a way to estimate how likely something is, even if it's not a very accurate prediction. And 'mere' used like that seems kinda like a dark side word, if you'll excuse me.

Cryonics is theoretically possible, in that it isn't inconsistent with science/physics as we know it so far. I can't really delve into this part much, as I don't know anything about cold fusion and thus can't understand the comparison properly, but it sounds as if it might be inconsistent with physics?

Possibly relevant: Is Molecular Nanotechnology Scientific?

Also, the benefits of cryonics working if you invested in it would be greater than those of investing in cold fusion.

And this is just the impression I get, but it sounds like you're being a contrarian contrarian. I think it's your last sentence: it made me think of Lonely Dissent.

Replies from: None
comment by [deleted] · 2010-07-17T00:54:56.690Z · LW(p) · GW(p)

The unfair thing is, the more a community (like LW) values critical thinking, the more we feel free to criticize it. You get a much nicer reception criticizing a cryonicist's reasoning than criticizing a religious person's. It's easy to criticize people who tell you they don't mind. The result is that it's those who need constructive criticism the most who get the least. I'll admit I fall into this trap sometimes.

Replies from: RobinZ
comment by RobinZ · 2010-07-23T18:19:26.728Z · LW(p) · GW(p)

(belated reply:) You're right about the openness to criticism part, but there's another thing that goes with it: the communities that value critical thinking will respond to criticism by thinking more, and on occasion this will literally lead to the consensus reversing on the specific question. Without a strong commitment to rationality, however, frequently criticism is met by intransigence instead, even when it concerns the idea rather than the person.

Yes, people caught in anti-epistemological binds get less criticism - but they usually don't listen to criticism, either. Dealing with these is an unsolved problem.

comment by RobinZ · 2010-07-13T00:52:43.090Z · LW(p) · GW(p)

Well, right off the bat, there's a difference between "cryonics is a scam" and "cryonics is a dud investment". I think there's sufficient evidence to establish the presence of good intentions - the more difficult question is whether there's good evidence that resuscitation will become feasible.

comment by jimrandomh · 2010-07-13T12:57:06.451Z · LW(p) · GW(p)

But with no way to estimate how likely it is, you're blowing money on mere possibility.

You seem to be under the assumption that there is some minimum amount of evidence needed to give a probability. This is very common, but it is not the case. It's just as valid to say that the probability that an unknown statement X about which nothing is known is true is 0.5, as it is to say that the probability that a particular well-tested fair coin will come up heads is 0.5.

Probabilities based on lots of evidence are better than probabilities based on little evidence, of course; and in particular, probabilities based on little evidence can't be too close to 0 or 1. But not having enough evidence doesn't excuse you from having to estimate the probability of something before accepting or rejecting it.
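
A minimal sketch of that point, using a Beta-Bernoulli model and numbers picked purely for illustration: with no observations the estimate sits at 0.5 with a wide spread, and only a substantial pile of evidence lets it move convincingly toward 0 or 1.

```python
import math

def beta_summary(k, n):
    """Posterior mean and spread after k 'true' outcomes in n observations,
    starting from a uniform Beta(1, 1) prior."""
    a, b = 1 + k, 1 + (n - k)
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

for k, n in [(0, 0), (2, 3), (20, 30), (200, 300)]:
    mean, sd = beta_summary(k, n)
    print(f"after {n:3d} observations: estimate {mean:.2f} +/- {sd:.2f}")
# With no data: 0.50 +/- 0.29. The estimate only tightens, and can only
# justifiably approach 0 or 1, as the evidence accumulates.
```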

Replies from: FAWS
comment by FAWS · 2010-07-13T14:19:31.250Z · LW(p) · GW(p)

I'm not disputing your point vs cryonics, but 0.5 will only rarely be the best possible estimate for the probability of X. It's not possible to think about a statement about which literally nothing is known (in the sense of information potentially available to you). At the very least you either know how you became aware of X, or that X suddenly came to your mind without any apparent reason. If you can understand X, you will know how complex X is. If you don't, you will at least know that, and can guess at the complexity based on the information density you expect for such a statement and its length.

Example: If you hear someone whom you don't specifically suspect of having a reason to make it up say that Joachim Korchinsky will marry Abigail Medeiros on August 24, that statement should probably be assigned a probability quite a bit higher than 0.5, even if you don't know anything about the people involved. If you generate the same statement yourself by picking names and a date at random, you should probably assign it a probability very close to 0.

Basically it comes down to this: Most possible positive statements that carry more than one bit of information are false, but most methods of encountering statements are biased towards true statements.
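
A rough illustration of this, with every number invented for the example rather than taken from anywhere: a statement assembled from randomly chosen details starts with a tiny prior probability, while hearing the same statement from someone with no apparent motive to invent it acts as a large Bayesian likelihood ratio.

```python
# All figures below are assumptions chosen only to illustrate the direction
# of the effect, not estimates of real frequencies.
prior = 3e-9            # chance a randomly assembled "X marries Y on date Z" is true
likelihood_ratio = 1e9  # assumed: how much more likely an uninvolved speaker is to
                        # assert the claim if it is true than if it is false

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(f"generated at random: ~{prior:.0e}   heard from someone: ~{posterior:.2f}")
# ~3e-09 versus ~0.75: the same sentence deserves very different probabilities
# depending on how you came to consider it.
```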

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-16T00:32:26.884Z · LW(p) · GW(p)

I wonder what the average probability of truth is for every spoken statement made by the human populace on your average day, for various message lengths. Anybody wanna try some Fermi calculations?

I'm guessing it's rather high, as most statements are trivial observations about sensory data, performative utterances, or first-glance approximations of one's preferences. I would also predict sentence accuracy drops off extremely quickly the more words the sentence has, and especially so the more syllables there are per word in that sentence.

Replies from: FAWS
comment by FAWS · 2010-07-16T11:21:12.615Z · LW(p) · GW(p)

Once you are beyond the most elementary of statements I really don't think so, rather the opposite, at least for unique rather than for repeated statements. Most untrue statements are probably either ad hoc lies ("You look great." "That's a great gift." "I don't have any money with me.") or misremembered information.

In the case of ad hoc lies there is not enough time to invent plausible details, and inventing details without time to think it through increases the risk of being caught; in the case of misremembered information you are less likely to know or remember additional information you could include in the statement than someone who really knows the subject and wouldn't make that error. Of course more information simply means including more things even the best experts on the subject are simply wrong about, as well as more room for misrememberings, but I think the first effect dominates because there are many subjects the second effect doesn't really apply to, e.g. the content of a work of fiction or the constitution of a state (to an extent even legal matters in general).

Complex untrue statements would be things like rehearsed lies and anecdotes/myths/urban legends.

Consider the so-called conjunction fallacy: if it were maladaptive for evaluating the truth of statements encountered normally, it probably wouldn't exist. So in everyday conversation (or at least the sort of situations that are relevant for the propagation of the memes and/or genes involved), complex statements, at least of those kinds that can be observed to be evaluated "fallaciously", are probably more likely to be true.

comment by JoshuaZ · 2010-07-13T00:31:49.360Z · LW(p) · GW(p)

But with no way to estimate how likely it is, you're blowing money on mere possibility.

There isn't no way to estimate it. We can make reasonable estimations of probability based on the data we have (what we know about nanotech, what we know about brain function, what we know about chemical activity at very low temperatures, etc.).

Moreover, it is always possible to estimate something's likelihood, and one cannot simply say "oh, this is difficult to estimate accurately, so I'll assign it a low probability." For any statement A that is difficult to estimate, I could just as easily make the same argument for ~A. Obviously, A and ~A can't both have low probabilities.
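
As a trivial worked check of that last point (a sketch, not anything from the comment itself): probabilities of a statement and its negation must sum to 1, so they cannot both be small.

```python
# Apply "hard to estimate, so assign it a low probability" to both A and ~A.
p_A, p_not_A = 0.1, 0.1
if abs(p_A + p_not_A - 1.0) > 1e-9:
    print("incoherent: P(A) + P(~A) must equal 1, so at least one is >= 0.5")
```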

Replies from: None
comment by [deleted] · 2010-07-13T00:47:27.531Z · LW(p) · GW(p)

That's true; uncertainty about A doesn't make A less likely. It does, however, make me less likely to spend money on A, because I'm risk-averse.

Replies from: lsparrish
comment by lsparrish · 2010-07-13T01:31:02.730Z · LW(p) · GW(p)

Have you decided on a specific sum that you would spend based on your subjective impression of the chances of cryonics working?

Replies from: None
comment by [deleted] · 2010-07-13T01:34:44.585Z · LW(p) · GW(p)

Maybe $50. That's around the most I'd be willing to accept losing completely.

Replies from: lsparrish
comment by lsparrish · 2010-07-13T01:54:57.256Z · LW(p) · GW(p)

Nice. I believe that would buy you indefinite cooling as a neuro patient, if about a billion other individuals (perhaps as few as 100 million) are also willing to spend the same amount.

Would you pay that much for a straight-freeze, or would that need to be an ideal perfusion with maximum currently-available chances of success?

comment by Roko · 2010-07-12T21:59:30.379Z · LW(p) · GW(p)

I wonder how much money it would cost to commission the required science and marketing to get 10^5 cryopreserved people?

I welcome your guesses.

My guess ROT13'd

V guvax gung vg jbhyq pbfg nebhaq bar uhaqerq zvyyvba qbyynef bire n crevbq bs guvegl lrnef

comment by xamdam · 2010-07-09T14:58:10.985Z · LW(p) · GW(p)

Very interesting story about a project that involved massive elicitation of expert probabilities. Especially of interest to those with Bayes Nets/Decision analysis background. http://web.archive.org/web/20000709213303/www.lis.pitt.edu/~dsl/hailfinder/probms2.html

comment by JoshuaZ · 2010-07-09T14:00:20.174Z · LW(p) · GW(p)

Machine learning is now being used to predict manhole explosions in New York. This is another example of how machine learning/specialized AI are becoming increasingly commonplace, to the point where they are being used for very mundane tasks.

Replies from: billswift
comment by billswift · 2010-07-09T17:21:25.358Z · LW(p) · GW(p)

Somebody said that the reason there is no progress in AI is that once a problem domain is understood well enough that there are working applications in it, nobody calls it AI any longer.

Replies from: wnoise
comment by wnoise · 2010-07-09T20:55:55.758Z · LW(p) · GW(p)

I think philosophy is a similar case. Physics used to be squarely in philosophy, until it was no longer a confused mess, but actually useful. Linguistics too used to be considered a branch of philosophy.

Replies from: Blueberry
comment by Blueberry · 2010-07-09T20:58:37.411Z · LW(p) · GW(p)

As did economics.

comment by Wei Dai (Wei_Dai) · 2010-09-25T02:06:14.128Z · LW(p) · GW(p)

They could talk about it elsewhere.

My understanding is that waitingforgodel doesn't particularly want to discuss that topic, but thinks that it's important that LW's moderation policy be changed in the future for other reasons. In that case it appears to me the best way to go about it is to try to convince Eliezer using rational arguments.

A public commitment has been made.

Commitment to a particular moderation policy?

Eliezer has a bias toward secrecy.

I'm inclined to agree, but do you have an argument that he is biased (instead of us)?

In my observation Eliezer becomes irrational when it comes to dealing with risk.

I'd be interested to know what observation you're referring to.

Eliezer has (much) higher status. Status drastically hinders the ability to take on board other people's ideas when they contradict your own.

True, but I've been able to change his mind on occasion (whereas I don't think I've ever succeeded in changing Robin Hanson's for example).

The most important rational arguments that weigh into the decision involve discussing the subject matter itself. This is forbidden.

I doubt that waitingforgodel has any arguments involving the forbidden topic itself. Again, he hasn't shown any interest in that topic, but just in the general moderation policy.

Overall, I agree that Eliezer is unlikely to be persuaded, but it still seems to be a better chance than anything else waitingforgodel can do.

Replies from: wedrifid
comment by wedrifid · 2010-09-25T08:19:37.143Z · LW(p) · GW(p)

My understanding is that waitingforgodel doesn't particularly want to discuss that topic, but thinks that it's important that LW's moderation policy be changed in the future for other reasons.

See waitingforgodel's actual words on the subject. We could speculate that these "aren't his real reasons" but they certainly are sane reasons, and it isn't usually useful to presume we know what people want despite what they say. At least for the purpose of good faith discussion, if not for our personal judgement.

In that case it appears to me the best way to go about it is to try to convince Eliezer using rational arguments.

Waitingforgodel's general goals could be achieved without relying on LW itself, but in a way that essentially nullifies the censorship influence (at least in an 'opt in' manner), even keeping the ongoing inconvenience trivial. This wouldn't be easy or likely for him to achieve, but see below for a possible option. Assuming an outcome was achieved that ensured overt censorship created more discussion rather than less (Streisand Effect), it may actually become in Eliezer's interest to allow such discussions on LW. That would remove attention from the other location and put it back to a place where he can express a greater but still sub-censorship form of influence.

Commitment to a particular moderation policy?

More so on this specific topic than the general case. You are right that it wouldn't be violating a public commitment to not censor something unrelated.

Now, there is a distinction to be made that I consider important. Let's not pretend this is about moderation. Moderation in any remotely conventional sense would be something that applied to Eliezer's reply and not Roko's post; there has hardly been a more dramatic instance of personal abuse, and the response was anything but 'moderate'. Without, for the purpose of this point, labelling it good or bad: this is about censoring an idea. I don't think those who are most in support of the banning would put this in the same category as moderation.

I'm inclined to agree, but do you have an argument that he is biased (instead of us)?

Not right now, but it is true that 'him or us' is something to consider if I were focusing on this issue. I actually typed some examples in the grandparent but removed them. I present the 'secrecy bias' as a premise which the reader will either share or not, without getting distracted by disagreement over the underlying reasoning.

True, but I've been able to change his mind on occasion (whereas I don't think I've ever succeeded in changing Robin Hanson's for example).

This is true and I should take the time to say I'm impressed with how well Eliezer works to counter the status effect. This is something that is important to him and he handles it better than most.

I model Robin as an academic, actually using Robin's theories on how academics can be expected to behave to make reasonably good predictions about his behavior. It isn't often that saying someone has irrational biases actually constitutes a compliment. :)

Overall, I agree that Eliezer is unlikely to be persuaded, but it still seems to be a better chance than anything else waitingforgodel can do.

Absolutely. But then, who really thought he could significantly increase existential risk anyway?

It would probably at least be possible for him to develop the skills and connections to create a public place for discussion of anything and everything forbidden here, and make it sufficiently visible that censorship actually increases visibility. He could even modify your new comments viewer such that it also displays discussion of the censored topics, probably highlighted for visibility.

Alternately, he could arrange a way to cooperate with other interested parties to collectively condition all their existential risk donation on certain minimum standards of behavior. This is again not coercive, just practical action. It makes a lot of sense to refrain from providing assistance to an organisation that censors all discussion regarding whether the AI they wish to create would torture people who didn't give them all their money. It also doesn't mean that the charitable contributors are committing to wasting the money on crack and hookers if they do not get their way. Contributions could be invested wisely and remain available to the first existential risk and opportunity organisation that meets a standard of predicted effectiveness and acceptable ethical behavior.

I doubt that waitingforgodel has any arguments involving the forbidden topic itself.

He probably doesn't. Others, including myself, do, and analyzing the subject would produce more (for and against). This suggests that it would be a bad idea for wfg to try to persuade Eliezer with rational argument himself. If his belief is that other people have arguments that are worth hearing, then he is best off seeking a way to make them heard.

People should never limit themselves to actions that are ineffective, except when the 'choke' effect is in play and you need to signal low status. That actually seems to be the main reason we would try to make him work within the LW power structure. We wouldn't, for example, tell Robin Hanson or even yourself that you should limit yourself to persuading the authority figures. We'd expect you to do what works.

I don't think waitingforgodel will do any of these things, and we could even say that he could not do them: the motivation to gain the skills required for that sort of social influence is a trait that few people have, and wfg (like most people) is unlikely to be willing to engage in the personal development that leads in that direction.

(Thank you for the well reasoned and non-aggressive responses. I value being able to explore the practical implications of the issue. This strikes at the core of important elements of instrumental rationality.)

(A random note: I had to look up the name for the Streisand Effect from a comment made here yesterday by Kodos. I was surprised to discover just how many comments have been made since then. I didn't keep a count but it was a lot.)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-09-25T19:46:12.238Z · LW(p) · GW(p)

That actually seems to be the main reason we would try to make him work within the LW power structure.

I don't think it was the main reason for my suggestion. I thought that threatening Eliezer with existential risk was obviously a suboptimal strategy for wfg, and looked for a better alternative to suggest to him. Rational argument was the first thing that came to mind since that's always been how I got what I wanted from Eliezer in the past.

You might be right that there are other even more effective approaches wfg could take to get what he wants, but to be honest I'm more interested in talking about Eliezer's possible biases than the details of those approaches. :)

Your larger point about not limiting ourselves to actions that are ineffective does seem like a good one. I'll have to think a bit about whether I'm personally biased in that regard.

Replies from: wedrifid
comment by wedrifid · 2010-09-26T04:37:20.008Z · LW(p) · GW(p)

I'm more interested in talking about Eliezer's possible biases than the details of those approaches. :)

I am trying to remember the reference to Eliezer's discussion of keeping science safe by limiting it to people who are able to discover it themselves, i.e. Security by FOFY. I know he has created a post somewhere but don't have the link (or keyword cache). If I recall, he also had Harry preach on the subject and referenced an explicit name.

I wouldn't go so far as to say the idea is useless but I also don't quite have Eliezer's faith. I also wouldn't want to reply to a straw man from my hazy recollections.

comment by SilasBarta · 2010-07-27T04:03:05.769Z · LW(p) · GW(p)

Slashdot having an epic case of tribalism blinding their judgment? This poster tries to argue that, despite Intelligent Design proponents being horribly wrong, it is still appropriate for them to use the term "evolutionist" to refer to those they disagree with.

The reaction seems to be basically, "but they're wrong, why should they get to use that term?"

Huh?

Replies from: ata, JoshuaZ
comment by ata · 2010-07-27T04:21:51.789Z · LW(p) · GW(p)

Slashdot having an epic case of tribalism blinding their judgment?

I haven't regularly read Slashdot in several years, but I seem to recall that it was like that pretty much all the time.

comment by JoshuaZ · 2010-07-27T04:05:34.066Z · LW(p) · GW(p)

There's a legitimate reason to not want ID proponents and creationists to use the term "evolutionist" although it isn't getting stated well in that thread. In particular, the term is used to portray evolution as an ideology with ideological adherents. Thus, the use of the term "evolutionism" as well. It seems like the commentators in question have heard some garbled bit about that concern and aren't quite reproducing it accurately.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-27T04:13:45.013Z · LW(p) · GW(p)

Thanks for the reply.

Wouldn't your argument apply just the same to any inflection of a term to have "ism"?

If you and I are arguing about whether wumpuses are red, and you think they are, is it a poor portrayal to refer to you as a "reddist"? Does that imply it's an ideology, etc?

What would you suggest would be a better term for ID proponents to use?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-27T04:16:31.524Z · LW(p) · GW(p)

I presume someone who took this argument seriously would either a) say that it's ok to use the term if they stop making ridiculous claims about ideology, or b) suggest "mainstream biologists" or "evolution proponents", both of which are wordy but accurate (I don't think that even ID proponents would generally disagree with the point that they aren't the mainstream opinion among biologists.)

Replies from: SilasBarta
comment by SilasBarta · 2010-07-27T04:21:18.424Z · LW(p) · GW(p)

Do you expect that, in general, people should never use the form "X-ist", but rather, use "X proponent"? Should evolution proponents use "Intelligent Design advocate" and "creation advocate"?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-27T04:34:34.273Z · LW(p) · GW(p)

If a belief doesn't fit an ideological or religious framework, I think that "X-ist" and "X-ism" are often bad. I actually use the phrase "ID proponent" fairly often partially for this reason. I'm not sure however that this case is completely symmetric, given that ID proponents self-identify as part of the "intelligent design movement" (a term used for example repeatedly by William Dembski and occasionally by Michael Behe.)

comment by Unknowns · 2010-07-26T10:40:51.301Z · LW(p) · GW(p)

A second post has been banned. Strange: it was on a totally different topic from Roko's.

Replies from: Eliezer_Yudkowsky, cousin_it, jimrandomh, wedrifid
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-26T12:02:50.999Z · LW(p) · GW(p)

Still the sort of thing that will send people close to the OCD side of the personality spectrum into a spiral of nightmares, which, please note, has apparently already happened in at least two cases. I'm surprised by this, but accept reality. It's possible we may have more than the usual number of OCD-side-of-the-spectrum people among us.

Replies from: Roko, xamdam, NancyLebovitz
comment by Roko · 2010-07-26T13:08:29.578Z · LW(p) · GW(p)

So, this is the problem that didn't occur to me. I assumed implicitly that because such things were easy for me to brush off, the same logic would apply to others. Which is kind of silly, because I knew about one of the previous worriers from Benton House.

I think that the bottom line here is that I need to update in favor of greater general caution surrounding anything to do with the singularity, AGI, etc.

comment by xamdam · 2010-07-26T13:55:27.724Z · LW(p) · GW(p)

Was the discussion in question epistemologically interesting (vs. intellectual masturbation)? If so, how many OCD personalities joining the site would it take to call for closing the thread? I am curious about the decision criteria. Thanks.

As an aside, I've had some SL-related psychological effects, particularly related to material notion of self: a bit of trouble going to sleep, realizing that logically there is little distinction from death-state. This lasted a short while, but then you just learn to "stop worrying and love the bomb". Besides "time heals all wounds" certain ideas helped, too. (I actually think this is an important SL, though it does not sit well within the SciFi hierarchy).

This worked for me, but I am generally very low on the OCD scale, and I am still mentally not quite ready for some of the discussions going on here.

Replies from: Apprentice, Roko
comment by Apprentice · 2010-07-26T14:35:32.058Z · LW(p) · GW(p)

If so, how many OCD personalities joining the site would call for closing the thread? I am curious about decision criteria. Thanks.

It is impossible to have rules without Mr. Potter exploiting them.

comment by Roko · 2010-07-26T14:42:18.294Z · LW(p) · GW(p)

I've had some SL-related psychological effects, particularly related to material notion of self: a bit of trouble going to sleep, realizing that logically there is little distinction from death-state.

There is an upside to this, though. Timelessly speaking, there is nothing special about the moment of your death, since there are always going to be other yous elsewhere that are alive, and there will always be some continuations of any given experience moment that survive. It is very Zen.

comment by NancyLebovitz · 2010-07-26T12:38:37.133Z · LW(p) · GW(p)

Is it OCD or depression? Depression can include (is defined by?) obsessively thinking about things that make one feel worse.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-26T13:13:14.792Z · LW(p) · GW(p)

Depressive thinking generally focuses on short term issues or general failure. I'm not sure this reflects that. Frankly, it seems to come across superficially at least more like paranoia, especially of the form that one historically saw (and still sees) in some Christians worrying about hell and whether or not they are saved. The reaction to these threads is making me substantially update my estimates both for LW as a rational community and for our ability to discuss issues in a productive fashion.

comment by cousin_it · 2010-07-26T11:25:11.568Z · LW(p) · GW(p)

(comment edited)

I wonder why PlaidX's post isn't getting deleted - the discussion there is way closer to the forbidden topic.

comment by jimrandomh · 2010-07-26T12:27:29.804Z · LW(p) · GW(p)

Yep. But not unexpectedly this time; homung posted in the open thread that he was looking for 20 karma so he could post on the subject, and I sent him a private message saying he shouldn't, which he either didn't see or ignored.

comment by wedrifid · 2010-07-31T04:16:24.234Z · LW(p) · GW(p)

What was the second topic? I am most interested in knowing just what things are forbidden.

Replies from: Unknowns, ata
comment by Unknowns · 2010-07-31T05:22:49.217Z · LW(p) · GW(p)

It was about the possibility of torturing someone by creating copies of the person and torturing them.

comment by ata · 2010-07-31T04:52:27.083Z · LW(p) · GW(p)

If I'm thinking of the right post, it's another one that involved AI and torture, though from a very different angle than Roko's post. It was a dialogue between a human and a uFAI; I don't quite remember what points it was trying to make, but if we're talking about what could affect people with OCD/anxiety conditions, then it's probably just the "uFAI talking about torturing people" aspect that was deemed problematic anyway.

Replies from: wedrifid
comment by wedrifid · 2010-07-31T05:12:56.195Z · LW(p) · GW(p)

but if we're talking about what could affect people with OCD/anxiety conditions, then it's probably just the "uFAI talking about torturing people" aspect that was deemed problematic anyway.

Ahh, I remember the one. It was titled "What do you choose? 3^^^3 people being tortured for 50 years or some E. coli in the eye?" uFAIs in counterfactuals do evil things in contrived scenarios. It's what they do.

Replies from: ata
comment by ata · 2010-07-31T05:15:58.418Z · LW(p) · GW(p)

Oh, wow, I don't think I saw that one. I guess that makes three banned AI-torture posts, then?

Edit: The one I was thinking of was "Acausal torture? But a scratch, in the multiverse. (A dialogue for human and UFAI)".

comment by Will_Newsome · 2010-07-16T00:10:49.053Z · LW(p) · GW(p)

Have any LWers traveled the US without a car/house/lot-of-money for a year or more? Is there anything an aspiring rationalist in particular should know on top of the more traditional advice? Did you learn much? Was there something else you wish you'd done instead? Any unexpected setbacks (e.g. ended up costing more than expected; no access to books; hard to meet worthwhile people; etc.)? Any unexpected benefits? Was it harder or easier than you had expected? Is it possible to be happy without a lot of social initiative? Did it help you develop social initiative? What questions do you wish you would have asked beforehand, and what are the answers to those questions?

Actually, any possibly relevant advice or wisdom would be appreciated. :D

comment by khafra · 2010-09-23T13:05:30.692Z · LW(p) · GW(p)

Ironically, your comment series is evidence that censorship partially succeeded in this case. Although existential risk could increase, that was not the primary reason for suppressing the idea in the post.

Replies from: timtyler, kodos96
comment by timtyler · 2010-09-23T20:14:17.579Z · LW(p) · GW(p)

Succeeded - in promoting what end?

comment by kodos96 · 2010-09-24T03:44:54.308Z · LW(p) · GW(p)

Streisand Effect

Replies from: wedrifid
comment by wedrifid · 2010-09-24T05:00:52.161Z · LW(p) · GW(p)

I've actually speculated as to whether Eliezer was going MoR:Quirrel on us. Given that aggressive censorship was obviously going to backfire, a shrewd agent would not use such an approach if they wanted to actually achieve the superficially apparent goal. Whenever I see an intelligent, rational player do something that seems to be contrary to their interests, I take a second look to see if I am understanding what their real motivations are. This is an absolutely vital skill when dealing with people in a corporate environment.

Could it be the case that Eliezer is passionate about wanting people to consider torture:AIs and so did whatever he could to make it seem important to people, even though it meant taking a PR hit in the process? I actually thought this question through for several minutes before feeling it was safe to dismiss the possibility.

Replies from: kodos96, lessdazed
comment by kodos96 · 2010-09-24T05:58:18.502Z · LW(p) · GW(p)

So I actually haven't read MoR - could you summarize the reference for me? I mean, I can basically see what you're saying from context, but is there anything beyond that it would be useful to know?

My instinct is that it just doesn't feel like something Eliezer would do. But what do I know?

Replies from: wedrifid
comment by wedrifid · 2010-09-24T06:11:52.869Z · LW(p) · GW(p)

So I actually haven't read MoR - could you summarize the reference for me? I mean, I can basically see what you're saying from context, but is there anything beyond that it would be useful to know?

There isn't much more to it than can be inferred from the context. MoR:Quirrel is just a clever, devious and rational manipulator.

My instinct is that it just doesn't feel like something Eliezer would do. But what do I know?

I don't either... but then that's the assumption MoR:Harry made about MoR:Dumbledore. At Quirrel's prompting Harry decided "it was time and past time to ask Draco Malfoy what the other side of that war had to say about the character of Albus Percival Wulfric Brian Dumbledore." :)

(Of course EY hasn't been in a war and I don't think there are any people who accuse him of being an especially devious political manipulator.)

comment by lessdazed · 2011-08-16T06:23:56.057Z · LW(p) · GW(p)

I had thought about it and reached no conclusion.

comment by JamesAndrix · 2010-08-06T07:40:46.934Z · LW(p) · GW(p)

http://www.damninteresting.com/this-place-is-not-a-place-of-honor

Note to reader: This thread is curiosity-inducing, and this is affecting your judgement. You might think you can compensate for this bias, but you probably won't in actuality. Stop reading anyway. Trust me on this. Edit: Me, and Larks, and ocr-fork, AND ROKO and [some but not all others]

I say for now because those who know about this are going to keep looking at it and determine it safe/rebut it/make it moot. Maybe it will stay dangerous for a long time, I don't know, but there seems to be a decent chance that you'll find out about it soon enough.

Don't assume it's Ok because you understand the need for friendliness and aren't writing code. There are no secrets to intelligence in hidden comments. (Though I didn't see the original thread, I think I figured it out and it's not giving me any insights.)

Don't feel left out or not smart for not 'getting it'; we only 'got it' because it was told to us. Try to compensate for your ego. If you fail, stop reading anyway.

Ab ernyyl fgbc ybbxvat. Phevbfvgl erfvfgnapr snvy.

http://www.damninteresting.com/this-place-is-not-a-place-of-honor

Replies from: cousin_it
comment by cousin_it · 2010-08-06T08:58:52.550Z · LW(p) · GW(p)

and I think everyone who heard it either outright agrees with Eliezer or thinks it wise to defer to his judgment for now.

I heard it and I don't think it's "wise" to defer to Eliezer's judgment on the matter. I stopped discussing this stuff on LW for a different reason: I feel LW is Eliezer's site and I'd better follow his requests here.

Replies from: XiXiDu, JamesAndrix
comment by XiXiDu · 2010-08-06T11:06:00.805Z · LW(p) · GW(p)

EY might have a disproportionately high influence on us and our future. In this case I believe it is appropriate not to grant him certain rights, i.e. constrain his protection from criticism. He still has the option to ban people, further explain himself or just ignore it. But just censoring something that is by definition highly important and not stating any reasons for it makes me highly suspicious. Even more so if I'm told not to pursue this issue any further in the manner of a sacred truth you are not supposed to know.

comment by JamesAndrix · 2010-08-06T09:48:31.040Z · LW(p) · GW(p)

Sorry, I didn't mean to misrepresent consensus. Will edit shortly.

To be clear: I do have a lot of complaints with how this was and is being handled.

My intent is for others to not keep digging for the reasons I did, because I now think my reasons were not sufficient, given the reasons I had not to.

comment by Morendil · 2010-07-31T15:10:34.907Z · LW(p) · GW(p)

I don't post things like this because I think they're right, I post them because I think they are interesting. The geometry of TV signals and box springs causing cancer on the left sides of people's bodies in Western countries...that's a clever bit of hypothesizing, right or wrong.

In this case, an organization I know nothing about (Vetenskap och Folkbildning from Sweden) says that Olle Johansson, one of the researchers who came up with the box spring hypothesis, is a quack. In fact, he was "Misleader of the year" in 2004. What does this mean in terms of his work on box springs and cancer? I have no idea. All I know is that on one side you've got Olle Johansson, Scientific American, and the peer-reviewed journal (Pathophysiology) in which Johansson's hypothesis was published. And on the other side, there's Vetenskap och Folkbildning, a number of commenters on the SciAm post, and a bunch of people in my inbox. Who's right? Who knows. It's a fine opportunity to remain skeptical.

-- Jason Kottke

Replies from: cupholder, NancyLebovitz
comment by cupholder · 2010-07-31T16:49:15.551Z · LW(p) · GW(p)

Who's right? Who knows. It's a fine opportunity to remain skeptical.

Bullshit. The 'skeptical' thing to do would be to take 30 seconds to think about the theory's physical plausibility before posting it on one's blog, not regurgitate the theory and cover one's ass with an I'm-so-balanced-look-there's-two-sides-to-the-issue fallacy.

TV-frequency EM radiation is non-ionizing, so how's it going to transfer enough energy to your cells to cause cancer? It could heat you up, or it could induce currents within your body. But however much heating it causes, the temperature increase caused by heat insulation from your mattress and cover is surely much greater, and I reckon you'd get stronger induced currents from your alarm clock/computer/ceiling light/bedside lamp or whatever other circuitry's switched on in your bedroom. (And wouldn't you get a weird arrhythmia kicking off before cancer anyway?)
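
A back-of-the-envelope check of the non-ionizing point, with round numbers of my own choosing: the energy carried by a single TV-band photon falls millions of times short of typical molecular ionization energies.

```python
h = 6.626e-34   # Planck's constant, J*s
eV = 1.602e-19  # joules per electronvolt
f_tv = 600e6    # assumed UHF television carrier frequency, Hz

photon_energy_eV = h * f_tv / eV
ionization_eV = 10.0  # rough order of magnitude to ionize a biological molecule

print(f"energy per TV photon: {photon_energy_eV:.1e} eV")
print(f"shortfall vs ionization: ~{ionization_eV / photon_energy_eV:.0e}x")
# ~2.5e-06 eV per photon, about four million times too little to ionize anything.
```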

(As long as I'm venting, it's at least a little silly for Kottke to say he's posting it because it's 'interesting' and not because it's 'right,' because surely it's only interesting because it might be right? Bleh.)

Replies from: Morendil, rhollerith_dot_com
comment by Morendil · 2010-07-31T17:31:22.791Z · LW(p) · GW(p)

it's at least a little silly for Kottke to say he's posting it because it's 'interesting' and not because it's 'right'

Yup, that's the bit I thought made it appropriate for LW.

It reminded me of my speculations on "asymmetric intellectual warfare" - we are bombarded all day long with things that are "interesting" in one sense or another but should still be dismissed outright, if only because paying attention to all of them would leave us with nothing left over for worthwhile items.

But we can also note regularities in the patterns of which claims of this kind get raised to the level of serious consideration. I'm still perplexed by how seriously mainstream media takes claims of "electrosensitivity", but not totally surprised: there is something that seems "culturally appropriate" to the claims. The rate at which cell phones have spread through our culture has made "radio waves" more available as a potential source of worry, and has tended to legitimize a particular subset of all possible absurd claims.

comment by RHollerith (rhollerith_dot_com) · 2010-07-31T20:57:37.260Z · LW(p) · GW(p)

TV-frequency EM radiation is non-ionizing, so how's it going to transfer enough energy to your cells to cause cancer?

Neither that nor the rest of your 'graf is a decisive argument against a causal connection. And unless you can increase my probability that there are no subsystems of the human body that can resonate at TV frequencies, I will continue in my tentative belief that TV-frequency EM might still cause a problem even if sub-kilohertz EM does not.

Very good point about "It's interesting," though. "It's interesting" should be a good reason only to teenagers still learning to find pleasure in learning new science.

comment by NancyLebovitz · 2010-07-31T17:39:01.321Z · LW(p) · GW(p)

If breast cancer and melanomas are more likely on the left side of the body at a level that's statistically significant, that's interesting even if the proposed explanation is nonsense.

Replies from: Morendil
comment by Morendil · 2010-07-31T18:06:50.996Z · LW(p) · GW(p)

Even so, ISTM that picking through the linked article for its many flaws in reasoning would have been more interesting even than not-quite-endorsing its conclusions.

What I find interesting is the question, what motivates an influential blogger with a large audience to pass on this particular kind of factoid?

The ICCI blog has an explanation based on relevance theory and "the joy of superstition", but unfortunately (?) it involves Paul the Octopus:

We may get pleasure from having our expectations of relevance aroused. We often indulge in this pleasure for its own sake rather than for the cognitive benefits that only truly relevant information may bring. This, I would argue, is why, for instance, we read light fiction. This is why I could not resist the temptation of writing a post about Paul the octopus even before feeling confident that I had anything of relevance to say about it.

(ETA: note the parallel between the above and "I post these things because they are interesting, not because they're right". And to be lucid, my own expectations of relevance get aroused for the same reasons as most everyone else's; I just happen to be lucky enough to know a blog where I can raise the discussion to the meta level.)

comment by Armok_GoB · 2010-07-30T22:00:40.184Z · LW(p) · GW(p)

(So this is just about the first real post I've made here, and I kinda have stage fright posting here, so if it's horribly bad and uninteresting please tell me what I did wrong, ok? Also, I've been trying to figure out the spelling and grammar and failed, sorry about that.) (Disclaimer: This post is humorous, and not everything should be taken all too seriously! As someone (Boxo) reviewing it put it: "it's like a contest between 3^^^3 and common sense!")

1) My analysis of http://lesswrong.com/lw/kn/torture_vs_dust_specks/

Let's say 1 second of torture is -1 000 000 utilions. Because there are about 100 000 seconds in a day, and about 20 000 days in 50 years, that makes -2*10^15 utilions.

Now, I'm tempted to say a dust speck has no negative utility at all, but I'm not COMPLETELY certain I'm right. Let's say there's a 1/1 000 000 chance I'm wrong, in which case the dust speck is -1 utilion. That means the dust speck option is -1 * 10^-6 * 3^^^3, which is approximately -3^^^3.

-3^^^3 < -2*10^15, therefore I choose the torture.

2) The ant speck problem.

The ant speck problem is like the dust speck problem, except instead of 3^^^3 humans getting specks in their eyes, it's 3^^^3 ordinary ants, and it's a billion humans being tortured for a millennium.

Now, I'm bigoted against ants, and pretty sure I don't value them as much as humans. In fact, I'm 99.9999% certain I don't value ant suffering at all. The remaining probability space is dominated by the hypothesis that moral value is equal to 1000^[the number of neurons in the entity's brain] for brains similar to earth-type animals. Humans have about 10^11 neurons, ants have about 10^4. That means an ant is worth about 10^(-10^14) as much as a human, if it's worth anything at all.

Now let's multiply this together... -1 utilion * 10^(-10^14) discount * 1/10^6 that ants are worth anything at all * 1/10^6 that dust specks are bad * 3^^^3... That's about -3^^^3!

And for the other side: -2*10^15 for 50 years. Multiply that by 20, and then by the billion... about -4*10^25.

-3^^^3 < -4*10^25, therefore I choose the torture!

((*I do not actually think this, the numbers are for the sake of argument and have little to do with my actual beliefs at all.))
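
A quick check of the arithmetic in parts 1) and 2), using the post's own made-up numbers; 3^^^3 itself can't be written out, which is the whole point of the comparison.

```python
import math

torture_50y = -1e6 * 1e5 * 2e4            # part 1: -2e15 utilions for one person
torture_part2 = torture_50y * 20 * 1e9    # part 2: 1000 years, a billion people
print(f"part 1 torture side: {torture_50y:.0e}")    # -2e+15
print(f"part 2 torture side: {torture_part2:.0e}")  # -4e+25

tower = 3 ** 3 ** 3                        # 3^^3 = 7,625,597,484,987
digits_in_next = int(tower * math.log10(3)) + 1
print(f"3^^3 = {tower:,}; 3^^4 already has about {digits_in_next:.1e} digits")
# No discount factor expressible in ordinary scientific notation rescues the
# speck side from a factor of 3^^^3, so both comparisons come out "torture".
```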

3) Obvious derived problems: There are variations of the ant problem; can you work out and post what happens if...

  • The ants will only be tortured if all the protons in the earth also decay within one second of the choice; the torture, however, is certain?

  • Instead of ants, you have bacteria, with behaviour as complicated as the equivalent of 1/100 neurons?

  • The source you get the info from is unreliable, and there's only a 1/googol chance the specks could actually happen, while the torture, again, is certain?

  • All of the above?

Replies from: Vladimir_Nesov, jimrandomh, Wei_Dai
comment by Vladimir_Nesov · 2010-07-30T23:18:30.641Z · LW(p) · GW(p)

Let's say 1 second of torture is -1 000 000 utilions. Because there are about 100 000 seconds in a day, and about 20 000 days in 50 years, that makes -2*10^15 utilions.

Given some heavy utilitarian assumptions. This isn't an argument, it's more plausible to just postulate disutility of torture without explanation.

Replies from: Armok_GoB
comment by Armok_GoB · 2010-07-31T13:20:05.856Z · LW(p) · GW(p)

It's arbitrarily chosen, given the dust speck being -1; I find it easier to compare something that happens in less than a second to one second of torture than to years of it. It's just an example.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-31T13:30:52.496Z · LW(p) · GW(p)

It's just an example.

The importance of an argument doesn't matter for the severity of an error in reasoning present in that argument. The error might be unimportant in itself, but that it was made in an unimportant argument doesn't argue for the unimportance of the error.

Replies from: Armok_GoB
comment by Armok_GoB · 2010-07-31T17:45:10.913Z · LW(p) · GW(p)

Oh. I misinterpreted which error you were referencing. Yeah, you're right, I guess.

Sorry.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-31T17:51:43.403Z · LW(p) · GW(p)

And from this I can't infer whether communication succeeded or you are just making a social sound (not that it's very polite of me to remark this).

Replies from: Armok_GoB
comment by Armok_GoB · 2010-07-31T20:41:12.624Z · LW(p) · GW(p)

I first thought you had a problem with me making the number -1 000 000 up from nowhere. Later I realized you meant that to some people it might not be obvious that the utility of 50 years of torture is the average utility per second times the number of seconds.

comment by jimrandomh · 2010-07-31T14:27:07.264Z · LW(p) · GW(p)

I assign ants exactly zero utility, but the wild surge objection still applies - you can't affect the universe in 3^^^3 ways without some risk of dramatic unintended results.

Replies from: Armok_GoB
comment by Armok_GoB · 2010-07-31T20:44:43.443Z · LW(p) · GW(p)

My argument is that you ALMOST certainly don't care about ants at all, but that there is some extremely small uncertainty about what your values are. The same argument applies to the disutility of getting a dust speck in your eye.

comment by Wei Dai (Wei_Dai) · 2010-07-30T22:50:22.514Z · LW(p) · GW(p)

You might be interested in my post Value Uncertainty and the Singleton Scenario where I suggested (based on an idea of Nick Bostrom and Toby Ord) another way of handling uncertainty about your utility function, which perhaps gives more intuitive results in these cases.

Replies from: Armok_GoB
comment by Armok_GoB · 2010-07-30T23:08:54.846Z · LW(p) · GW(p)

I consider these results perfectly intuitive, why shouldn't they be? 3^^^3 is a really big number, it makes sense you have to be really careful around it.

comment by gwern · 2010-07-29T10:19:37.075Z · LW(p) · GW(p)

Sparked by my recent interest in PredictionBook.com, I went back to take a look at Wrong Tomorrow, a prediction registry for pundits - but it's down, and it doesn't seem to have been active recently.

I've emailed the address listed on the original OB ANN for WT, but while I'm waiting on that, does anyone know what happened to it?

Replies from: gwern, None
comment by gwern · 2010-07-30T07:18:12.031Z · LW(p) · GW(p)

I got a reply from Maciej Ceglowski today; apparently WT was taken down to free resources for another site. It's back up, for now.

(I have to say, seriously going through prediction sites is kind of discouraging. The free ones all seem to be marginal and very unpopular, while the commercial ones aren't usable in the long run and are too fragmented.)

comment by [deleted] · 2010-07-29T17:36:35.383Z · LW(p) · GW(p)

In relation to these sorts of sites, what's a normal level of success on this sort of thing for LW readers? If people chose ten things now that they thought were fifty percent likely to occur by the end of next week, would exactly five of them end up happening?

Replies from: gwern
comment by gwern · 2010-07-29T18:10:05.073Z · LW(p) · GW(p)

I don't know of any LWers who have used PB enough to really have a solid level of normal. My own PB stats are badly distorted by all my janitorial work.

I suspect not many LWers have put in the work for calibration; at least, I see very few scores posted at http://lesswrong.com/lw/1f8/test_your_calibration/

So, I couldn't say. It would be nice if we were all calibrated. (But incidentally you can be perfectly calibrated and not have 5/10 of 50% items happen; it could just be a bad week for you.)
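
For what it's worth, a small check of that parenthetical: even a perfectly calibrated forecaster gets exactly 5 of 10 fifty-percent predictions right only about a quarter of the time.

```python
from math import comb

n, k, p = 10, 5, 0.5
prob_exactly_half = comb(n, k) * p**k * (1 - p)**(n - k)
print(f"P(exactly 5 of 10 fifty-percent items happen) = {prob_exactly_half:.3f}")  # 0.246
```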

comment by Matt_Simpson · 2010-07-29T06:06:04.827Z · LW(p) · GW(p)

UDT/TDT understanding check: Of the 3 open problems Eliezer lists for TDT, the one UDT solves is counterfactual mugging. Is this correct? (A yes or no is all I'm looking for, but if the answer is no, an explanation of any length would be appreciated)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-29T18:01:48.719Z · LW(p) · GW(p)

Yes.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-29T18:04:49.705Z · LW(p) · GW(p)

So TDT fails on counterfactual mugging, as far as you understand it to work, and the reasoning I gave here is in error?

comment by Elias_Kunnas · 2010-07-28T08:59:14.712Z · LW(p) · GW(p)

Something I wonder about is just how many people on LW might have difficulties with the metaphors used.

An example: In http://lesswrong.com/lw/1e/raising_the_sanity_waterline/, I still haven't quite figured what a waterline is supposed to mean in that context, or what kind of associations the word has, and neither had someone else I asked about that.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-07-28T09:12:29.281Z · LW(p) · GW(p)

I think "waterline" here should be taken in the same context as "A rising tide floats all boats".

comment by [deleted] · 2010-07-28T01:30:57.290Z · LW(p) · GW(p)

Are there any Less Wrongers in the Grand Rapids area that might be interested in meeting up at some point?

Replies from: Psy-Kosh
comment by Psy-Kosh · 2010-08-01T05:45:56.712Z · LW(p) · GW(p)

Grand Rapids, MI, you mean?

I'm in Michigan, but West Bloomfield, so a couple hours away, but still, if we found some more MI LWers, maybe.

comment by jimrandomh · 2010-07-27T05:46:05.631Z · LW(p) · GW(p)

This is my PGP public key. In the future, anything I write which seems especially important will be signed. This is more for signaling purposes than any fear of impersonation -- signing a post is a way to strongly signal its seriousness.

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.7 (Cygwin)

mQGiBExOb4IRBAClNdK7kU0hDjEnR9KC+ga8Atu6IJ5pS9rKzPUtV9HWaYiuYldv
VDrMIFiBY1R7LKzbEVD2hc5wHdCUoBKNfNVaGXkPDFFguJ2D1LRgy0omHaxM7AB4
woFmm4drftyWaFhO8ruYZ1qSm7aebPymqGZQv/dV8tSzx8guMh4V0ree3wCgzaVX
wQcQucSLnKI3VbiyZQMAQKcEAI9aJRQoY1WFWaGDsAzCKBHtJIEooc+3+/2STL1R
0QVY/W6rBtJhSxiikBs70oVUt3+gzG2zw8HQMA+eF6ailRXyelUn6EUIm+OVPruh
3TiiNl2fVeF8CbmU08tseonPgcQXTKDXdD+/vqe2STF33Pl5h5fUfNISkho1+VFe
WplpA/wPRAHLKxBnRY42jn32s/XqTtTxii52kp0FELCe4X4Ya1tji9D7TEH4AU0A
wg5piyfrgDYTw+MvhI9KAL+NKLa4bgEe8dETZPl10TJt+zvdHqknIb92NjI8Vsb2
/gfEnT7iaJLC4eUcIExgKBaeq/TVsoelkHfN5h2y06mCKzA5CrQkSmFtZXMgQSBC
YWJjb2NrIDxqaW1AamltcmFuZG9taC5vcmc+iGAEExECACAFAkxOb4ICGwMGCwkI
BwMCBBUCCAMEFgIDAQIeAQIXgAAKCRBjbSzQYXDwqpgoAKCRNLiDtqetV8MXJm21
+GaPpkJa6QCeJ0rUccQbdGpwzmb6HDRd4lf5Uf25Ag0ETE5vghAIAIF8dtzaF81g
CywT5v8pxnyb/0cPLtv23vR98VPbmqZxbojrdltr6AOJM6FodlszBZmlCBX2bLP5
drpp9HZ/2g5O9VeWCqPbkAaZFRhMlSY6Zkq77q592+XSk9Bkb1JUWIMsEeoR6f8d
PVH986mgIFzOOZPn3L3H4v3sCnfaMvuSDZxNHh/s+vBYTTc87wrFNzv0c7WZKtum
1nildgrqK4nksMFg5+rnYdAhooSK27WERTd/WH2QNDXTyGN5HgyPKMz9tOVyEsTC
8JvVDTFjMDzfSFkQv58l6/66HgdBaEZkDbNsgL1Vw6RKDaWFTvt+bZmaLpOGDzOf
YtDZc2KkjxsAAwUH/2QV+AG0x0LR83oAESwpe1YSMDzYs1rP7OBAoTyZ8OAz+TqL
iY5v9MJmzu9XI/TXMs7kC1qCwV7tCCPExJeIQKCj9m6jiCDLPECV5bXC+AZ5t0dU
mIoBDHzJee2DOvO4gjzNM7gOMXi0drxCJGN5JRh5k7kByV9lF7yDZFWkJc7gUc9C
fQSdSnwZelxRDYMVPjnkwmNrOq0LPX27PlfVH+0YjAL1x87WTplfERd20eWk9ifA
1SBofuJlZsl1HFbY0zezgOvv6nJDANN9r/y77dbdV2DQJ2rXnTYGcpo9oA8o1/AF
AbGGgUr/0dJMMKrhpdsJZ77Mub3HRZEfEzUlFZaISQQYEQIACQUCTE5vggIbDAAK
CRBjbSzQYXDwquHMAJ43MgAxTE/2fsV+THKFJ1agjsHamACfVeL7pNlDC+fu3GbB
gZ24idwJE1o=
=PiOo
-----END PGP PUBLIC KEY BLOCK-----
Replies from: bogus, ata
comment by bogus · 2010-07-27T07:18:35.976Z · LW(p) · GW(p)

You may want to copy this key block to a user page on the LW wiki, where it can be easily referenced in the future.

Replies from: khafra
comment by khafra · 2010-07-27T17:45:22.169Z · LW(p) · GW(p)

That would also have the advantage of hopefully requiring different credentials to access, so it would be marginally harder to change the recorded public key while signing a forged post with it.

Replies from: bogus
comment by bogus · 2010-07-27T18:15:34.336Z · LW(p) · GW(p)

Not just harder; it would be all but impossible, since the wiki keeps a history of all changes (unlike LW posts) and jimrandomh is not a wiki sysop.

comment by ata · 2010-07-27T21:29:37.421Z · LW(p) · GW(p)

signing a post is a way to strongly signal its seriousness.

Telling people what you're trying to signal is a way to make them take your signaling less seriously.

Replies from: jimrandomh, SilasBarta, Psy-Kosh
comment by jimrandomh · 2010-08-02T15:49:14.967Z · LW(p) · GW(p)

It still works as a signal, because (1) signing a comment requires some extra effort, and (2) it is harder to retract a comment that has been signed (since the signature remains valid proof of authorship even if the original comment is edited or deleted). A little bit of real cost and utility goes a long way.

comment by SilasBarta · 2010-08-01T06:02:48.736Z · LW(p) · GW(p)

But PGP's security and quality pretty much make up for that loss in signaling seriousness, don't you think?

comment by Psy-Kosh · 2010-08-01T05:48:39.089Z · LW(p) · GW(p)

Unless you're signalling that you know about signalling including acknowledging your own signalling. :)

comment by WrongBot · 2010-07-26T21:12:53.743Z · LW(p) · GW(p)

Given all the recent discussion of contrived infinite torture scenarios, I'm curious to hear if anyone has reconsidered their opinion of my post on Dangerous Thoughts. I am specifically not interested in discussing the details or plausibility of said scenarios.

Replies from: jimrandomh
comment by jimrandomh · 2010-07-26T22:07:46.252Z · LW(p) · GW(p)

Yes. I previously believed that thinking a true statement could only be harmful by either leading to a false statement, stealing cognitive resources, or lowering confidence. I also believed that general rationality plus a few meditative tricks would be a sufficient and fully general defense against all such harms. I know better now.

comment by xamdam · 2010-07-26T09:50:33.949Z · LW(p) · GW(p)

Interesting thread on self-control over on reddit

http://www.reddit.com/r/cogsci/comments/ctliw/is_there_any_evidence_suggesting_that_practicing/

Replies from: cousin_it
comment by cousin_it · 2010-07-26T13:05:35.019Z · LW(p) · GW(p)

Yep - I'm having some fun there right now, my nick is want_to_want. Anyone knowledgeable in psych research, join in!

Replies from: xamdam
comment by xamdam · 2010-07-27T14:29:55.604Z · LW(p) · GW(p)

It is really quite sad. The issues people like Baumeister (and, say, Carol Dweck) are working on are highly instrumentally important, but even after all the defensive arguments made by gagaoolala (in the reddit thread) it's clear that the studies were just not done right - a decent study should not require this much defence. Each argument, even if it's excellent by itself, is an additional assumption, and the conjunction of assumptions takes away from the validity of the conclusion.

I wonder why this seems common in psychology: is it simply a lack of training, or other, more pernicious reasons?

comment by DanielVarga · 2010-07-24T19:51:29.944Z · LW(p) · GW(p)

Do you like the LW wiki page (actually, pages) on Free Will? I just wrote a post to Scott Aaronson's blog, and the post assumed an understanding of the compatibilist notion of free will. I hoped to link to the LW wiki, but when I looked at it, I decided not to, because the page is unsuitable as a quick introduction.

EDIT: Come over, it is an interesting discussion of highly LW-relevant topics. I even managed to drop the "don't confuse the map with the territory"-bomb. As a bonus, you can watch the original topic of Scott's post: His diavlog with Anthony Aguirre

Replies from: RobinZ
comment by RobinZ · 2010-07-26T15:44:44.732Z · LW(p) · GW(p)

The "Free Will" pages are fairly weak right now - I think it would be useful to rewrite at least the "Solution" page, and probably the question page as well.

comment by Alexandros · 2010-07-18T12:13:19.439Z · LW(p) · GW(p)

John Hari - My Experiment With Smart Drugs (2008)

How does everyone here feel about these 'Smart Drugs'? They seem quite tempting to me, but are there candidates that have been in use for a long time and are considered safe?

Replies from: NancyLebovitz, arundelo
comment by NancyLebovitz · 2010-07-20T07:55:16.108Z · LW(p) · GW(p)

It surprised me that he didn't consider taking provigil one or two days a week.

It also should have surprised me (but didn't-- it just occurred to me) that he didn't consider testing the drugs' effects on his creativity.

comment by arundelo · 2010-07-18T14:37:00.661Z · LW(p) · GW(p)

There's some discussion here and here.

comment by [deleted] · 2010-07-15T17:36:53.091Z · LW(p) · GW(p)

I figure the open thread is as good as any for a personal advice request. It might be a rationality issue as well.

I have incredible difficulty believing that anybody likes me. Ever since I was old enough to be aware of my own awkwardness, I have the constant suspicion that all my "friends" secretly think poorly of me, and only tolerate me to be nice.

It occurred to me that this is a problem when a close friend actually said, outright, that he liked me -- and I happen to know that he never tells even white lies, as a personal scruple -- and I simply couldn't believe him. I know I've said some weird or embarrassing things in front of him, and so I just can't conceive of him not looking down on me.

So. Is there a way of improving my emotional response to fit the evidence better? Sometimes there is evidence that people like me (they invite me to events; they go out of their way to spend time with me; or, in the generalized professional sense, I get some forms of recognition for my work). But I find myself ignoring the good and only seeing the bad.

Replies from: None, khafra, WrongBot, NancyLebovitz, JamesPfeiffer, mattnewport, ciphergoth
comment by [deleted] · 2010-07-17T01:44:27.911Z · LW(p) · GW(p)

Update for the curious: did talk to a friend (the same one mentioned above, who, I think, is a better "shrink" than some real shrinks) and am now resolved to kick this thing, because sooner or later, excessive approval-seeking will get me in trouble.

I'm starting with what I think of as homebrew CBT: I will not gratuitously apologize or verbally belittle myself. I will try to replace "I suck, everyone hates me" thoughts with saner alternatives. I will keep doing this even when it seems stupid and self-deluding. Hopefully the concrete behavioral stuff will affect the higher-level stuff.

After all. A mathematician I really admire gave me career advice -- and it was "Believe in yourself." Yeah, in those words, and he's a logical guy, not very soft and fuzzy.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-07-20T18:51:09.683Z · LW(p) · GW(p)

Here's my rationalist CBT: the things that depression tells you are way too extreme to be accurate - self-deluding is believing them, not examining them rationally.

Replies from: None
comment by [deleted] · 2010-07-20T19:27:11.237Z · LW(p) · GW(p)

sounds good.

Replies from: daedalus2u
comment by daedalus2u · 2010-07-26T13:13:31.067Z · LW(p) · GW(p)

I have exactly the same problem. I think I understand where mine comes from, from being abused by my older siblings. I have Asperger's, so I was an easy target. I think they would sucker me in by being nice to me, then when I was more vulnerable whack me psychologically (or otherwise). It is very difficult for me to accept praise of any sort because it reflexively puts me on guard and I become hypersensitive.

You can't get psychotherapy from a friend, it doesn't work and can't work because the friendship dynamic gets in the way (from both directions). A good therapist can help a great deal, but that therapist needs to be not connected to your social network.

Replies from: daedalus2u
comment by daedalus2u · 2010-07-26T13:15:53.935Z · LW(p) · GW(p)

The issues that are dealt with in psychotherapy are fundamentally non-rational issues. Rational issues are trivial to deal with (for people who are rationalists). The substrate of the issues dealt with in psychotherapy is feelings, not thoughts.

I see feelings as an analog component of the human utility function. That analog component affects the gain and feedback in the non-analog components. The feedback by which thoughts affect feelings is slow and tenuous and takes a long time and considerable neuronal remodeling. That is why psychotherapy takes a long time: the neuronal remodeling necessary to affect feelings is much slower than the neuronal remodeling that affects thoughts.

A common response to trauma is to dissociate and suppress the coupling between feelings and thoughts. The easiest and most reliable way to do this is to not have feelings because feelings that are not felt cannot be expressed and so cannot be observed and so cannot be used by opponents as a basis of attack. I think this is the basis of the constricted affect of PTSD.

comment by khafra · 2010-07-15T17:56:36.856Z · LW(p) · GW(p)

Alicorn's Living Luminously series covers some methods of systematic mental introspection and tweaking like this. The comments on alief are especially applicable.

comment by WrongBot · 2010-07-15T18:41:55.849Z · LW(p) · GW(p)

For what it's worth, this is often known as Imposter Syndrome, though it's not any sort of real psychiatric diagnosis. Unfortunately, I'm not aware of any reliable strategies for defeating it; I have a friend who has had similar issues in a more academic context and she seems to have largely overcome the problem, but I'm not sure as to how.

comment by NancyLebovitz · 2010-07-16T09:55:21.422Z · LW(p) · GW(p)

You might want to check out Learning Methods-- they've got techniques for tracking down the thoughts behind your emotions, and then looking at whether the thoughts make sense.

comment by JamesPfeiffer · 2010-07-23T03:48:06.749Z · LW(p) · GW(p)

I was like this from ages 12-18, perhaps? It started because quite a few people actually were mean to me, but my brain incorrectly extrapolated and assumed everyone was. The beginning of the end was when I started to do something that I had defined as the province of the liked-people (in this case, dating), though it took about two years to purge the habit.

Perhaps there is something you are similarly defining to imply likedness, and you can do that thing.

comment by mattnewport · 2010-07-15T18:00:48.514Z · LW(p) · GW(p)

Perhaps it would help to think about how you treat people you like vs. people you dislike and how you react to their flaws and faults. If you have a trusted friend you can talk to about this perhaps ask them about things they've done similar to your own self-perceived flaws (weird or embarrassing things you've said) - the friend you mention sounds like a good candidate. You might find that you didn't even notice these things, don't remember them or noticed them but didn't change your opinion significantly.

If you can see the symmetry with genuinely liking certain other people despite their imperfections perhaps it will be easier to appreciate how others can genuinely like you.

comment by Paul Crowley (ciphergoth) · 2010-07-15T17:44:14.170Z · LW(p) · GW(p)

If you don't mind my asking, have you tried any kind of "talking therapy"?

Replies from: None
comment by [deleted] · 2010-07-15T17:48:57.446Z · LW(p) · GW(p)

Oh god no. I'm very old-fashioned; still think of that as a recourse for the genuinely troubled or ill, not fortunate people like me.

Replies from: ciphergoth, Kevin, erratio
comment by Paul Crowley (ciphergoth) · 2010-07-16T05:25:29.454Z · LW(p) · GW(p)
comment by Kevin · 2010-07-16T05:39:39.535Z · LW(p) · GW(p)

For what it's worth, the stigma of seeing a mental health professional has basically vanished over the last ten years. Sometimes being in therapy is even a status symbol...

A therapist isn't necessarily better than honest conversation with a good friend, but it sounds like you have trouble having that kind of conversation with your friends. Of the different types of therapy, most have little evidence as to their efficacy, but there is a fair amount of evidence that cognitive behavior therapy works.

So I'll ask again -- why not try it? Being old fashioned isn't a very good reason.

comment by erratio · 2010-07-16T03:49:20.769Z · LW(p) · GW(p)

Does your sense of being unlikeable have an impact on your self-esteem or lifestyle? To paraphrase something I heard about these things, it's only a problem if you think that it's a problem

Anyway, I second the recommendation of the Luminosity sequence, and also this workbook; it covers a lot of the same material as talk therapy would, but you can work through it independently, without the need to impose on anyone else.

Replies from: None
comment by [deleted] · 2010-07-16T04:00:08.117Z · LW(p) · GW(p)

yeah, that's why I brought it up, it is a problem. Because I'll spend time being very unhappy that nobody "really" likes me, and sometimes do stupid things to seek approval. Thanks for the link.

comment by Paul Crowley (ciphergoth) · 2010-07-15T07:52:23.518Z · LW(p) · GW(p)

An object lesson in how not to think about the future:

http://www.futuretimeline.net/

(from Pharyngula)

Replies from: Christian_Szegedy, Alexandros
comment by Christian_Szegedy · 2010-07-26T23:33:31.473Z · LW(p) · GW(p)

2031 – Web 4.0 is transforming the Internet landscape

Could be funny, if it was a joke... :(

comment by Alexandros · 2010-07-16T10:56:42.592Z · LW(p) · GW(p)

Can you elaborate on what specifically you think they're doing wrong?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-07-16T11:20:01.478Z · LW(p) · GW(p)

I haven't looked in detail, but two things struck me at a glance: first, it's tremendously specific; second, the impact of technologies like brain emulation seems hugely understated.

Replies from: DanielVarga
comment by DanielVarga · 2010-07-25T06:51:33.780Z · LW(p) · GW(p)

The silly thing is that they present it as a timeline, but it is in fact an incoherent list of technological breakthroughs without really considering the interaction between them. It's like they had a nano writer, a climate writer and so on, all of them wrote a timeline, and the editors merged them in the end.

Replies from: Christian_Szegedy
comment by Christian_Szegedy · 2010-07-26T23:39:26.465Z · LW(p) · GW(p)

He he, poor WW2 veterans miss the deadline by just one year:

2044 - The last veterans of WW2 are passing away

2045 - Humans are becoming intimately merged with machines

Replies from: byrnema
comment by byrnema · 2010-07-26T23:47:13.785Z · LW(p) · GW(p)

When I'm (physically) driving down the street, I'd like to be able to right-click on a tree I see and find out what kind of tree it is. And who planted it (e.g., federal or state funds) and when, if I want to know.

I can't wait till then.

comment by Matt_Simpson · 2010-07-13T06:06:55.566Z · LW(p) · GW(p)

I just finished polishing off a top level post, but 5 new posts went up tonight - 3 of them substantial. So I ask, what should my strategy be? Should I just submit my post now because it doesn't really matter anyway? Or wait until the conversation dies down a bit so my post has a decent shot of being talked about? If I should wait, how long?

Replies from: None
comment by [deleted] · 2010-07-13T08:43:21.624Z · LW(p) · GW(p)

Definitely wait. My personal favorite timing is one day for each new (substantial) post.

comment by katydee · 2010-08-06T12:10:55.826Z · LW(p) · GW(p)

Either I misunderstand CEV, or the above statement re: the Abrahamic god following CEV is false.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-06T12:19:37.211Z · LW(p) · GW(p)

Coherent Extrapolated Volition

Long distance: An extrapolated volition your present-day self finds incomprehensible; not outrageous or annoying, but blankly incomprehensible.

This is exactly the argument religious people use to excuse any shortcomings of their personal FAI. Namely, their personal FAI knows better than you what's best for you AND everyone else.

What average people do is follow what is being taught here on LW: they decide based on their prior. Their probability estimates tell them that their FAI is likely to exist, and they make up excuses for extraordinary decisions based on its possible existence. That is, they support their FAI while trying to inhibit other uFAI, all in the best interest of the world at large.

Replies from: katydee, orthonormal
comment by katydee · 2010-08-06T21:50:37.569Z · LW(p) · GW(p)

The link and quotation you posted do not seem to back up your argument that the Abrahamic god follows CEV. Could you clarify?

Replies from: XiXiDu
comment by XiXiDu · 2010-08-07T08:49:02.721Z · LW(p) · GW(p)

It's not about it actually following CEV but about people believing that it does, that it acts in their best interest. Reasons are subordinate. It is the similar system of positive and negative incentives that I wanted to highlight.

I grew up in a family of Jehovah's Witnesses. I can assure you that all believed this to be the case.

Faith is considered the way to happiness.

Positive incentive:

“O you who believe! What is the matter with you, that, when you are asked to go forth in the Cause of Allâh, you cling heavily to the earth? Do you prefer the life of this world to the Hereafter? But little is the comfort of this life, as compared with the Hereafter.”

Whoever does right, whether male or female, and is a believer, We will make him live a good life, and We will award them their reward for the best of what they used to do. (Quran, 16:97)

Negative incentive:

"You dissipated the good things you had in your worldly life and enjoyed yourself in it. So today you are being repaid with the punishment of humiliation for being arrogant in the Earth without any right and for being deviators." (Surat al-Ahqaf: 20)

I could find heaps of arguments for Christianity that highlight the same belief of God knowing what's best for you and the world. This is what most people on this planet believe, and it is also the underpinning of the rapture of the nerds.

Replies from: katydee
comment by katydee · 2010-08-07T10:06:15.896Z · LW(p) · GW(p)

Ah, I understand-- except that I think the "negative incentive" element we're discussing is absurd, would obviously trigger failsafes with CEV as described, etc.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-07T10:46:27.793Z · LW(p) · GW(p)

There'll always be elements that suffer, that is, that subjectively perceive the FAI as uFAI.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-07T11:19:29.064Z · LW(p) · GW(p)

The whole buzz about the removed content is basically about a kind of incompleteness theorem of FAI. Those who seek to represent every element will reach a certain critical point. You can never represent all elements, never make everyone happy. You can maximize friendliness and happiness and continue to do so, but this journey will always be incomplete.

It's more artful than this and only affects a tiny minority, though. Nothing to worry about anyway; it's very unlikely in my opinion. So unlikely that I'd deliberately spit in its face without losing a night's sleep over it.

comment by orthonormal · 2010-08-06T21:57:49.225Z · LW(p) · GW(p)

Yahweh and the associated moral system are far from incomprehensible if you know the cultural context of the Israelites. It's a recognizably human morality, just a brutal one obsessed with purity of various sorts.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-07T09:17:32.312Z · LW(p) · GW(p)

It is not about the moral system being incomprehensible but about the acts of the FAI. Whenever something bad happens, religious people excuse it with an argument based on "higher intention". This is the gist of what I wanted to highlight: the similarity between religious people and the true believers in the technological singularity and AI. This is not to say it is the same; I'm not arguing about that. I'm saying that this might draw the same kind of people, committing the same kind of atrocities. This is very dangerous.

If people don't like something that is happening, i.e. don't understand it, it's claimed to be a means to an end that will ultimately benefit their extrapolated volition.

People are not going to claim this in public. But I know that there are people here on LW who are disposed to extensive violence if necessary.

To be clear, I do not doubt the possibilities talked about on LW. I'm not saying they are nonsense like the old religions. What I'm on about is that the ideas the SIAI is based on, while not being nonsense, are poised to draw the same fanatic fellowship and cause the same extreme decisions.

Ask yourself, wouldn't you fly a plane into a tower if that was the only way to disable Skynet? The difference between religion and the risk of uFAI makes it even more dangerous: this crowd is actually highly intelligent, and their incentives are based on more than fairy tales told by goatherds. And if dumb people are already able to commit large-scale atrocities based on such nonsense, what are a bunch of highly intelligent and devoted geeks who see a tangible danger able and willing to do? All the more so since, in this case, the very same people who believe it are the ones who think they must act themselves, because their God doesn't even exist yet.

Replies from: orthonormal
comment by orthonormal · 2010-08-07T14:34:40.678Z · LW(p) · GW(p)

Ask yourself, wouldn't you fly a plane into a tower if that was the only way to disable Skynet?

Yes. I would also drop a nuke on New York if it were the only way to prevent global nuclear war. These are both extremely unlikely scenarios.

It's very correct to be suspicious of claims that the stakes are that high, given that irrational memes have a habit of postulating such high stakes. However, assuming thereby that the stakes never could actually be that high, regardless of the evidence, is another way of shooting yourself in the foot.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-07T15:31:08.860Z · LW(p) · GW(p)

...assuming thereby that the stakes never could actually be that high, regardless of the evidence, is another way of shooting yourself in the foot.

I do not assume that. I've always been a vegetarian who's in favor of animal experiments. I'd drop a nuke to prevent less than what you described above.

comment by ata · 2010-07-27T21:30:51.733Z · LW(p) · GW(p)

Is it my imagination, or is "social construct" the sociologist version of "emergent phenomenon"?

comment by SilasBarta · 2010-07-27T20:53:02.549Z · LW(p) · GW(p)

Something weird is going on. Every time I check, virtually all my recent comments are being steadily modded up, but I'm slowly losing karma. So even if someone is on an anti-Silas karma rampage, they're doing it even faster than my comments are being upvoted.

Since this isn't happening on any recent thread that I can find, I'd like to know if there's something to this -- if I made a huge cluster of errors on a thread a while ago. (I also know someone who might have motive, but I don't want to throw around accusations at this point.)

Replies from: Rain, NancyLebovitz, xamdam
comment by Rain · 2010-07-30T17:36:25.636Z · LW(p) · GW(p)

I tend to vote down a wide swath of your comments when I come across them in a thread such as this one or this one, attempting to punish you for being mean and wasting people's time. I'm a late reader, so you may not notice those comments being further downvoted; I guess I should post saying what I've done and why.

In the spirit of your desire for explanations, it is for the negative tone of your posts. You create this tone by the small additions you make that cause the text to sound more like verbal speech, specifically: emphasis, filler words, rhetorical questions, and the like. These techniques work significantly better when someone is able to gauge your body language and verbal tone of voice. In text, they turn your comments hostile.

That, and you repeat yourself. A lot.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-30T17:59:59.883Z · LW(p) · GW(p)

Well, thanks for the explanation.

I think the "repeating myself" that you refer to is quite excusable, though, unless you have a better idea for what to do when people keep asking the same question that you've answered in the very first post.

In retrospect, I do regret a lot of those. I thought I was doing a favor by informing others about my experience with the advice that was being offered. At least everyone got a chance to show off how great they are, right?

Replies from: Rain
comment by Rain · 2010-07-30T18:09:31.565Z · LW(p) · GW(p)

I think the "repeating myself" that you refer to is quite excusable, though, unless you have a better idea for what to do

My favorite option is disengagement (stop replying), though its implementation is mood dependent. Another path I often take is a complete rework of the topic from a different viewpoint: either offer a different anecdote or a different metaphor for the same point. Explain from a different angle.

At least everyone got a chance to show off how great they are, right?

This is an example of the tone thing I was talking about.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-30T18:18:04.426Z · LW(p) · GW(p)

Have you ever imagined what it would be like not to understand why people react the way they do, all while others, never you, are given slack for having a worse understanding? And then to find that they're predicating their actions on misinformation that they refuse to update, no matter what you show them?

And if you have, are you proud of the way you've treated a person in that position?

Replies from: Rain, WrongBot, NancyLebovitz, Rain
comment by Rain · 2010-07-30T18:44:25.306Z · LW(p) · GW(p)

Since I consider these questions to be fairly loaded, I'm going to break them down and provide my own understanding of what they say before I give an answer. Please feel free to correct my analysis.

Have you ever imagined what it would be like not to understand why people react the way they do, all while others, never you, are given slack for having a worse understanding? And then to find that they're predicating their actions on misinformation that they refuse to update, no matter what you show them? And if you have, are you proud of the way you've treated a person in that position?

My rephrasing: Simulate a person, AM, who, when interacting with other people, is punished for their behavior. Assume that AM does not know why they are being punished. AM also witnesses other people performing substantially similar actions and not receiving the same punishment. AM then comes to the belief that these people are wrong in their application of punishment, and unwilling to admit they are wrong to punish, regardless of what actions AM takes to correct them. Now that you've simulated AM and gone through the scenario as described, take a moment to search through your memory for people who may fit the description who you, personally, interacted with. Did you treat them kindly or did you punish?

My answer: I punished. Having been in many positions where I don't understand why someone is reacting the way they are, I do not try to change them. Instead, I change myself, and my own understanding. I attempt to learn rather than teach. After more than a decade of applying this, I've become very adept at understanding their reasons. I am no longer AM. Please try to learn the reasons for why people react to you the way they do; it's the only way forward.

Here is an example: Someone tells AM that their tone is hostile. AM sees nothing wrong with their tone. However, they did not receive the response they desired, namely understanding of their points and worthwhile discussion. AM decides this is unacceptable and generates the hypothesis: "My tone is wrong." This is something that AM can change more easily than AM can change all of the people AM is attempting to converse with. AM then creates a series of tests and surveys to determine if this hypothesis is right or wrong, including changing their tone, and specifically asking people, "How so? Which part?" and other such questions. AM finds, and corrects, their problems with tone and continues to survey customers (conversation participants) on the quality of AM's product (conversation).

Here is the above example applied to this thread:

Me: I voted you down because you have a hostile tone.
You: At least everyone got a chance to show off how great they are, right?
Me: This is an example of the tone thing I was talking about.
You: How so?
Me: It's a rhetorical question designed to produce a slight snipe at the people who participated in the thread. I imagine a sneer on your face when I read it. I see no purpose for it other than to get in one last jab.
You: Oh. That's not how I meant it. I won't do that again.

Note that instead you posted even more rhetorical, loaded questions, one of which ("are you proud?") is specifically designed to create a negative emotional response, regardless of whether you understand that.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-30T19:14:51.488Z · LW(p) · GW(p)

... Did you treat them kindly or did you punish?

Okay, there seems to be a misunderstanding. There are more than two options here. By not recognizing this, you mistakenly conclude that:

1) Any problem I have can be fixed by an inversion (or cessation) of "whatever I was doing".

2) There is no alternative on my part to "do exactly as others order" or "try to fundamentally change them".

3) There is no alternative on your part to a simple "be nice" or "punish".

The last one is why I asked you if you were proud of what you did, and why I'm perplexed you took it as a loaded question. You'll notice that in this thread I said (something equivalent to saying that) I am not proud of something I did in the past. I don't see it as some kind of do-or-die moment. Rather, it just probes whether you adhere to a normal golden-rule-type ethic.

I would most certainly not be proud of having failed to impart my superior understanding to a member of my community who was suffering as a result of lacking that understanding. And if all I did was lurk in the shadows, vote them down long after the fact, and hope they'll just figure everything out as a result of an opaque punishment -- well, then I'd feel really bad, to the point of being sick, like I did after my behavior in the first link you gave.

If you have actual understanding of the situation I'm in, you can give more precise advice than, "whatever it was you were doing, don't". Not-X still leaves a huge portion of the space of possibilities. It leaves unresolved the question of when I should offer negative reports regarding someone's advice and several other issues. And I think if you had really been through a similar situation, you would understand why a mere "not-X" is unhelpful, as it does not identify the salient aspects of X that were bad, or what the superior alternative is. (And some of your comments suggest that the lack of superior alternative is no big deal.)

If you're trying to help, and have the empathy and understanding you claim, I think you should act like it, and I think that this is what the values you claim to hold suggest you should do as well. (Or maybe that counts as the dreaded, selectively-invoked "trying to change you"?)

Replies from: NancyLebovitz, Blueberry, Rain, whpearson, Rain, Rain, Rain, Rain
comment by NancyLebovitz · 2010-07-30T19:38:32.563Z · LW(p) · GW(p)

I'm beginning to see what you're up against.

I hope other folks will chime in if they disagree with me, but I'd say that "Are you proud of yourself?" is always an attack, and specifically a parent-to-child sort of attack at that.

If you believe an honest answer to a question is likely to leave the person answering it feeling really bad, then the question is an attack. At the same time, I think you're telling the truth when you say you're perplexed that it was taken as a loaded question.

I've got some guesses about what's going on with you, but it's getting into pretty personal territory. Let me know if you're interested, and if so, whether you'd prefer a public post or a private message.

Replies from: komponisto, SilasBarta
comment by komponisto · 2010-07-30T19:49:07.058Z · LW(p) · GW(p)

I hope other folks will chime in if they disagree with me, but I'd say that "Are you proud of yourself?" is always an attack, and specifically a parent-to-child sort of attack at that.

Agree, but Silas's actual question was

And if you have, are you proud of the way you've treated a person in that position?

which could be an honest inquiry.

Replies from: NancyLebovitz, komponisto
comment by NancyLebovitz · 2010-07-30T23:09:01.567Z · LW(p) · GW(p)

You're right about the quote.

However, I think asking people if they're proud of what they've done when you've made it clear you don't approve of it and they haven't shown signs of pride is still an attack, though a milder one.

comment by komponisto · 2010-07-30T20:26:09.416Z · LW(p) · GW(p)

Why on Earth was the parent voted down? For goodness' sake, I hadn't seen Silas's reply when I wrote it (as you can see by comparing the times).

(I'll delete this comment as soon as both it and the parent are at 0 or more -- so no need to downvote this comment if you sympathize about the parent but don't think I should have written this.)

comment by SilasBarta · 2010-07-30T19:49:07.093Z · LW(p) · GW(p)

Okay, fair point -- I did expect that Rain would report a "not proud", though did not regard it as a sort of parent-to-child attack (how would I know?).

It's just that some people seem to be so cold and calculating that I'm left wondering if there's any empathic similarity at all -- if they get the same feelings I do on being cruel, so I have to really "fall back a rank" (as I call it), and end up posing such questions.

(Yes, I know that sounds really cheesy and self-serving too, but ...)

Replies from: komponisto, jimrandomh, whpearson
comment by komponisto · 2010-07-30T20:09:38.396Z · LW(p) · GW(p)

It's just that some people seem to be so cold and calculating that I'm left wondering if there's any empathic similarity at all

I trust you're aware that this is an ironic complaint coming from someone who attributes his social problems to an autism-spectrum disorder?

In any event, I follow discussions like this with morbid fascination, because I sympathize and empathize with both you and your interlocutors. I think you make some really good points that need to be heard, but at the same time I completely understand the criticisms of your tone and manner. On the other hand, I find myself not infrequently tempted to speak in a similar tone, and it often takes a good deal of willpower on my part to avoid doing so; but then again, I would also probably react very negatively, even to the point of bitter resentment, if someone spoke in such a tone to me.

I genuinely don't know where I stand on this, so I suppose have to incline towards the group consensus, as reflected by the voting patterns. (Which necessitates, of course, that I not vote myself, so as to avoid a sort of information cascade.)

comment by jimrandomh · 2010-07-30T20:33:34.355Z · LW(p) · GW(p)

It's just that some people seem to be so cold and calculating that I'm left wondering if there's any empathic similarity at all

I think that's a consequence of distance. It's easier to be a jerk to someone, deliberately or accidentally, when they seem like just a username on a forum; it's harder to recognize that a conversation has gone awry when it's all text; and it's harder to back down and apologize when there are third parties watching.

In online conversations, the emotional palette for most people seems to be: detached, amused, or angry. All other emotions are rare exceptions in online discourse - not because people don't feel them, but because text written by far-away people can't easily bring them out.

Looking through this thread, I see lots of comments (both by you and at you) which seem to make detachment impossible. No one's telling jokes, either, so the remaining option is anger.

AAAAAAAAGH! Rage rage hulksmash!

Replies from: SilasBarta
comment by SilasBarta · 2010-07-30T20:40:18.708Z · LW(p) · GW(p)

Well, in your case, you've actually vidchatted with me, seen the actual, breathing human on the other side, found me to be more sociable than you expected

... and still didn't feel any more inclined to clean up the misrepresentations you made of me that you were aware of :-(

Replies from: jimrandomh, whpearson
comment by jimrandomh · 2010-07-30T20:59:40.906Z · LW(p) · GW(p)

I've gone back and edited the comment in question, and I apologize for not having done so earlier. (And, while this doesn't really justify my not having edited it earlier - the reason I didn't edit it then was that I still hadn't fully understood what happened. Revisiting it now, I noticed what I missed the last time around - namely, a full enumeration of the people who could've prevented the situation from blowing up in the first place, including myself.) I'm not sure whether linking to or summarizing that thread would be a net positive or negative, so I won't, but you can if you think making my edit visible is worth the chance that it derails the current conversation here.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-30T21:42:06.520Z · LW(p) · GW(p)

You still haven't corrected the critical error I'm referring to. For the fifth (yes, really fifth) time: you claimed that Alicorn (reasonably) asked me not to pester her. In reality, it was the unreasonable request that I not post comments nested under hers, no matter how relevant to the discussion they are, no matter how appropriate it is that my comment go there, no matter how impersonal the remark is.

I corrected you within minutes of the error. Others who noticed did the same. I corrected you when you made it again. I pointed it out to you in Skype chat. Then, after you talked to me in a vidchat, you still did nothing to correct that misinterpretation.

Now, two months later, having rung the beat-down-Silas bell as loud as you can, while you got significant karma from your noble teaching of the obvious to me, while the comment with the error remains at +20, while the numerous readers are left with the misinformation you fed them

... you still haven't corrected it. You still haven't done anything to unring that bell.

What, exactly, am I supposed to make of this? You can actually see the breathing human on the other side, see his actual feelings, have an actual conversation, and still feel good about yourself.

This, folks, is why I wonder about the empathic disconnect.

(Edit: And not that it matters to someone of your strong moral character, but the very reason I phrased the earlier comment in the way you objected to is that another equally-authoritative poster told me that would be the appropriate way to do it -- which you didn't mention then, or edit it to mention now.)

Replies from: jimrandomh
comment by jimrandomh · 2010-07-30T22:05:52.500Z · LW(p) · GW(p)

I've gone back and edited again. I'd forgotten how much weight you put on that detail; it seemed strange to me at the time, but since it was part of the original feedback loop, I guess I can see why now - you'd already had lots of time to get angry over that particular detail, and I went and rounded it off to something different.

Silas, not having corrected that really wasn't intended as a slight, although I see now that you took it that way. At the time, I interpreted it as you trying to deflect the conversation into (what I saw as) a minor detail. (And yes, you did try to tell me it wasn't minor; I brushed it off as motivated cognition, without properly considering the ramifications).

Replies from: HughRistik
comment by HughRistik · 2010-07-30T22:41:44.050Z · LW(p) · GW(p)

Silas, not having corrected that really wasn't intended as a slight, although I see now that you took it that way. At the time, I interpreted it as you trying to deflect the conversation into (what I saw as) a minor detail. (And yes, you did try to tell me it wasn't minor; I brushed it off as motivated cognition, without properly considering the ramifications).

This is an example of what I meant when I say that many intellectual issues aren't as obvious as stoplights.

comment by whpearson · 2010-07-30T20:51:37.155Z · LW(p) · GW(p)

Have you presented your thoughts on the mismatch between yourself and other people anywhere? I will happily delete my guess if it is not accurate.

I have a hard time knowing how to help if I don't know the problem.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-30T21:33:40.966Z · LW(p) · GW(p)

Send me a PM if you have a diagnosis. I don't think I've made general remarks on the mismatch anywhere, though I have pointed out when people, like jimrandomh, knowingly perpetuate lies about me after being repeatedly corrected.

comment by whpearson · 2010-07-30T20:17:59.292Z · LW(p) · GW(p)

I would like to help. And I may see your problem: you are very sensitive to insults from others but have a harder time seeing when you hurt others. But when you do see it, you are affected greatly.

I just have no idea how to help. My skin is probably overly thick. When I perceive insults, I tend to ignore them and downgrade the insulter as someone I want to interact with or help, rather than kick up a social fuss.

comment by Blueberry · 2010-07-31T02:09:33.476Z · LW(p) · GW(p)

As a suggestion, the phrase "my superior understanding" will rarely be well received, even in a hypothetical.

Do you ever feel angry or irritated when you post? Are you aware of when posts (yours as well as other people's) sound rude or hostile?

Replies from: SilasBarta
comment by SilasBarta · 2010-08-01T00:23:43.482Z · LW(p) · GW(p)

Did you read the context of the phrase, or just pattern-match for the rebuke? (No offense if you have, I just have to be careful because some do that.) If you read the context, could you give some helpful detail, specific to the context?

To your 2nd para, yes and yes. Do others do the same for the comments to me? Would it matter to you, or anyone, if they didn't?

comment by Rain · 2010-07-30T19:40:05.159Z · LW(p) · GW(p)

I'm stepping away from this conversation so I don't do anything drastic (that I haven't already deleted). Suffice to say you've succeeded once again in creating an enormous, out of proportion negative emotional response in someone who was genuinely trying to be objective and help you.

I know very few people as effective as you are at generating anger and frustration with the written word.

comment by whpearson · 2010-07-30T19:32:23.670Z · LW(p) · GW(p)

Compare this

The last one is why I asked you if you were proud of what you did, and why I'm perplexed you took it as a loaded question.

With this

I would most certainly not be proud of having failed to impart my superior understanding to a member of my community who was suffering as a result of lacking that understanding.

This seems like you were going to make a negative judgement if he claimed pride. Thus the question does seem loaded. Pride is one of those things that it is often not a good thing to claim.

It might well be that Rain thinks you had the same sort of problems he did(?) and was taking steps that would have helped him. But for whatever reason they don't help you.

Also try not to take karma loss too personally. It also signals to other readers of the site what is good or bad. It is simply a number in the database.

comment by Rain · 2010-07-30T20:34:57.604Z · LW(p) · GW(p)

Do you believe you have a tone problem?

Replies from: SilasBarta
comment by SilasBarta · 2010-07-30T21:44:32.599Z · LW(p) · GW(p)

Is that a rhetorical question, or do you want an answer?

Replies from: Cyan
comment by Cyan · 2010-08-01T00:33:58.525Z · LW(p) · GW(p)

I don't know about Rain, but I'd be interested to read your answer.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-01T00:42:40.770Z · LW(p) · GW(p)

I'm sorry, he's going to have to put up with the same delaying questions he wasted my time with if he wants an answer. If that's too much of a burden for him, well, I guess that resolves the golden rule mystery, doesn't it?

Edit: I'll PM you with the answer if you promise to keep it secret. If Rain wants an answer, he'll have to learn what it's like to be on the other side of his attitude. He's a big fan of opaque punishment learning, remember? He doesn't feel ashamed of treating others that way, or at least considers inquiries along that line to be unfairly loaded.

Seriously, guys, where was the moral indignation when Rain was being disrespectful? Or maybe he built up some "rapport" with each of you? Chatted about your families, flattery, etc.? Became part of your tribe?

Replies from: CronoDAS
comment by CronoDAS · 2010-08-01T00:51:55.027Z · LW(p) · GW(p)

You appear to be expressing disrespect. I do not find that appealing.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-01T00:54:48.777Z · LW(p) · GW(p)

So does Rain, but I guess that's no big deal.

Replies from: CronoDAS
comment by CronoDAS · 2010-08-01T04:37:23.656Z · LW(p) · GW(p)

So does Rain, but I guess that's no big deal.

Maybe I'm not understanding the situation, but I perceived a lot more confusion than disrespect coming from Rain in this thread.

::pauses::

At this point I'd normally try to give some sort of advice, but, to be honest, at this point I don't know what I might say that would be helpful to you, or if you would even want me to try. The best analogy I can come up with is that of a foreigner trying to adapt to a different language and culture, such as an American trying to learn Japanese and live in Japan; us "natives" have a devil of a time trying to explain to someone what we take for granted and don't actually have explicit knowledge of.

For example, this is an explanation of "politeness levels" in the Japanese language, as seen from an outsider's perspective:

Depending on who you are speaking to, your politeness level will be very different. The correct level of politeness depends on the age of the speaker, age of the person being spoken to, time of day, astrological sign, blood type, sex, whether they are Grass or Rock Pokemon type, color of pants, and so on. For an example of Politness Levels in action, see the example below.

Japanese Teacher: Good morning, Harry.
Harry: Good Morning.
Japanese Classmates: (gasps of horror and shock)

The above would most likely be followed by violent retching. The bottom line is that Politeness Levels are completely beyond your understanding, so don't even try. Just resign yourself to talking like a little girl for the rest of your life and hope to God that no one beats you up.

source

And from what I've read about the Japanese language, this isn't much of an exaggeration. (And the part about "talking like a little girl" is pretty much dead-on.) I think I've seen "Don't try; give up" being offered as serious advice for non-Japanese people trying to understand certain aspects of Japanese culture and etiquette.

To be blunt, you often sound like an asshole when you post. (And other people, including Rain, don't.) Now, you might sound like an asshole because you're like a clueless foreigner, as you have suggested. On the other hand, you might sound like an asshole when you post because you actually are an asshole. If you're a clueless foreigner, then at least there's hope, but "teaching you how to be Japanese" would require a lot of time, effort, and willingness to work through hurt feelings on the part of you and the person who acts as your teacher, and I don't know if I actually understand the subject well enough to explain it to somebody else. But if you really are an asshole, there probably isn't much point in trying.

And there's always the chance that you should just ignore this and I should shut up, because, despite everything, you do have more karma than me.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-01T06:00:17.972Z · LW(p) · GW(p)

So, to summarize: you expect me to understand what makes posts come off as disrespectful or not, even as you, the person asking that I be more respectful, lack such an understanding; and you like to make gratuitous references to Japanese.

Does that about cover it?

Replies from: CronoDAS, Morendil
comment by CronoDAS · 2010-08-01T07:26:08.000Z · LW(p) · GW(p)

Well, it's pretty close. Not perfect, but close.

Most people do seem to have a functional understanding of respectful speech, even if they can't articulate it verbally. (And knowing something without being able to explain it isn't weird. Most people, when learning to speak their first language, learn how to follow rules of grammar without being able to say what those rules are. Similarly, most people can't explain how to walk, either.) So if I had not interacted with you before, I would indeed expect that you would be able to distinguish between a respectful-seeming post and a disrespectful-seeming post, and be able to reliably produce respectful-seeming posts, even if you couldn't tell me how you did it. However, you have demonstrated that you are an unusual individual who has a "broken respectfulness detector", so to speak, so I no longer expect you to refrain from making disrespectful posts. I might wish that you were more respectful, but I might as well wish for a pony as well while I'm at it.

And I don't know if "disrespectful" is even the right word. "Asshole-ish" might be more accurate. And rather than "respectful" you might want to try being "meek", "submissive", "conciliatory", "pacifistic", or "deferential": write the way people speak when addressing someone of higher status than themselves.

(Aside: I apologize if I'm being rude, but you are a native English speaker, correct? If you weren't, that would help explain a lot of things...)

If you like, I could try to help you fix your "broken respectfulness detector", but I am uncertain as to how successful I would be, and I would obviously need your cooperation. (Math, I'm pretty confident that I can teach. Social skills, not so much.) Possibly the best I could do is try to train you like a neural network: give you lots of examples of both respectful and disrespectful behavior, and hope that you come to some sort of insight. For example, have you ever seen the television show House? Dr. House is about as close to the Platonic ideal of "asshole" as a fictional character can get. If your post sounds like something that Dr. House would say, then don't make it.

Also, a workaround might be to simply have someone else look at your posts before you make them, and tell you what kind of impression it would make.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-01T18:48:50.649Z · LW(p) · GW(p)

I don't watch House nor much TV at all.

I'm a native English speaker, but people often do say I sound "foreign" (usually German, for some reason) and that I speak with a more "intelligent" and "upper class" tone.

I remember messaging you a lot a while back, noting your eerie similarities to me in terms of personal experience. AFAICT, the only real differences between you and me are:

  • You show more restraint.
  • I'm less proactive about my Japanophilia.
  • I didn't give up after I found myself unemployed and living with my parents (though, ashamedly, it was more out of hatred that I didn't give up than any noble kind of willpower).

As for being less asshole-ish, I do believe I can pull it off with minimal effort. The problem is that I cannot be significantly less asshole-ish, while also

1) impressing on others the importance of updating in my direction, and/or
2) posting like everyone else does, i.e., if I made my posts less asshole-ish, I would have to avoid making posts like Rain's recent ones, due to mistakenly classifying them as asshole-ish.

Regarding 1), a lot of you believe that my tone of posting is likely to do the opposite, as it turns people off from agreeing with me. While that might be true for social issues like who-"likes"-Silas, I strongly disagree that it holds on substantive issues.

I've spent a lot of my internet posting career, years ago, being "nice" in arguments, which, yes, I'm capable of. I found that, rather than turn people on to my views, it simply legitimized, in their minds, the ridiculous positions they were taking, and allowed them to confidently go away believing it was just a case of "reasonable people disagreeing about a tough issue".

In contrast, when I used my regular, "asshole-ish" tone, then yes, at the time they resisted my point with all the rationalization they could muster. But shortly afterward, they'd quietly accept it without admitting defeat, and argue in favor of it later. For example, I'm famous, under a different name, among the Linux community, for my rudeness toward a Linux forum when I ran into problems trying to switch. I made a number of criticisms of the distro, which were predictably ridiculed.

But then a few years later, most everyone believes those criticisms are valid, but if I point out how I (under that name) made them long ago, giving stark, early insight into what Linux needed to do to gain widespread acceptance in the home PC market, all they do is sling mud at me. Yet at least one Linux consultant has saved the famous thread for use in showing clients why they shouldn't deploy a Linux distro without a reliable support contract.

Or for an example here, does anyone remember Daniel_Burfoot's brilliant epiphany about how to do AI the right way, and my asshole-ish criticisms thereof? And how he stubbornly disagreed every step of the way? Well, what happened to that series? It was abandoned midway.

So therein lies the problem: do I want to change minds, or do I want to be liked? Do I want to murder my karma to point out flaws in Alicorn's advice, or do I want to be "part of the tribe"? I think you know what decision I've made, and why you haven't done the same.

Replies from: HughRistik, cousin_it, CronoDAS, NancyLebovitz, pjeby, orthonormal, xamdam, Apprentice
comment by HughRistik · 2010-08-02T03:22:36.796Z · LW(p) · GW(p)

I've spent a lot of my internet posting career, years ago, being "nice" in arguments, which, yes, I'm capable of. I found that, rather than turn people on to my views, it simply legitimized, in their minds, the ridiculous positions they were taking, and allowed them to confidently go away believing it was just a case of "reasonable people disagreeing about a tough issue".

My experience is different. I used to post online in a more critical and uncompromising fashion. Yet over the years, I came around to a more pleasant and accommodating style, and I find that it works better. Even though I have to swallow things I would like to say on the spot, I often look back on the thread later and feel glad that I took the high road.

I can't convince everyone that I debate with, but I've managed to pull a bunch of people in my direction, which I don't think I could have accomplished with a more bombastic style. Furthermore, my view is that even if I can't convince a particular person I'm debating with, I can still convince the lurking fence-sitters.

Do I want to murder my karma to point out flaws in Alicorn's advice, or do I want to be "part of the tribe"?

In a top-level post that you will remember, I criticized Alicorn's advice in a way that only gained me karma. This sort of thing can be done.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-02T17:10:18.528Z · LW(p) · GW(p)

In a top-level post that you will remember, I criticized Alicorn's advice in a way that only gained me karma. This sort of thing can be done.

And it took, by my reckoning, over a year after you or I said anything to her before she finally admitted she might not have accounted for the full extent of the difference between her and women in general. Your posts serve as cover for Alicorn to say what she's thinking without making the concession to me as obvious.

comment by cousin_it · 2010-08-01T20:54:22.986Z · LW(p) · GW(p)

Why do you want to change minds? Is there any chance that you could abandon that value? Because I believe, paradoxically, that it would help you achieve that same value :-)

I'm probably better than you at general social skills, but the first drafts of my comments sometimes sound disturbingly like yours. Then I notice that excessive subconscious desire to change minds interfered with my clarity of thought, and rethink/rewrite the comment. I want the "ideal commenter me" to never care who said what, who's right and who's wrong, etc. My perfect comment should make a clear, correct, context-free statement that improves the discussion, and do absolutely nothing else. I consciously try to avoid saying things like "you're wrong", saying instead "the statement you propose doesn't work because..." or even better "such-and-such idea doesn't work because...".

Ironically, people do often change their minds when talking to me about topics I understand well. But I'm not setting out to do it. Actually I have an explicit moral system, worked out from painful experience, that says it's immoral for me to try to convince anyone of anything. I try to think correct thoughts and express them clearly, and let other people make conclusions for themselves.

Replies from: Wei_Dai, wedrifid, katydee, NancyLebovitz
comment by Wei Dai (Wei_Dai) · 2010-08-01T23:02:03.195Z · LW(p) · GW(p)

Actually I have an explicit moral system, worked out from painful experience, that says it's immoral for me to try to convince anyone of anything.

Can you explain what that painful experience was? Because other people seemed to have learned from their past experience that being "cocky" led to good results instead of bad.

(I know someone else who tried to participate on Less Wrong and stopped after being frequently downvoted due to apparent overconfidence, and his explanation was very similar to Silas Barta's, i.e., his style is effective in other online forums that he participates in.)

Replies from: cousin_it, cousin_it
comment by cousin_it · 2010-08-03T06:55:45.093Z · LW(p) · GW(p)

When my job and my family self-destructed at the same time, I realized that I had no major personal successes because I'd blindly believed in others' goals and invested all my effort in them. Then I looked over my past to find occurrences where I'd made others worse off by manipulating their motivations, and found plenty of such occurrences. So I resolved to cut this thing out of my life altogether, never be the manipulator or the manipulatee. This might be an overcorrection but I feel it's served me very well in the 4 years since I adopted it. A big class of negative emotions and unproductive behaviors is just gone from my life. Other people notice it too, making compliments that I'm "unusual" and exceptionally easy to be with.

Replies from: NancyLebovitz, TheValliant, Blueberry
comment by NancyLebovitz · 2010-08-03T07:18:00.083Z · LW(p) · GW(p)

How do you identify it when others are attempting to manipulate you?

Replies from: cousin_it
comment by cousin_it · 2010-08-03T07:34:42.714Z · LW(p) · GW(p)

This sort of question is always difficult to answer... How does one identify that a shoe is a shoe? I seem to have something like "qualia" for manipulation. Someone says something to me and I recognize a familiar internal "pull": a faint feeling of guilt, and a stronger feeling of being carefully maneuvred to do some specific action to avoid the guilt, and a very strong feeling that I must not respond in any way. Then I just allow the latter feeling to win. It took a big conscious effort at first, but by now it's automatic.

Replies from: Relsqui
comment by Relsqui · 2010-10-11T19:50:27.943Z · LW(p) · GW(p)

a very strong feeling that I must not respond in any way.

Has this caused you difficulty in social situations where a certain degree of manipulation is usually considered acceptable?

I'm thinking of cases where someone is signalling that they want a hug, or a compliment, or to be asked after. Certainly it would be nice if people stated their needs clearly in those situations, but a) that's not "normal" in our culture and a lot of people never consider it, b) it's sometimes very difficult even when you know it's an option, and c) ignoring people in those situations won't lead them to be clearer next time, it just makes it seem like you don't care about their distress.

Replies from: cousin_it
comment by cousin_it · 2010-10-11T19:57:02.835Z · LW(p) · GW(p)

Signaling that you want a hug isn't manipulation in my book, it's just nonverbal communication. But I can't be guilt-tripped into a hug or a compliment.

Replies from: Relsqui
comment by Relsqui · 2010-10-11T19:59:35.905Z · LW(p) · GW(p)

Fair enough. But I'm not sure where the lines are between those things. That question might be a good addition to this post.

comment by TheValliant · 2010-10-12T01:10:04.868Z · LW(p) · GW(p)

I am very glad to hear of someone else who had a similar experience and made a similar choice. While it may be an overreaction, I think that it is not an inappropriate way to live one's life.

Replies from: cousin_it
comment by cousin_it · 2010-10-12T01:11:55.286Z · LW(p) · GW(p)

Expand? I'd be interested to hear similar stories.

Replies from: TheValliant
comment by TheValliant · 2010-10-12T04:00:23.810Z · LW(p) · GW(p)

I had been socially maladjusted, but then found that I could be charming and manipulate people rather effectively. I took advantage of this for perhaps a year, but began to feel guilty for my manipulations. I began to realize I was changing who they were without their permission and without them being able to stop me.

Once I had realized that I was for all intents and purposes emotionally violating people, I swore it off entirely. If I cannot make my point and convince someone of something through the facts (or shared consensus, for debates that aren't based on facts), I stop.

I hope this was informative. If you have more detailed questions I would be glad to expand, but I haven't thought about this in a few years and I don't know that I summed it up completely.

comment by Blueberry · 2010-08-03T10:26:32.198Z · LW(p) · GW(p)

I'm a little confused at this: I understand not wanting to manipulate people, but why does that mean you can't (openly and honestly) try to persuade or convince someone? That doesn't seem necessarily manipulative.

Replies from: cousin_it, cousin_it
comment by cousin_it · 2010-08-03T18:35:41.153Z · LW(p) · GW(p)

The problem is, I was/am a very manipulative person by nature, so I really need the conscious overcorrection. Whenever I detect within myself a desire to change someone's opinion, I know how much it weakens my defense against making bad arguments. It's like writing emails late at night: in the process of doing it, I like the resulting text just fine, but I know from experience on a different level that I'm going to be ashamed when I reread it in the morning.

comment by cousin_it · 2010-08-03T13:31:36.269Z · LW(p) · GW(p)

W...wait a minute, are you trying to persuade me to start persuading people? :-)

comment by cousin_it · 2010-08-02T14:51:50.395Z · LW(p) · GW(p)

Can you explain what that painful experience was?

Okay, I'll try.

At the end of 2006 I had one pretty bad week: within that week I broke up with a girlfriend I'd been with for 7 years, the company I worked for fell apart, I lost the apartment I lived in, and my grandmother died of cancer while I was in the room. Subsequently I moved into the attic of my parents' cottage, surrounded myself with books, shut the door and went offline for two months to analyze everything.

I realized my past actions must have been suboptimal because the investments of effort into my job and family went up in smoke. I realized I didn't have any personal accomplishments. I realized I had to learn to ignore the desires that other people had about my life, and reach the results I want regardless of others' attempts to judge me. After that I scanned my past for occurrences where I made other people worse off by manipulating them, found lots and lots of such occurrences, and resolved to just throw this kind of thing out of my life altogether. There were also other heuristics I found, but no one's asking about those :-)

That particular story does have a happy ending. After two months I quickly found a new apartment (best place I'd ever lived, still living there now), a new job (most interesting job I'd ever had, still working there now), and more new girlfriends than I wanted at that point. And then spring came, other crazy things started happening and I started escalating the chaos like Harry in Eliezer's fanfic.

comment by wedrifid · 2010-08-07T14:03:05.334Z · LW(p) · GW(p)

Why do you want to change minds? Is there any chance that you could abandon that value? Because I believe, paradoxically, that it would help you achieve that same value

Another belief that is worth changing is 'conversations should be fair'. Having no expectations of others beyond bounded Machiavellian interaction can allow one to guide a conversation in a far more healthy direction.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-07T15:07:14.974Z · LW(p) · GW(p)

What do you mean by 'bounded Machiavellian interaction'?

Replies from: wedrifid
comment by wedrifid · 2010-08-08T07:36:12.805Z · LW(p) · GW(p)

I'm being terse to the point of being outright opaque, but I mean that if you expect people to try to maximise their own status (and power in general) in all their interactions, rather than trying to be reasonable or fair, then you will find conflict-laden conversations less frustrating. I say 'bounded' because people aren't perfect Machiavellian agents even when they try to be - you need to account for stupidity as well as political motivations.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-08T09:21:36.765Z · LW(p) · GW(p)

This reminds me of one of my heuristics-- if you claw at people, it's reasonable to expect them to claw back.

This doesn't mean "never claw at people". It just means "don't add being offended at them clawing back to the original reasons you had for clawing at them".

comment by katydee · 2010-08-06T21:56:47.618Z · LW(p) · GW(p)

I'm very interested in this system, as it matches some of my own recent moral insights. How exactly did you go about implementing it?

Replies from: cousin_it
comment by cousin_it · 2010-08-07T08:16:20.381Z · LW(p) · GW(p)

Do you have problems with detecting manipulation in yourself and others, or problems stopping it when you've detected it?

Replies from: katydee
comment by katydee · 2010-08-07T10:01:20.443Z · LW(p) · GW(p)

Depends on what you mean by manipulation. I can (obviously) easily detect falsehood in myself, and have more or less suppressed it. I can also easily detect and suppress "technical truth" answers and other methods of deception.

However, I think I need to work on detecting manipulation in others and resisting its effects. I'm pretty good at resisting flattery, but I'm sure that there are more subtle methods out there that I am unaware of and therefore susceptible to.

Replies from: cousin_it
comment by cousin_it · 2010-08-07T10:22:25.500Z · LW(p) · GW(p)

For me the biggest problem was guilt-trips, not flattery.

comment by NancyLebovitz · 2010-08-01T21:26:31.292Z · LW(p) · GW(p)

I think I used to be closer to that level of detachment, though it wasn't a matter of explicit morals so much as not being interested in modeling other people's minds. I think my current state is an improvement emotionally, but I don't know how it affects my ability to argue effectively.

However, there's at least one more reason not to be hooked on winning. I don't think I'm the only one who's more likely to be convinced if I'm getting evidence and argument that points in the same direction from more than one source. This means that various sources contributed, but no one of them was definitive in changing my mind.

comment by CronoDAS · 2010-08-01T19:52:24.574Z · LW(p) · GW(p)

So therein lies the problem: do I want to change minds, or do I want to be liked? Do I want to murder my karma to point out flaws in Alicorn's advice, or do I want to be "part of the tribe"? I think you know what decision I've made, and why you haven't done the same.

Now I understand better! You're Gilbert and Sullivan's Disagreeable Man! ;)

If you give me your attention, I will tell you what I am:
I'm a genuine philanthropist - all other kinds are sham.
Each little fault of temper and each social defect
In my erring fellow-creatures, I endeavour to correct.
To all their little weaknesses I open people's eyes,
And little plans to snub the self-sufficient I devise;
I love my fellow-creatures - I do all the good I can -
Yet everybody says I'm such a disagreeable man!
And I can't think why!

To compliments inflated I've a withering reply,
And vanity I always do my best to mortify;
A charitable action I can skilfully dissect;
And interested motives I'm delighted to detect.
I know everybody's income and what everybody earns,
And I carefully compare it with the income-tax returns;
But to benefit humanity, however much I plan,
Yet everybody says I'm such a disagreeable man!
And I can't think why!

I'm sure I'm no ascetic; I'm as pleasant as can be;
You'll always find me ready with a crushing repartee;
I've an irritating chuckle, I've a celebrated sneer,
I've an entertaining snigger, I've a fascinating leer;
To everybody's prejudice I know a thing or two;
I can tell a woman's age in half a minute - and I do -
But although I try to make myself as pleasant as I can,
Yet everybody says I'm such a disagreeable man!
And I can't think why!

comment by NancyLebovitz · 2010-08-01T20:25:41.262Z · LW(p) · GW(p)

I'd say that was a post which was convincing without being obnoxious.

You raise an interesting point. I think it's possible to be forceful and polite at the same time, but the rules for doing so are less obvious (at least to me) than the rules for being polite.

Anyone have ideas about how that combination works?

Replies from: Morendil, EStokes
comment by Morendil · 2010-08-02T12:38:31.763Z · LW(p) · GW(p)

One general rule is "be harsh on the issue and soft on the person" (from Getting to Yes).

For instance, "every single part of your post struck me as being either a factual mistake, flawed reasoning, or gratuitous allusion to an irrelevant topic" is forceful but (if actually sincere and backed up with argument) conveys no disrespect for the author. We're (almost) all human here, and so have brain farts every so often. Claiming that your interlocutor has made a mistake or a dozen is both fair and constructive.

By contrast, "so basically you like to make gratuitous references to Japanese culture" is insulting to your interlocutor, even as it leaves the issue unaddressed: you are implying (though not outright saying) that the allusion to Japanese culture was not relevant to the argument. The cooperative assumption is that your interlocutor thought otherwise, but you're implying that they brought up something irrelevant on purpose.

I can attest from personal experience that the rule works well in situations of negotiation, which definitely are about changing your mind (both yours and the interlocutor's, since if either refuses to budge, the negotiation will fail).

I doubt that being an asshole, in and of itself, ever helps.

Replies from: jimrandomh, NancyLebovitz
comment by jimrandomh · 2010-08-02T15:28:02.963Z · LW(p) · GW(p)

For instance, "every single part of your post struck me as being either a factual mistake, flawed reasoning, or gratuitous allusion to an irrelevant topic" is forceful but (if actually sincere and backed up with argument) conveys no disrespect for the author.

Agreed, but I think the respectfulness of this quote can be improved further, by replacing "your post" with "this post". It seems silly and doesn't change the semantic content at all, but de-emphasizing the connection between a post and its author by avoiding the second person serves to dampen status effects and make it easier for the other person to back down or withdraw from the conversation.

comment by NancyLebovitz · 2010-08-02T14:23:48.986Z · LW(p) · GW(p)

I've framed it as "treat everyone as though they're extremely thin-skinned egomaniacs", and at this point I'm experimenting with being a little less cautious, just for my own sanity.

However, it's true that a lot of people are very distracted by insults, and there's no point in saying that they should be tougher.

comment by EStokes · 2010-08-01T21:33:34.324Z · LW(p) · GW(p)

I think that starting off acting somewhat lower status and underplaying your confidence at the start, at least, can work. It makes it feel less like an attack on the other person and can maybe make it feel more like they're awesome rationalists for being so quick to see the evidence when it's presented (assuming, of course, that you're right.) And if you're wrong, again, it won't feel like an attack on them, and they'll be more likely to present why they're right in a way that shows your idea as an honest, easily-made mistake instead of harshly steamrolling your arguments and making you look like a dullard.

ETA: If you are right, but the person doesn't see it after your first comment, then the "awesome rationalist quickly accepting evidence" feeling can take a hit. To make them still feel that, it might be a good idea to extrapolate/present more points and apologize for not being clear. Just a quick "Ah, sorry, I wasn't clear. What I meant to say was blah blah blah" should work. Remember to stay deferential. And if they're right, and you didn't understand it, hopefully caution will have prevented it from escalating any. The more of a status war it becomes and the harder it is to save face, the harder it becomes to convince the other person and get them to agree. Though there's always the chance that you convince them but they won't admit it because they'll lose face.

("You" is used as one/anyone/people in general, of course.)

comment by pjeby · 2010-08-02T22:01:45.147Z · LW(p) · GW(p)

In contrast, when I used my regular, "asshole-ish" tone, then yes, at the time they resisted my point with all the rationalization they could muster. But shortly afterward, they'd quietly accept it without admitting defeat, and argue in favor of it later.

Note that this does not automatically mean that it was you who changed their minds. I've had similar experiences to you, but I just assume that it means reality in the long run is more convincing than I am in the short run. It's really pretty narcissistic to assume that you're changing anybody's mind about anything, regardless of what voice you use. ;-)

Also, your assertion that you have only two modes of discourse (ineffectual-nice or effective-asshole) is a false dichotomy. Aside from the fact that there are more than two ways to speak, it leaves out any evaluation of who the target audience is -- which is likely to have as much or more impact on the effective/ineffective axis than whether you're nice!

comment by orthonormal · 2010-08-01T22:33:51.198Z · LW(p) · GW(p)

I second cousin_it here. If your goal is to persuade (and especially if you care about persuading third parties), then your methods may be counterproductive even if they seem more effective to you.

(For a classical example, take the dialogues of Socrates: polite and deferential to a fault, never directly convincing the antagonist, but winning points in the eyes of observers until the antagonist is too shamed to continue. I don't find this to be the ideal of Internet argument, but it would be an improvement.)

It certainly feels better to berate a fool on the Internet than to be more detached, and you might indeed generate success stories from time to time; but you may actually be less effective at swaying the bulk of opinion. I find that I'm often hesitant to even read your long comments in the first place, due to their usual tone. It's your call how to behave on Less Wrong, but it's my call whether to downvote your comments, based on whether they represent the sort of discourse I want to see here.

Replies from: wedrifid
comment by wedrifid · 2010-08-07T13:58:25.936Z · LW(p) · GW(p)

I second cousin_it here. If your goal is to persuade (and especially if you care about persuading third parties), then your methods may be counterproductive even if they seem more effective to you.

Thirded. At times such methods give people an excuse to use even worse arguments and engage in more detrimental social-political gambits than they would otherwise have gotten away with. Observers are hesitant to intervene to penalise bullshit when to do so will affiliate them with a low status display. The dykes that maintain the sanity water level are damaged. Even when others wish to intervene on the side of reason in such cases it is extremely hard work. You have to be ten times more careful, unambiguous and polite than normal in order to achieve the same effect.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-07T15:01:21.222Z · LW(p) · GW(p)

It's more complicated than "courtesy = status". It's easiest for me to observe status online, and it seems to me that I've seen more high status people who flame occasionally than those who never do.

Replies from: wedrifid
comment by wedrifid · 2010-08-08T07:08:01.934Z · LW(p) · GW(p)

I agree, and my observations match yours.

comment by xamdam · 2010-08-02T16:33:28.389Z · LW(p) · GW(p)

I didn't give up after I found myself unemployed and living with my parents

Do you notice that you ooze aggression? Just look at the sentence above. It will clearly be interpreted as "intended to hurt" by 99% of observers. Update.

Some general advice on being more civil: in any post (or list of things) that you write, erase the last item before posting. It's usually the one you sneak aggression into. I think it will do wonders for your karma.

ETA: quickly edited to remove some of my own unproductive aggression

Replies from: Blueberry, CronoDAS, SilasBarta, wedrifid
comment by Blueberry · 2010-08-02T20:43:24.825Z · LW(p) · GW(p)

Though I'd agree that Silas oozes aggression sometimes, I didn't find that sentence "intended to hurt" at all, possibly because I'd read CronoDAS's previous comments discussing this issue.

comment by CronoDAS · 2010-08-02T20:12:55.511Z · LW(p) · GW(p)

For the record, I'm in no way offended by that particular remark.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-02T20:20:46.447Z · LW(p) · GW(p)

If I weren't trying to improve, I wouldn't add this preface before saying something like, "No one cares. They wanted a pretense; it's not like they were somehow looking out for you."

comment by SilasBarta · 2010-08-02T17:03:55.164Z · LW(p) · GW(p)

Normally, I wouldn't have said something like that, but Crono and I have talked about our lives, and he knows where I stand on that issue and has updated accordingly. I wasn't springing anything new on him by saying that.

This doesn't, of course, change the fact that I should have accounted for how people wouldn't be aware of this and would interpret it as meaner than it really is.

Replies from: xamdam
comment by xamdam · 2010-08-02T17:27:14.508Z · LW(p) · GW(p)

Normally, I wouldn't have said something like that, but Crono and I have talked about our lives, and he knows where I stand on that issue and has updated accordingly. I wasn't springing anything new on him by saying that.

You should also be aware that what you say to a person openly in private and what you say in public evoke different levels of emotion in the same recipient, especially if their private interpretation/rationalization of the facts will not be the public default. There is a special hell in some religions for embarrassing someone in public, and there is some sense to that.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-02T17:33:19.250Z · LW(p) · GW(p)

I've had the previous discussions in public too.

By the way, make sure to alert everyone else to the significance of public versus private criticism; they don't seem to be aware of it either, if the public criticism I've gotten here is any indication whatsoever.

Replies from: xamdam
comment by xamdam · 2010-08-02T18:36:05.551Z · LW(p) · GW(p)

public criticism != public embarrassment

The former might be highly useful, if one chooses to learn from it, the latter is almost never useful.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-02T18:47:15.149Z · LW(p) · GW(p)

Then maybe Crono shouldn't have brought it up repeatedly?

(And the things I was referring to in my previous remarks crossed over into embarrassment, despite also being criticisms.)

Replies from: xamdam
comment by xamdam · 2010-08-02T18:56:09.641Z · LW(p) · GW(p)

You have a point; I was not aware of it. Still, his choice, not yours. If he brought up being fat repeatedly, do you think he'd be ok with you doing the same? I think not, unless he said otherwise.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-02T19:00:58.963Z · LW(p) · GW(p)

If he said (and this is all public), "You know, one of my vices is that I let myself get fat. But I don't really care. I actually prefer my present lifestyle; I'm quite happy this way."

And then I said, "I used to be that way, but then I decided to lose weight and succeeded."

And then he said, "Meh. Whatever works for you, I guess."

"I know some tricks..." "Not interested."

And then a year later I said, "One difference between you and me is that when I was fat, I didn't decide to stay that way."

What would you think? Because that's what happened, once you carry the transformation through.

Replies from: xamdam
comment by xamdam · 2010-08-02T19:06:16.535Z · LW(p) · GW(p)

Like I said, you have a point (in karma, too ;). I retract this line of objection, but I will talk to him IRL next time to confirm. The perception issue still stands.

comment by wedrifid · 2010-08-07T13:39:53.436Z · LW(p) · GW(p)

Do you notice that you ooze aggression? Just look at the sentence above. It will clearly be interpreted as "intended to hurt" by 99% of observers. Update.

Add me as a yet another example of an observer who doesn't share that interpretation.

Also add me as an example of an observer who considers your demand to update patronising and itself an instance of social aggression.

comment by Apprentice · 2010-08-01T23:16:33.981Z · LW(p) · GW(p)

Or for an example here, does anyone remember Daniel_Burfoot's brilliant epiphany about how to do AI the right way, and my asshole-ish criticisms thereof?

Your criticisms there were firm but not all that asshole-ish and they generated good karma for you. But apropos of that:

there are grammatical errors that children never make in any language

Have you read Pullum's 2002 article Empirical assessment of stimulus poverty arguments? That seems to me to be a crushing refutation (asshole-ish refutation?) of the usual Chomskyan Poverty of Stimulus argument. Which is not to say that a good POS argument isn't possible but I haven't seen one yet.

comment by Morendil · 2010-08-01T09:49:24.586Z · LW(p) · GW(p)

you like to make gratuitous references to Japanese

This is you being an asshole. The earlier parts of the same comment are OK.

One thing that often makes your comments come off as asshole-ish is deliberate violation on your part of the Gricean cooperative principle, in its form as an assumption about other people's communications. You interpret your own communications in the best possible light and your interlocutors' in the worst.

comment by Rain · 2010-08-02T19:47:51.102Z · LW(p) · GW(p)

I don't understand social cues. I created an internal GLUT for social interaction, the same way I learned the English language and grammar. It took me a very long time, a lot of effort, and a willingness to update rapidly and on the fly.

My primary tools are the questions, "How so? You mean ___? Which part? What do you mean? What?" (short, open-ended interjections to provide free range for expansion), a judgment of when to use them, listening to and cataloging people's statements about their social reactions (in the same way one might learn what types of food people like), and the ability to mirror the tone and body language of the person or group I'm conversing with (studying people who have expressive faces and voices helped with this).

You seem to enjoy your troll-mode statements, and seem to think it may be effective in generating conversions to truth. I believe the downvotes you receive show otherwise. I much prefer your comments-not-about-people-directly. I flinch when I think this is another "not-X" statement, but sometimes not-X is appropriate advice: when extra action is being taken (such as including specific insults against targeted individuals), removing that action can improve the situation, leaving the rest of the core intact. If someone put a tilde after every period, telling them not-tilde would be the proper thing to do.~

If you have any questions regarding my methods, I would be happy to answer them to the best of my ability, but from your response to other offers, I place a low probability on it being what you want (apparently a fully generalized understanding of human social behavior).

comment by Rain · 2010-07-30T19:38:21.783Z · LW(p) · GW(p)

The fact that you're making me incredibly angry here just demonstrates the power your tone has.

I'd like you to please stop replying to me, ever. And take the fact that two people in a community of rationalists have asked you to do that as pretty strong evidence that you have a major problem.

comment by Rain · 2010-07-30T19:33:50.651Z · LW(p) · GW(p)

I'm done with you.

comment by WrongBot · 2010-07-30T19:27:45.019Z · LW(p) · GW(p)

I have been having this experience quite often, lately.

The solution I'm currently attempting is to stop myself from ever assigning blame, and then treating my failure to communicate/explain as a very difficult and interesting problem (which it is). There are people who consistently do much better than me at solving this problem, so my audience's failure or lack thereof is irrelevant to the possibility for improvement available to me.

I haven't completely implemented this mindset yet, but it seems to be helping so far.

Replies from: HughRistik, SilasBarta
comment by HughRistik · 2010-07-30T21:29:11.270Z · LW(p) · GW(p)

I haven't completely implemented this mindset yet, but it seems to be helping so far.

Yes. Your responses to criticism are generally measured (even though perhaps you would have a right to be a bit less measured), and usually make you look better in terms of signaling. You maintain the high ground, and don't dig yourself into holes.

The attitude you have recently displayed in your posting history in response to criticism could serve as an example to Silas of what to do (rather than people just telling him what not to do). Of course, your approach isn't the only way, but it contains elements that could be useful.

comment by SilasBarta · 2010-07-30T19:29:04.816Z · LW(p) · GW(p)

Thanks for the pointer. (Be sure to tell Rain why you didn't regard the questions as loaded.)

Replies from: WrongBot
comment by WrongBot · 2010-07-30T20:37:14.109Z · LW(p) · GW(p)

I did regard them as loaded questions. That characteristic is orthogonal to their accuracy.

But even if I didn't, your parenthetical has not helped you. It was an attempt to score points, and those points will not make you stronger. That dig at Rain has not helped you communicate or explain anything other than your disdain; you are not trying to solve the problem.

comment by NancyLebovitz · 2010-07-30T19:18:04.343Z · LW(p) · GW(p)

Here's something that's only a suggestion-- I think it's worked pretty well for me, but I don't have the same emotional habits you do.

Use punishment only as a last resort. It's a very inexact tool for getting what you want, or even for avoiding what you don't want.

The fact that you're angry isn't going to make punishment into a better tool.

When other people used punishment to get what they wanted from you, it didn't work for them, did it?

Replies from: SilasBarta
comment by SilasBarta · 2010-07-30T19:25:50.144Z · LW(p) · GW(p)

When other people used punishment to get what they wanted from you, it didn't work for them, did it?

Depends. Do you count honking at me when I don't realize a light has changed? That certainly "worked". I'd do the same to others, and I expect them to do the same to me.

(Btw, this is a case where some comments -- not necessarily the one I'm replying to -- might be better made as a private message.)

Replies from: HughRistik
comment by HughRistik · 2010-07-30T21:23:49.363Z · LW(p) · GW(p)

Depends. Do you count honking at me when I don't realize a light has changed? That certainly "worked". I'd do the same to others, and I expect them to do the same to me.

Many people don't see the intellectual issues to be as clear-cut as traffic lights, especially in complex discussions. If someone honks at you on the road and it isn't obvious that you are making a driving error, it's going to get your back up, right? Same thing in intellectual discussions. Even if it should be obvious to someone that they are making an intellectual error, it's not always the right move to honk at them immediately.

comment by Rain · 2010-07-30T18:20:27.208Z · LW(p) · GW(p)

Do you want answers to those questions, or are they purely rhetorical?

Replies from: SilasBarta
comment by SilasBarta · 2010-07-30T18:23:30.764Z · LW(p) · GW(p)

Yes, I want answers.

Replies from: daedalus2u
comment by daedalus2u · 2010-07-30T18:32:32.135Z · LW(p) · GW(p)

This is how people with Asperger's or autism experience interacting with people who are neurotypically developed (for the most part).

comment by NancyLebovitz · 2010-07-28T01:23:40.932Z · LW(p) · GW(p)

This reminds me of something I mentioned as an improvement for LW a while ago, though for other reasons-- the ability to track all changes in karma for one's posts.

comment by xamdam · 2010-08-02T14:37:09.201Z · LW(p) · GW(p)

I see this as a feature request - would be great to have a view of your recent posts/comments that had action (karma or descendant comments). (rhetorically) If karma is meant as feedback, this would be a great way to get it.

comment by simplicio · 2010-07-26T05:01:49.990Z · LW(p) · GW(p)

So I was pondering doing a post on the etiology of sexual orientation (as a lead-in to how political/moral beliefs lead to factual ones, not vice versa).

I came across this article, which I found myself nodding along with, until I noticed the source...

Oops! Although they stress the voluntary nature of their interventions, NARTH is an organization devoted to zapping the fabulous out of gay people, using such brilliant methodology as slapping a rubber band against one's wrist every time one sees an attractive person with the wrong set of chromosomes. From the creators of the rhythm method.

Look at that article, though. And look at the site's mission statement, etc. while you're at it. The reason I posted this is because I was disturbed by how well the dark side has done here, rhetorically. And also by how they have used true facts (homosexuality is definitely not even close to 100% innate) to argue for something which is (1) morally questionable at best, given the possibility of coercion and the fact that you're fixing something that's not broken, (2) not even efficacious (no, I am not thrilled with that source).

Replies from: WrongBot
comment by WrongBot · 2010-07-26T18:03:09.855Z · LW(p) · GW(p)

For what it's worth, rubber band snapping is a pretty popular thought-stopping technique in CBT for dealing with obsessive-type behaviors, though I believe there's some debate over how effective it is. I know it's been used to address morbid jealousy, though I don't know to what extent or if more scientific studies have been conducted.

comment by Peter_Lambert-Cole · 2010-07-23T18:06:33.090Z · LW(p) · GW(p)

There is something that bothers me and I would like to know if it bothers anyone else. I call it "Argument by Silliness".

Consider this quote from the Allais Malaise post: "If satisfying your intuitions is more important to you than money, do whatever the heck you want. Drop the money over Niagara Falls. Blow it all on expensive champagne. Set fire to your hair. Whatever."

I find this to be a common end point when demonstrating what it means to be rational. Someone will advance a good argument that correctly computes/deduces how you should act, given a certain goal. In the post quoted above, that would be maximizing your money. And in order to get their point across, they cite all the obviously silly things you could otherwise do. To a certain extent, it can be more blackmail than argument, because your audience does not want to seem a fool and so he dutifully agrees that yes, it would be silly to throw your money off of Niagara Falls and he is certainly a reasonable man who would never do that so of course he agrees with you.

Now, none of the intelligent readers on LW need to be blackmailed this way because we all understand what rationality demands of us and we respond to solid arguments not rhetoric. And Eliezer is just using that bit of trickery to get a basic point across to the uninitiated.

But the argument does little to help those who already grasp the concept improve their understanding. Absurdity does not mean you have correctly implemented a "reductio ad absurdum" technique. You have to be careful because he appealed to something that is self-evidently absurd, and you should be wary of anything considered self-evident. Actually, I think it is more a case of being commonly accepted as absurd, but you should be just as wary of anything commonly accepted as silly. And you should be careful about cases where you think it is the former but it's actually the latter.

The biggest problem, however, is that silly is a class in which we put things that can be disregarded. Silly is not a truth statement. It is a value statement. It says things are unimportant, not that they are untrue. It says that according to a given standard, this thing is ranked very low, so low in fact that it is essentially worthless.

Now, disregarding things is important for thinking. It is often impossible to think through the whole problem, so we at first concern ourselves with just a part and put the troublesome cases aside for later. In the Allais Malaise post, Eliezer was concerned just with the minor problem of "How do we maximize money under these particular constraints?" and separating out intuitions was part of having a well-defined, solvable problem to discuss.

But the silliness he cites only proves that the two standards - maximizing money and satisfying your intuitions - conflict in a particular case. It tells you little about any other case or the standards themselves.

The point I most want to make is "Embrace what you find silly," but this comment has gone on very long, so I am going to break it up into several postings.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-23T19:22:24.325Z · LW(p) · GW(p)

Yeah-- argument by silliness (I think I'd describe it as finding something about the argument which can be made to sound silly) is one of the things I don't like about normal people.

Replies from: Peter_Lambert-Cole
comment by Peter_Lambert-Cole · 2010-07-23T19:50:50.106Z · LW(p) · GW(p)

That's why it can be such an effective tactic when persuading normal people. You can get them to commit to your side and then they rationalize themselves into believing it's truth (which it is) because they don't want to admit they were conned.

comment by Eneasz · 2010-07-23T15:40:20.638Z · LW(p) · GW(p)

Luke Muehlhauser just posted about Friendly AI and Desirism at his blog. It tends to have a more general audience than LW, comments posted there could help spread the word. Desirism and the Singularity

comment by Alexandros · 2010-07-23T12:20:45.877Z · LW(p) · GW(p)

Desirism and the Singularity, in which one of my favourite atheist communities is inching towards singularitarian ideas.

comment by xamdam · 2010-07-22T16:17:00.250Z · LW(p) · GW(p)

Looks like Emotiv's BCI is making noticeable progress (from the Minsky demo)

http://www.ted.com/talks/tan_le_a_headset_that_reads_your_brainwaves.html

but still using bald guys :)

comment by NancyLebovitz · 2010-07-17T09:38:35.497Z · LW(p) · GW(p)

Do the various versions of the Efficient Market Hypothesis only apply to investment in existing businesses?

The discussions of possible market blind spots in clothing makes me wonder how close the markets are to efficient for new businesses.

comment by ellx · 2010-07-14T10:30:16.837Z · LW(p) · GW(p)

I'm curious what people's opinions are of Jeff Hawkins' book 'On Intelligence', and specifically the idea that 'intelligence is about prediction'. I'm about halfway through and I'm not convinced, so I was wondering if anybody could point me to further proofs of this or something, cheers.

Replies from: nhamann, gwern
comment by nhamann · 2010-07-15T19:41:39.255Z · LW(p) · GW(p)

With regards to further reading, you can look at Hawkins' most recent (that I'm aware of) paper, "Towards a Mathematical Theory of Cortical Micro-Circuits". It's fairly technical, however, so I hope your math/neuroscience background is strong (I'm not knowledgeable enough to get much out of it).

You can also take a look at Hawkins' company Numenta, particularly the Technology Overview. Hierarchical Temporal Memory is the name of Hawkins' model of the neocortex, which IIRC he believes is responsible for some of the core prediction mechanisms in the human brain.

Edit: I almost forgot, this video of a talk he presented earlier this year may be the best introduction to HTM.

comment by gwern · 2010-07-14T10:56:19.367Z · LW(p) · GW(p)

Intelligence-as-prediction/compression is a pretty familiar idea to LWers; there are a number of posts on them which you can find by searching, or you can try looking into the bibliographies and links in:

(I have no comments anent On Intelligence specifically. I remember it as being pretty vague as to specifics, and not very dense at all - unobjectionable.)

comment by Alexandros · 2010-07-14T10:06:42.378Z · LW(p) · GW(p)

I was examining some of the arguments for the existence of god that separate beings into contingent (exist in some worlds but not all) and necessary (exist in all worlds). And it occurred to me that if the multiverse is indeed true, and its branches are all possible worlds, then we are all necessary beings, along with the multiverse, a part of whose structure we are.

Am I retreating into madness? :D

Replies from: h-H
comment by h-H · 2010-07-18T09:07:44.109Z · LW(p) · GW(p)

Taking this too seriously, eh? But note it's impossible for us to exist in all branches.

Replies from: Alexandros
comment by Alexandros · 2010-07-18T12:07:38.189Z · LW(p) · GW(p)

I'm not saying we exist in all branches, just that all branches are necessary, and therefore we are necessary also. Essentially I'm saying that everything that is actual is necessary.

comment by twanvl · 2010-07-13T16:35:26.029Z · LW(p) · GW(p)

When thinking about my own rationality I have to identify problems. This means that I write statements like "I wait too long before making decisions, see X, Y". Now I worry that by stating this as a fact I somehow anchor it more deeply in my mind, and make myself act more in accordance with that statement. Is there actually any evidence for that? And if so, how do I avoid this problem?

Replies from: erratio
comment by erratio · 2010-07-15T00:42:14.869Z · LW(p) · GW(p)

I don't have any references on hand but cognitive behaviour therapy definitely frowns on people describing themselves using absolute statements like that.

I would advise reframing it in a way that makes it clear that your undesirable behaviour is something that you do some of the time or that you did in the past but try not to do now, to avoid reinforcing any underlying beliefs, for example, that you are the kind of person who is bad at making timely decisions. Even better would be reframing to include some kind of resolution about how you will go about making more timely decisions in the future, even if it's just a resolution to try to be more aware of when you're putting off a decision.

comment by Blueberry · 2010-08-06T12:30:14.364Z · LW(p) · GW(p)

If an AI does what Roko suggested, it's not friendly. We don't know what, if anything, CEV will output, but I don't see any reason to think CEV would enact Roko's scenario.

Replies from: army1987, cousin_it, XiXiDu
comment by A1987dM (army1987) · 2012-03-03T22:26:01.511Z · LW(p) · GW(p)

Until about a month ago, I would have agreed, but some posts I have since read on LW made me update the probability of CEV wanting that upwards.

Replies from: Blueberry
comment by Blueberry · 2012-03-25T00:54:50.323Z · LW(p) · GW(p)

Really, please explain (or PM me if it would require breaking the gag rule on Roko's scenario). Why would CEV want that?

Replies from: wedrifid, TimS
comment by wedrifid · 2012-03-25T02:40:58.044Z · LW(p) · GW(p)

Why would CEV want that?

Because 'CEV' must be instantiated on a group of agents (usually humans). Some humans are assholes. So for some value of aGroup, CEV does assholish things. Hopefully the group of all humans doesn't create a CEV that makes FAI> an outright uFAI from our perspective but we certainly shouldn't count on it.

Replies from: Blueberry
comment by Blueberry · 2012-03-25T03:18:10.095Z · LW(p) · GW(p)

Some humans are assholes. So for some value of aGroup, CEV does assholish things.

That's not necessarily true. CEV isn't precisely defined but it's intended to represent the idealized version of our desires and meta-desires. So even if we take a group of assholes, they don't necessarily want to be assholes, or want to want to be assholes, or maybe they wouldn't want to if they knew more and were smarter.

Replies from: wedrifid
comment by wedrifid · 2012-03-25T06:59:22.174Z · LW(p) · GW(p)

I refer, of course, to people whose preferences really are different to our own. Coherent Extrapolated Assholes. I don't refer to people who would really have preferences that I would consider acceptable if they just knew a bit more.

You asked for an explanation of how a correctly implemented 'CEV' could want something abhorrent. That's how.

There is an unfortunate tendency to glorify the extrapolation process and pretend that it makes any given individual or group have acceptable values. It need not.

Replies from: army1987, Vaniver
comment by A1987dM (army1987) · 2012-03-25T10:21:01.695Z · LW(p) · GW(p)

Upvoted for the phrase “Coherent Extrapolated Assholes”. Best. Insult. Ever.

Seriously, though, I don't think there are many CEAs around, anyway. (This doesn't mean there are none, either. (I was going to link to this as an example of one, but I'm not sure Hitler would have done what he did had he known about late-20th-century results about heterosis, Ashkenazi Jew intelligence, etc.)) This means that I think it's very, very unlikely for CEV to be evil (and even less likely to be evil>), unless the membership criteria to aGroup are gerrymandered to make it so.

comment by Vaniver · 2012-03-25T07:22:45.691Z · LW(p) · GW(p)

There is an unfortunate tendency to glorify the extrapolation process and pretend that it makes any given individual or group have acceptable values. It need not.

It seemed odd to me that so few people were bothered by the claims that CEV shouldn't care much about the inputs. If you expect it to give similar results if you put in a chimpanzee and a murderer and Archimedes, then why put in anything at all instead of just printing out the only results it gives?

comment by TimS · 2012-03-25T02:58:17.652Z · LW(p) · GW(p)

If you believe in moral progress (and CEV seems to rely on that position), then there's every reason to think that future-society would want to make changes to how we live, if future-society had the capacity to make that type of intervention.

In short, wouldn't you change the past to prevent the occurrence of chattel slavery if you could? (If you don't like that example, substitute preventing the October revolution or whatever example fits your preferences).

Replies from: Blueberry, wedrifid
comment by Blueberry · 2012-03-25T03:15:26.246Z · LW(p) · GW(p)

I wouldn't torture innocent people to prevent it, no.

Replies from: TimS
comment by TimS · 2012-03-25T03:42:46.045Z · LW(p) · GW(p)

Punishment from the future is spooky enough. Imagine what an anti-Guns of the South would be like for the temporal locals. Not pleasant, that's for sure.

comment by wedrifid · 2012-03-25T07:34:30.758Z · LW(p) · GW(p)

If you believe in moral progress (and CEV seems to rely on that position)

It's more agnostic on the issue. It works just as well for the ultimate conservative.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-25T16:24:45.533Z · LW(p) · GW(p)

Doesn't CEV implicitly assert that there exists a set of moral assertions M that is more reliably moral than anything humans assert today, and that it's possible for a sufficiently intelligent system to derive M?

That sure sounds like a belief in moral progress to me.

Granted, it doesn't imply that humans left to their own devices will achieve moral progress. But the same is true of technological progress.

Replies from: wedrifid
comment by wedrifid · 2012-03-25T16:28:19.845Z · LW(p) · GW(p)

Doesn't CEV implicitly assert that there exists a set of moral assertions M that is more reliably moral than anything humans assert today, and that it's possible for a sufficiently intelligent system to derive M?

The implicit assertion is "Greater or Equal", not "Greater".

Run on a True Conservative it will return the morals that the conservative currently has.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-25T19:01:52.626Z · LW(p) · GW(p)

Mm.
I'll certainly agree that anyone for whom that's true deserves the title "True Conservative."

I don't think I've ever met anyone who meets that description, though I've met people who would probably describe themselves that way.

Presumably, someone who believes this is true of themselves would consider the whole notion of extrapolating the target definition for a superhumanly powerful optimization process to be silly, though, and consider the label CEV to be technically accurate, in the same sense that I'm currently extrapolating the presence of my laptop, but to imply falsehoods.

comment by cousin_it · 2010-08-06T12:33:45.240Z · LW(p) · GW(p)

Roko thinks (or thought) it would. I do too. Can't argue it in detail here, sorry.

comment by XiXiDu · 2010-08-06T12:33:29.543Z · LW(p) · GW(p)

No AI is friendly. That's a naive idea. Is a FAI friendly towards superintelligent, highly conscious uFAI? No, it's not. It will kill it. Same as it will kill all other entities who'll try to do what they want with the universe. Friendliness is subjective and cannot be guaranteed.

Replies from: pjeby, wedrifid
comment by pjeby · 2010-08-06T14:54:46.673Z · LW(p) · GW(p)

Is a FAI friendly towards superintelligent, highly conscious uFAI? No, it's not. It will kill it.

Are you sure? Random alternative possibilities:

  • Hack it and make it friendly
  • Assimilate it
  • Externally constrain its actions
  • Toss it into another universe where humanity doesn't exist

Unless you're one yourself, it's rather difficult to predict what other options a superintelligence might come up with, that you never even considered.

comment by wedrifid · 2010-09-23T19:27:39.556Z · LW(p) · GW(p)

Yes to subjective, no to guaranteed.

Replies from: XiXiDu
comment by XiXiDu · 2010-09-24T10:32:55.635Z · LW(p) · GW(p)

How do you want to guarantee friendliness? If there are post-Singularity aliens out there then their CEV might be opposed to that of humanity, which would ultimately mean either our or their extinction. Obviously any CEV acted on by some friendly AI is a dictatorship that regards any disruptive elements, such as unfriendly AIs and aliens, as an existential risk. You might call this friendly, I don't. It's simply one way to shape the universe that is favored by the SIAI, a bunch of human beings who want to imprint the universe with an anthropocentric version of CEV. Therefore, as I said above, friendliness is subjective and cannot be guaranteed. I don't even think that it can be guaranteed subjectively, as any personal CEV would ultimately be a feedback process favoring certain constants between you and the friendly AI trying to suit your preferences. If you like sex, the AI will provide you with better sex, which in turn will make you like sex even more and so on. Any CEV is prone to be a paperclip maximizer seen from any position that is not regarded by the CEV. That's not friendliness, it's just a convoluted way of shaping the universe according to your will.

Replies from: wedrifid
comment by wedrifid · 2010-09-24T17:39:09.139Z · LW(p) · GW(p)

That's not friendliness, it's just a convoluted way of shaping the universe according to your will.

Yes, it's a "Friendly to me" AI that I want. (Where replacing 'me' with other individuals or groups with acceptable values would be better than nothing.) I don't necessarily want it to be friendly in the general colloquial sense. I don't particularly mind if you call it something less 'nice' sounding than Friendly.

I don't even think that it can be guaranteed subjectively, as any personal CEV would ultimately be a feedback process favoring certain constants between you and the friendly AI trying suit your preferences. If you like sex, the AI will provide you with better sex which in turn will make you like sex even more and so on.

Here we disagree on a matter of real substance. If I do not want my preferences to be altered in the kind of way you mention then a Friendly (to me) AI doesn't do them. This is tautological. Creating a system and guaranteeing that it works as specified is then 'just' a matter of engineering and mathematics. (Where 'just' means 'harder than anything humans have ever done'.)

Replies from: XiXiDu
comment by XiXiDu · 2010-09-25T09:33:13.371Z · LW(p) · GW(p)

If I do not want my preferences to be altered in the kind of way you mention then a Friendly (to me) AI doesn't do them.

I just don't see how that is possible without the AI becoming a primary attractor and therefore fundamentally altering the trajectory of your preferences. I'd favor the way Kurzweil portrays a technological Singularity here, where humans themselves become the Gods. I do not want to live in a universe where I'm just a puppet of the seed I once sowed. That is, I want to implement my own volition without the oversight of a caretaker God. As long as there is a being vastly superior to me that takes interest in my own matters, even the mere observer effect will alter my preferences since I'd have to take this being into account in everything conceivable.

The whole idea of friendly AI, even if it was created to suit only my personal volition, reminds me of the promises of the old religions. This horrible boring universe where nothing bad can happen to you and everything is already figured out by this one being. Sure, it wouldn't figure it out if it knew I want to do that myself. But that would be pretty dumb, as it could if I wanted it to. And that's just the case with my personal friendly AI. One based on the extrapolated volition of humanity would very likely not be friendly towards me and would ultimately dictate what I can and cannot do.

Really the only favorable possibility here is to merge with the AI. But that would mean instant annihilation to me as I would add nothing to a being that vast. So I still hope that AI going foom is wrong and that we see a slow development over many centuries instead, without any singularity type event.

And I'm aware that big government and other environmental influences are altering and steering my preferences as well. But they are much more fuzzy, whereas a friendly AI is very specific. The more specific, the less free will I have. That is, the higher the ratio of the influence and effectiveness of control that I exert over the environment to that which the environment exerts over me, the more free I am to implement what I want to do versus what others want me to do.

Replies from: wedrifid
comment by wedrifid · 2010-09-25T09:51:17.267Z · LW(p) · GW(p)

I'd favor the way Kurzweil portrays a technological Singularity here, where humans themselves become the Gods.

The problem with having a pantheon of Gods... they tend to bicker. With metaphorical lightning bolts. ;)

I don't think that outcome would be incompatible with a FAI (which may be necessary to do the research to get you your godlike powers). Apart from the initial enabling the FAI would provide, the new 'Gods' could choose by mutual agreement to create some form of power structure that prevented them from messing each other over and burning the cosmic commons in competition.

So I still hope that AI going foom is wrong and that we see a slow development over many centuries instead, without any singularity type event.

You talked about the downside to mere observation. That would be utterly trivial and benign compared to the effects of Malthusian competition. Humans are not in a stable equilibrium now. We rely on intuitions created in a different time and different circumstances to prevent us from rapidly rushing to a miserable equilibrium of subsistence living.

The longer we go before putting a check on evolutionary pressure towards maximum securing of resources the more we will lose that which we value as 'human'. Yes everything we value except existence itself. Even consciousness in the form that we experience it.

Replies from: wedrifid
comment by wedrifid · 2010-09-25T09:59:34.034Z · LW(p) · GW(p)

The longer we go before putting a check on evolutionary pressure towards maximum securing of resources the more we will lose that which we value as 'human'. Yes everything we value except existence itself. Even consciousness in the form that we experience it.

I don't think I emphasised this enough. Unless the ultimate cooperation problem is solved we will devolve to something that is less human than Clippy. Clippy at least has a goal that he seeks to maximise and which motivates his quest for power. Competition would weed out even that much personality.

comment by NancyLebovitz · 2010-07-21T18:08:11.622Z · LW(p) · GW(p)

What's current thought about how you'd tell that AI is becoming more imminent?

I'm inclined to think that AI can't happen before the natural language problem is solved.

comment by Will_Newsome · 2010-07-16T04:19:27.109Z · LW(p) · GW(p)

I'm trying to think of conflicts between subsystems of the brain to see if there's anything more than a simple gerontocratic system of veto power (i.e. evolutionarily older parts of the brain override younger parts). Help?

I've got things like:

  • Wanting to eat but not wanting to spend money on food but wanting to signal wealth.
  • Wanting to breathe when underwater but wanting to surface for breath first but wanting to signal willpower to watching friends.
  • Wanting to survive but wanting to die for one's country/ideals/beliefs. (This is a counterexample to the gerontocratic hypothesis, no?)
  • Wanting to appear confident but wanting to appear modest. (Not necessarily opposed, but there is some tradeoff.)

What types of internal conflicts am I missing entirely?

Replies from: Alicorn
comment by Alicorn · 2010-07-16T04:28:47.752Z · LW(p) · GW(p)

I think you should: Eat. Refrain from breathing while underwater. Survive.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-16T04:32:21.687Z · LW(p) · GW(p)

Er, right, but what decision making algorithm or heuristics do you think the brain typically uses when solving problems similar to those listed?

Replies from: Alicorn
comment by Alicorn · 2010-07-16T04:37:12.691Z · LW(p) · GW(p)

Hmmm... I wish I could help, but I don't seem to have conflicts in this reference class. I don't care about signaling wealth, especially not when that actually involves parting with money; I'd only care about how long I could hold my breath if I had a bet going and I'd never make such a bet unless I was sure I could win it comfortably; I have absolutely no desire whatsoever to die for any cause; and I want to be honest more than I want to appear confident or honest to the point where if I have inclinations towards either of the latter, they might as well not exist.

comment by Will_Newsome · 2010-07-16T00:03:11.628Z · LW(p) · GW(p)

I think in dialogue. (More precisely, I think in dialogue about half the time, in more bland verbal thoughts a quarter of the time, and visually a quarter of the time, with lots of overlap. This also includes think-talking to myself when it seems internally that there are 2 people involved.)

Does anyone else find themselves thinking in dialogue often?

I think it probably has something to do with my narcissistic and often counterproductive obsession with other people's perceptions of me, but this hypothesis is the result of generalizing from one example. If it turns out cognitive styles are linked with certain personality traits to a significant degree that would be rather interesting. Have there been studies on this? Anyone have any theories?

I'm very curious about others' cognitive styles, whether or not they're similar to my own.

So, LW: how do you think?

Replies from: JamesPfeiffer, erratio, None, WrongBot, red75
comment by JamesPfeiffer · 2010-07-23T03:40:53.906Z · LW(p) · GW(p)

Monologues or disjointed verbal fragments. When I am mad at someone (hasn't really happened for a few years :) ) I get into dialogues with them, usually going in circles.

comment by erratio · 2010-07-16T03:30:18.542Z · LW(p) · GW(p)

Dialogue and blog posts/essay format when I want to think about a particular topic but have no explicit goal in mind, regular verbal thoughts when I'm doing something cognitively challenging, fuzzy non-verbal and non-explicit concepts when I'm not thinking about anything in particular. Visual thought is something that I am capable of but only with conscious effort (eg. I can do those cube rotation tests just fine but I will convert everything to words if the problem allows it).

comment by [deleted] · 2010-07-16T01:30:20.679Z · LW(p) · GW(p)

Almost entirely verbal and auditory (two different things; auditory includes music and meter). Not very visual.

comment by WrongBot · 2010-07-16T01:13:11.398Z · LW(p) · GW(p)

I posted this just now, and then immediately saw this comment.

So, yes, you're not alone.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-16T01:15:24.943Z · LW(p) · GW(p)

Hahaha, I very nearly made a symmetric comment on your comment.

It seems the way you think in dialogue is a lot less self-aggrandizing and thus more useful than my way of thinking (though I've been able to change it somewhat recently).

Replies from: h-H
comment by h-H · 2010-07-18T08:56:29.406Z · LW(p) · GW(p)

For a moment I was saying no way I'm like that, but then I actually thought about it and it describes me quite well, big-headedness and all :/

comment by red75 · 2010-07-16T00:40:08.177Z · LW(p) · GW(p)

~95% - monologue, ~4.999% - nonverbal nonvisual processing, ~0.001% - dialogue. Personality is best described as ICD-10 F60.1.

comment by JoshuaZ · 2010-07-09T23:28:19.607Z · LW(p) · GW(p)

I am tentatively interpreting your remark about "not wanting to leave out those I have *plonked*" as an indication that you might read comments by such individuals. Therefore, I am going to reply to this remark. I estimate a small probability (< 5%) that you will actually consider what I have to say in this comment, but I also estimate that explicitly stating that estimate increases the probability, rendering the estimate possibly as high as 10%. I estimate a much higher chance that this remark will be of some benefit to readers here, especially if they haven't seen your earlier comments.

I think this recurrent idea of how humans are so dumb therefore we need AI is a sort of copout. Basically you're saying I am way too lazy to do the hard science so I will hope that AI will solve it. What makes it even more of a copout is that you personally are probably not even involved in the creation of AI.

The post you are replying to made no mention of AI at all. You seem to be focusing on the word "dumb" and assuming a very narrow definition. This is interesting, in that when reading the remark I interpreted "dumb" as almost exactly what you think it is not talking about, that is, lacking knowledge and technology. Incidentally, I'm not sure how AI would not fall into the technology category.

Christians believe in salvation through Jesus - you guys believe AI, cryonics and technology will save you.

That's an interesting claim. As one of the posters here who seems to annoy you the most, I find it interesting that I a) estimate a very tiny probability for a Singularity-type event involving AI, b) am not signed up for cryonics (although I am considering it), and c) estimate a very small chance that technology in the next fifty years will allow indefinite extensions of life spans. While I am a sample size of one, I don't think I'm that far off from the usual LW contributor (although obviously this could be due to the standard bias of humans assuming that others are similar to them).

Christians believe in an all powerful God - you hope to create an all powerful AI, the ideal god that does what you want and allows you to be as immoral as you like.

I can't speak directly for the individuals who want to create a strong, very powerful singleton AI, but your claim that they wish to do so in order to be as immoral as they like seems false. Indeed, much of the discussion about such AIs centers around human morality and how one would get an AI to obey general human moral and ethical norms. So where one gets the idea that they want to be as immoral as they like is not at all clear.

I also don't understand how trying to create an AI that's powerful is the same as believing in an all powerful deity that exists independently of humans.

Christians believe in the afterlife and resurrection after death - you guys think that cryonics and computers are your tickets to resurrection and immortality.

Curiously, most of the pro-cryonics individuals here estimate low probabilities of cryonics succeeding; I haven't seen anyone here make an explicit estimate that was more than 25%. (If anyone here does estimate a higher chance, I'd be curious to hear it and see what their logic is.) I've seen multiple people here who put the estimate at <10%. I'm pretty sure that very few religious individuals who believe in an afterlife would put that low a probability estimate on their beliefs.

I could keep going but I think my point is clear: the parallels are too obvious to deny. Henceforth we shall term your beliefs Christianity v2.0.

I'm also not sure why you choose to focus so much on Christianity as the comparison religion. Many religions have aspects very similar to what you laid out in your comparison. Zoroastrianism has many elements that pre-dated Christianity, and various Jewish sects also had similar beliefs. Moreover, if any religion gets to be Christianity v2.0 it would be Islam, with possibly Mormonism or the Bahai being 3.0.

Now, it is true that many transhumanists and Singularitarians (note that these are not necessarily the same thing) do have attitudes that come across as intensely religious in form. These issues have been discussed here before. (Note how those comments were voted up, which shows that such criticism, when properly targeted and well thought out, is considered worth discussing here. This provides an interesting contrast to your remarks. It is also difficult to reconcile such upvoting with your model of LW as full of fanatical transhumanist Singularitarians.)

You also seem to be trying, again, to score some sort of rhetorical points with name calling and labeling. I don't think that almost anyone here, either posters or readers, is going to be more persuaded by your opinions if you use that term. Frankly, as someone who finds a lot of the more borderline religious aspects of transhumanism and Singularitarianism to be pretty disturbing, reading your remarks makes me feel more sympathetic to those viewpoints simply out of an emotional reaction against your poor arguments.

I know you guys think this is a rationalist community but your views on such things are so warped as to render your efforts nearly fruitless in the area of rationalism.

Three questions: First, what aspects of LW's approach to rationalism do you think are seriously warped? Second, do you think the community is monolithic in its attitude towards rationality? (I, for example, am not an epistemological Bayesian, and I also think that LW frequently downplays, to its own detriment, the complicated history of science and scientific discoveries. But I don't think I'd label things as so warped.) Third, if you think that LW's rationalism is so warped, what do you think you are gaining by posting here?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-11T16:41:07.911Z · LW(p) · GW(p)

The problem with religious beliefs is not that they are false (they don't have to be), but that they are believed for the purpose of signaling belonging to a group, rather than because they are true. This does cause them to often be wrong or not even wrong, but the wrongness is not the problem, epistemic practices that lead to them are. Correspondingly, the reasons for a given religious belief turning out to be wrong are a different kind of story from the reasons for a given factual belief turning out to be wrong. The comparison of factual mistakes in religious beliefs and factual mistakes made by people who try to figure things out is a shallow analogy that glosses over the substance of the processes.

comment by jimrandomh · 2010-08-05T14:11:01.925Z · LW(p) · GW(p)

If you take this incident to its extreme, the important question is what people are willing to do in future based on the argument "it could increase the chance of an AI going wrong..."?

That is not the argument that caused stuff to be deleted from Less Wrong! Nor is it true that leaving it visible would increase the chance of an AI going wrong. The only plausible scenario where information might be deleted on that basis is if someone posted designs or source code for an actual working AI, and in that case much more drastic action would be required.

Replies from: XiXiDu, wedrifid
comment by XiXiDu · 2010-08-05T14:45:57.754Z · LW(p) · GW(p)

What was the argument then? This thread suggests my point of view.

Here one of many comments from the thread above and elsewhere indicating that the deletion was due to the risk I mentioned:

I read the article, and it struck me as dangerous. [JoshuaZ 01 August 2010 04:46:39AM]

I've just read EY's comment. It's indeed mainly about protecting people from themselves causing an unfriendly AI to blackmail them. This conclusion is hard to come by since the comment was deleted without explanation. Still, it's basically the same argument, and quite a few people on LW seem to follow the argument I described, which I described in order to start a discussion about how far we want to go.

comment by wedrifid · 2010-08-05T14:52:18.343Z · LW(p) · GW(p)

Agree in as much as I suggest Xi should revise to "decrease the chance of AI going right".

Replies from: XiXiDu
comment by XiXiDu · 2010-08-05T15:09:32.492Z · LW(p) · GW(p)

I noticed there is another deleted comment by EY where he explicitly writes:

"...the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us." [Jul 24, 2010 8:31 AM]

Replies from: wedrifid
comment by wedrifid · 2010-08-05T15:13:03.074Z · LW(p) · GW(p)

I stand corrected.

comment by WrongBot · 2010-07-31T00:50:26.716Z · LW(p) · GW(p)

A general question about decision theory:

Is it possible to assign a non-zero prior probability to statements like "my memory has been altered", "I am suffering from delusions", and "I live in a perfectly simulated matrix"?

Apologies if this has been answered elsewhere.

Replies from: ocr-fork, katydee, ata, CronoDAS
comment by ocr-fork · 2010-07-31T00:57:05.582Z · LW(p) · GW(p)

The first two questions aren't about decisions.

"I live in a perfectly simulated matrix"?

This question is meaningless. It's equivalent to "There is a God, but he's unreachable and he never does anything."

Replies from: Blueberry
comment by Blueberry · 2010-07-31T02:01:01.686Z · LW(p) · GW(p)

No, it's not meaningless, because if it's true, the matrix's implementers could decide to intervene (or for that matter create an afterlife simulation for all of us). If it's true, there's also the possibility of the simulation ending prematurely.

comment by katydee · 2010-07-31T01:40:17.412Z · LW(p) · GW(p)

Yes.

comment by ata · 2010-07-31T01:21:37.783Z · LW(p) · GW(p)

Is it possible to assign a non-zero prior probability to statements like "my memory has been altered", "I am suffering from delusions", and "I live in a perfectly simulated matrix"?

Of course we have to assign non-zero probabilities to them, but I'm not quite sure how we'd figure out the right priors. Assuming that the hypotheses that your memory has been altered or you're delusional do not actually cause you to anticipate anything differently (see the bit about the blue tentacle in Technical Explanation), you may as well live in whatever reality appears to you to be the outermost one accessible to your mind.

(As for the last one, Nick Bostrom argues that we can actually assign a very high probability to a statement somewhat similar to "I live in a perfectly simulated matrix" — see the Simulation Argument. I have doubts about the meaningfulness of that on the basis of modal realism, but I'm not too confident one way or the other.)

Replies from: PaulAlmond, WrongBot
comment by PaulAlmond · 2010-08-18T02:55:54.316Z · LW(p) · GW(p)

I disagree with the idea that modal realism, whether right or not, changes the chances of any particular hypothesis like that being true. I am not saying that we can never have a rational belief about whether or not modal realism is true: there may or may not be a philosophical justification for modal realism. However, I do think that whether modal realism applies has no bearing on the probability of you being in some situation, such as in a computer simulation. I think this issue needs debating, so for that purpose I have asserted this as a rule, which I call "The Principle of Modal Realism Equivalence", and that gives us something well-defined to argue for or against. I define and assert the rule, and give a (short) justification of it, here: http://www.paul-almond.com/ModalRealismEquivalence.pdf.

comment by WrongBot · 2010-07-31T05:57:12.858Z · LW(p) · GW(p)

But what if you should anticipate things very differently, if your memory has been altered? If I assigned a high probability to my memory having been altered, then I should expect that the technology exists to alter memories, and all manner of even stranger things that that would imply. Figuring out what prior to assign to a case like that, or whether it can be done at all, is what I'm struggling with.

Replies from: CronoDAS
comment by CronoDAS · 2010-07-31T00:55:05.399Z · LW(p) · GW(p)

Why not?

Replies from: WrongBot
comment by WrongBot · 2010-07-31T01:03:05.831Z · LW(p) · GW(p)

"Where'd you get your universal prior, Neo?"

Eliezer seems to think (or, at least he did at the time) that this isn't a solvable problem. To phrase the question in a way more relevant to recent discussions, are those statements in any way similar to "a halting oracle exists"?

Replies from: saturn
comment by saturn · 2010-07-31T06:35:05.698Z · LW(p) · GW(p)

Solomonoff's prior can't predict something uncomputable, but I don't see anything obviously uncomputable about any of the 3 statements you asked about.

Replies from: WrongBot
comment by WrongBot · 2010-07-31T19:02:01.268Z · LW(p) · GW(p)

Right. But can it predict computable scenarios in which it is wrong?

Replies from: saturn
comment by saturn · 2010-07-31T21:21:28.327Z · LW(p) · GW(p)

Yes. Anything that can be represented by a Turing machine gets a nonzero prior. And its model of itself goes in the same Turing machine with the rest of the world.
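
For readers who want the textbook formulation behind "every Turing machine gets a nonzero prior", here is a minimal sketch of Solomonoff's universal prior. This is standard material, not something stated in the thread; U denotes a fixed universal prefix Turing machine.

```latex
% Solomonoff's universal prior over finite strings x,
% for a fixed universal prefix Turing machine U:
M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|}
% Every computable hypothesis is the output of at least one program p,
% so it receives weight at least 2^{-|p|} > 0, i.e. a nonzero prior.
```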

comment by daedalus2u · 2010-07-29T18:18:15.187Z · LW(p) · GW(p)

I am pretty new to LW, and have been looking for something and have been unable to find it.

What I am looking for is a discussion on when two entities are identical, and if they are identical, are they one entity or two?

The context for this is continuity of identity over time. Obviously an entity that has extra memories added is not identical to an entity without those memories, but if there is a transform that can be applied to the first entity (the transform of experience over time), then in one sense the second entity can be considered to be an older (and wiser) version of the earlier entity.

But the selection of the transform that changes the first entity into the second one is arbitrary. In principle there is a transform that will change any Turing equivalent into any other Turing equivalent. Is every entity that can be instantiated as a TM equivalent to every other TM entity?

I appreciate this does not apply to entities instantiated in a biological format because such substrates are not stable over time (even a few seconds). However that does raise another problem, how can a human be “the same” entity over their lifetime?

Replies from: WrongBot, ata
comment by WrongBot · 2010-07-29T18:44:44.979Z · LW(p) · GW(p)

Eliezer's sequence on quantum mechanics and personal identity is almost exactly what you're looking for, I think.

comment by ata · 2010-07-29T18:53:44.678Z · LW(p) · GW(p)

What I am looking for is a discussion on when two entities are identical, and if they are identical, are they one entity or two?

The context for this is continuity of identity over time.
...
However that does raise another problem, how can a human be “the same” entity over their lifetime?

That's the kind of question that a traditional philosopher would try to answer by coming up with the Ultimate Perfect True Definition of Identity, while an LWer would probably try to dissolve it. This is actually a fairly easy problem and should make good practice — "Dissolving the Question", "Righting a Wrong Question", and "How An Algorithm Feels From the Inside" should be good places to start. The "Quantum Mechanics and Personal Identity" subsequence may also be useful if you're considering any concept of identity that involves continuity of constituent matter.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-29T19:06:44.830Z · LW(p) · GW(p)

Hold on -- those are important articles to read, and they do move you toward a resolution of that problem. But I don't think they fully dissolve/answer the exact question daedalus2u is asking.

For example, EY has written this article, grappling with but ultimately not resolving the question of whether you should care about "other copies" of you, why you are not indifferent between yourself vs. someone else jumping off a cliff, etc.

I don't deny that the existing articles do resolve some of the problems daedalus2u is posing, but they don't cover everything he asked.

Unless I've missed something?

Replies from: daedalus2u
comment by daedalus2u · 2010-07-29T22:50:01.597Z · LW(p) · GW(p)

SilasBarta, yes, I was thinking about purely classical entities, the kind of computers that we would make now out of classical components. You can make an identical copy of a classical object. If you accept substrate independence for entities, then you can't “dissolve” the question.

If Ebborians are classical entities, then exact copies are possible. An Ebborian can split and become two entities and accumulate two different sets of experiences. What if those two Ebborians then transfer memory files such that they now have identical experiences? (I appreciate this is not possible with biological entities because memories are not stored as discrete files).

Turing Machines are purely classical entities. They are all equivalent, except for the data fed into them. If humans can be represented by a TM, then all humans are identical except for the data fed into the TM that is simulating them. Where is this wrong?

Replies from: mattnewport, SilasBarta
comment by mattnewport · 2010-07-29T22:56:12.176Z · LW(p) · GW(p)

Turing Machines are purely classical entities. They are all equivalent, except for the data fed into them. If humans can be represented by a TM, then all humans are identical except for the data fed into the TM that is simulating them. Where is this wrong?

It's no more wrong than saying that all books are identical except for the differing number and arrangement of letters. It's also no more useful.

Replies from: daedalus2u
comment by daedalus2u · 2010-07-30T00:33:38.319Z · LW(p) · GW(p)

Except human entities are dynamic objects, unlike static objects like books. Books are not considered to be "alive" or "self-aware".

If two humans can both be represented by a TM with different tapes, then one human can be turned into another human by feeding one tape in backwards and then feeding the other tape in forwards. If one human can be turned into another by a purely mechanical process, how does the "life", "entity identity", or "consciousness" change as that transformation is occurring?

I don't have an answer, I suspect that the problem is tied up in our conceptualization of what consciousness and identity actually is.

My own feeling is that consciousness is an illusion, and that illusion is what produces the illusion of identity continuity over a person's lifetime. Presumably there is an "identity module", and that "identity module" is what self-identifies an individual as "the same" individual over time (not a complete one-to-one correspondence between entities, which we know does not happen), even as the individual changes. If that is correct, then change the "identity module" and you change the self-perception of identity.

Replies from: mattnewport
comment by mattnewport · 2010-07-30T00:43:34.438Z · LW(p) · GW(p)

I don't see why the TM issue is essential to your confusion. If you are not a dualist then the fact that two human brains differ only in the precise arrangement of the same types of atoms present in very similar numbers and proportions raises the same questions.

Replies from: daedalus2u
comment by daedalus2u · 2010-07-30T02:49:21.668Z · LW(p) · GW(p)

I am not a dualist. I used the TM to avoid issues of quantum mechanics. TM equivalence is not compatible with a dualist view either.

Only a part of what the brain does is conscious. The visual cortex isn't conscious. The processing of signals from the retina is not under conscious control. That is why optical illusions work, the signal processing happens a certain way, and that certain way cannot be changed even when consciously it is known that what is seen is counterfactual.

There are many aspects of brain information processing that are like this. Sound processing is like this; where sounds are decoded and pattern matched to communication symbols.

Since we know that the entity instantiating itself in our brain is not identical with the entity that was there a day ago, a week ago, a year ago, and will not be identical to the entity that will be there next year, why do we perceive there to be continuity of consciousness?

Is that illusion of continuity the same as the way the visual cortex fills in the blind spot on the retina? Is it the same as pareidolia?

I suspect that the question of consciousness isn't so much why we experience consciousness, but why we experience a continuity of consciousness when we know there is no continuity.

comment by SilasBarta · 2010-07-29T23:05:06.464Z · LW(p) · GW(p)

If Ebborians are classical entities, then exact copies are possible. An Ebborian can split and become two entities and accumulate two different sets of experiences. What if those two Ebborians then transfer memory files such that they now have identical experiences? (I appreciate this is not possible with biological entities because memories are not stored as discrete files).

You may be interested that I probed a similar question regarding how "qualia" come into play with this post about when two (classical) beings trade experiences.

comment by Unknowns · 2010-07-27T07:07:53.837Z · LW(p) · GW(p)

http://www.usatoday.com/news/offbeat/2010-07-13-lottery-winner-texas_N.htm?csp=obinsite

My prior for the probability of winning the lottery by fraud is high enough to settle the question: the woman discussed in the article is cheating.

Does anyone disagree with this?

Replies from: CronoDAS, Cameron_Taylor, Will_Newsome
comment by CronoDAS · 2010-07-27T08:17:24.125Z · LW(p) · GW(p)

The appropriate question to ask is:

Given the number of people who play all the different kinds of lotteries, what are the odds of there being some person who wins four (modest) jackpots?

Incidentally, three wins came from scratch-off tickets, which seem inherently less secure than the ones with a central drawing. (And you can also do something akin to card-counting with them: the odds change depending on how many tickets have already been sold and how many prizes have been claimed. Some states make this information public, so you can sometimes find tickets with a positive expected value in dollars.)
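
A minimal sketch of the "card-counting" point, in Python, using entirely made-up numbers; the ticket price, prize table, and remaining-ticket count below are hypothetical, not taken from any real lottery.

```python
# Expected value of a scratch-off ticket from publicly posted counts of
# remaining tickets and unclaimed prizes. All numbers are hypothetical.
ticket_price = 5.00
tickets_remaining = 300_000
prizes_remaining = {   # prize value -> number still unclaimed
    1_000_000: 1,
    10_000: 12,
    100: 3_000,
    10: 40_000,
}

expected_payout = sum(v * n for v, n in prizes_remaining.items()) / tickets_remaining
print(f"expected payout per ticket: ${expected_payout:.2f}")
print(f"expected value per ticket:  ${expected_payout - ticket_price:+.2f}")
# Late in a game's run, if large prizes are still unclaimed, the expected
# value can turn positive even though it was negative at launch.
```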

Replies from: Unknowns
comment by Unknowns · 2010-07-27T09:21:25.688Z · LW(p) · GW(p)

I admit I don't know the odds of one person winning four jackpots of over a million dollars each by pure chance. However, my guess is that they are fairly low. But maybe I'm wrong.

Regardless, one can just as easily ask "What are the odds that someone who knows how to cheat at lotteries by this time would have won four of them while cheating on at least one of them?"

Surely the answer to this is: better odds than the answer to the previous question.

There is something else involved as well. We can consider the two hypotheses: 1) she won four lotteries by pure luck; 2) she won four lotteries by cheating. The first hypothesis would predict that she will never win another lottery (like ordinary people.) The second hypothesis would predict that there is a good chance she will win another in her lifetime.

Agreeing with the second hypothesis, I predict with significant probability that she will win another. If she does, your credence in the proposition that it happened by chance must take a huge blow. In fact, would you agree that in this event, you would admit it to be more likely that she cheated?

If so, then consider what would have happened if I had raised the same issue after she had won three of them...
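
A minimal sketch of the update being proposed here, with purely illustrative numbers: the prior and the two likelihoods below are placeholders chosen to show the structure of the argument, not estimates anyone in the thread gave.

```python
# Update P(cheating) vs. P(pure luck) on observing a fifth jackpot win.
# All numbers are illustrative placeholders.
p_cheat = 0.5                  # prior credence that the four wins involved cheating
p_luck = 1.0 - p_cheat

p_win5_given_cheat = 0.3       # a cheater has a substantial chance of another win
p_win5_given_luck = 1e-5       # an honest player's chance of yet another jackpot

posterior_cheat = (p_win5_given_cheat * p_cheat) / (
    p_win5_given_cheat * p_cheat + p_win5_given_luck * p_luck
)
print(f"P(cheating | fifth win) = {posterior_cheat:.5f}")
# With likelihoods anywhere near these, a fifth win pushes the cheating
# hypothesis to near-certainty, which is the point of the prediction above.
```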

comment by Cameron_Taylor · 2010-07-27T08:55:23.910Z · LW(p) · GW(p)

My prior for the probability of winning the lottery by fraud

What's your secret? ;)

Replies from: Unknowns
comment by Unknowns · 2010-07-27T09:26:09.967Z · LW(p) · GW(p)

See my reply to CronoDAS regarding the possibility of a fifth lottery win.

comment by Will_Newsome · 2010-07-27T07:32:34.986Z · LW(p) · GW(p)

My prior that the universe is not sufficiently uniformly described by typical reductionist reasoning like the kind found in Eliezer's reductionism sequence is high enough that in order to make distinctions between such low probability hypotheses as the ones described I would need to be more sure that my model was meant to deal with the relationship between hypotheses and observed evidence on the extreme ends of a log odds probability scale. (I would also have to be less aware of emotionally available and biased-reasoning-causing fun-theoretic-like anthropic-like not-explicitly-reasoned-through alternative hypotheses.)

Replies from: Unknowns
comment by Unknowns · 2010-07-27T09:28:20.756Z · LW(p) · GW(p)

What are the alternative hypotheses? Magic? A simulation with interference from the simulator?

I'm not denying the possibility of alternatives, it's just that they all seem less likely than the two low probability hypotheses originally considered (chance and cheating).

comment by Eneasz · 2010-07-16T05:31:15.271Z · LW(p) · GW(p)

This is a brief excerpt of a conversation I had (edited for brevity) where I laid out the basics of a generalized anti-supernaturalism principle. I had to share this because of a comment at the end that I found absolutely beautiful. It tickles all the logic circuits just right that it still makes me smile. It’s fractally brilliant, IMHO.

(italics are not-me)

So you believe there is a universe where 2 + 2 ≠ 4 or the law of noncontradiction does not obtain? Ok, you are free to believe that. But if you are wrong, I am sure that you can see that there is an order of existence beyond nature and that therefore the supernatural exists.

If there were a universe where two things and two things were not the same as four things, or a universe where something could both be something and NOT that thing, then THAT would be proof of the supernatural. That is basically what the definition of supernatural IS.

If you believe there can’t be a universe where 2 and 2 isn’t the same as 4, and you claim to believe in the supernatural, you are contradicting yourself.

can you give any explanation for why the true definition of supernatural is belief in logical contradiction?

Because anything less is simply naturalism that we don’t understand yet. That sort of god is indistinguishable from a sufficiently advanced alien life. Knowing enough about how reality works to manipulate it in ways that allow you to fly in metal transports, or communicate with someone on the other side of the planet nearly instantly, is not supernaturalism, it’s just applied naturalism. Knowing enough about reality to materialize a unicorn in a church or alter the gravitational constant in a localized area is not supernaturalism, it is just applied naturalism. Any god who is logically consistent can, with enough study, be emulated by man. He does not, in principle, have access to any aspect of reality that is beyond the reach of sufficiently advanced natural creatures.

Thus the only form of supernaturalism that isn’t reducible to applied naturalism is that of literally impossible contradiction. Which is what is generally implied by magic claims. Otherwise they wouldn’t be “magic”, just “technology”.

Do you see the irony in complaining about the logical contradiction of people who claim not to believe in the possibility of logical contradiction but also believe in the supernatural (ie the possibility of logical contradiction)?

comment by multifoliaterose · 2010-07-13T20:02:08.615Z · LW(p) · GW(p)

SiteMeter gives some statistics about number of visitors that LessWrong has, per hour/per day/per month, etc.

According to the SiteMeter FAQ, multiple views from the same IP address are considered to be the same "visit" only if they're spaced by 30 minutes or less. It would be nice to know how many visitors LessWrong has over a given time interval, where two visits are counted as the same if they come from the same IP address. Does anyone know how to collect this information?
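
A minimal sketch of one way to collect this, assuming access to the site's raw web-server access logs rather than the SiteMeter dashboard (SiteMeter itself may not expose per-IP data). The file path below is hypothetical, and the parsing assumes the common combined log format.

```python
# Count distinct IP addresses seen in an access log within a given time window.
# Assumes the common/combined log format, e.g.:
# 203.0.113.7 - - [13/Jul/2010:20:02:08 +0000] "GET / HTTP/1.1" 200 5123 ...
from datetime import datetime

LOG_PATH = "access.log"        # hypothetical path to the server's access log
start = datetime(2010, 7, 1)
end = datetime(2010, 8, 1)

unique_ips = set()
with open(LOG_PATH) as log:
    for line in log:
        try:
            ip = line.split()[0]
            stamp = line.split("[", 1)[1].split("]", 1)[0].split()[0]
            when = datetime.strptime(stamp, "%d/%b/%Y:%H:%M:%S")
        except (IndexError, ValueError):
            continue           # skip malformed lines
        if start <= when < end:
            unique_ips.add(ip)

print(f"distinct IPs between {start:%Y-%m-%d} and {end:%Y-%m-%d}: {len(unique_ips)}")
```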

comment by Bongo · 2010-07-10T21:29:11.776Z · LW(p) · GW(p)

Said on #lesswrong:

00:18 BTW, I figured out why Eliezer looks like a cult leader to some people. It's because he has both social authority (he's a leader figure, solicits donations) and an epistemological authority (he's the top expert, and wrote the sequences which are considered canonical).

00:18 If, for example, Wei Dai kicked Eliezer's ass at FAI theory, LW would not appear cultish.

00:18 This suggests that we should try to make someone else a social authority so that he doesn't have to be.

(I hope posting only a log is okay)

comment by jimrandomh · 2010-09-23T22:59:42.471Z · LW(p) · GW(p)

Yes, that is what you said you'd do. A 0.0001% existential risk is equal to 6700 murders, and that's what you said you'd do if you didn't get your way. The fact that you didn't understand what it meant doesn't make it acceptable, and when it was explained to you, you should've given an unambiguous retraction, but you didn't. You are obviously bluffing, but if I had the slightest doubt about that, then I would call the police, who would track you down and verify that you were bluffing.
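
For readers puzzled by that equivalence, here is the arithmetic spelled out. It is the "world pop / a mil" calculation referenced later in the thread; the world-population figure is an assumption consistent with the 6700 number.

```latex
% 0.0001% expressed as a probability, multiplied by an assumed world population:
0.0001\% = 10^{-6}, \qquad 10^{-6} \times 6.7 \times 10^{9}\ \text{people} \approx 6700\ \text{expected deaths}
```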

Replies from: mattnewport, wedrifid, waitingforgodel
comment by mattnewport · 2010-09-23T23:02:16.221Z · LW(p) · GW(p)

I would call the police, who would track you down and verify that you were bluffing.

And you'd probably be cited for wasting police time. This is the most ridiculous statement I've seen on here in a while.

Replies from: jimrandomh
comment by jimrandomh · 2010-09-23T23:07:11.884Z · LW(p) · GW(p)

It was a hypothetical, in the (10^-12 probability) event that waitingforgodel provided credible evidence that he was willing and able to carry through with the threat he made. I figured that following through the logical consequences would make him realize just how ridiculous and bad what he'd said was.

comment by wedrifid · 2010-09-24T04:34:20.205Z · LW(p) · GW(p)

but if I had the slightest doubt about that, then I would call the police, who would track you down and verify that you were bluffing.

You appear to be confused. Wfg didn't propose to murder 6700 people. You did mathematics by which you judge wfg to be doing something as morally bad as 6700 murders. That doesn't mean he is breaking the law or doing anything that would give you the power to use the police to exercise your will upon him.

I disapprove of the parent vehemently.

comment by waitingforgodel · 2010-09-24T00:30:08.613Z · LW(p) · GW(p)

Hey Jim,

It sounds like my post rubbed you the wrong way, that wasn't my intention.

I do understand your math (world pop / a mil), did you understand mine?

Providing a credible threat reduces existential risk and saves lives... significantly more than the 6700 you cite.

Check out this article and the wikipedia article on MAD, then reread the post you're replying to and see if it makes more sense. The Wei Dai exchange might also help shed some light. If you ask questions here I'll do my best to walk you through anything you get stuck on.

I don't feel comfortable talking in too much detail here about my list. If anyone knows a good way for me to reveal one or two methods safely I'm willing... but it's not like they're rocket science or anything.

-wfg

(edit: fixed awkward wording in last paragraph)

Replies from: jimrandomh
comment by jimrandomh · 2010-09-24T01:43:13.903Z · LW(p) · GW(p)

I am answering this by private message.

Replies from: jimrandomh
comment by jimrandomh · 2010-09-24T03:10:45.821Z · LW(p) · GW(p)

Aha! This has happened several times now, and waitingforgodel mentioned something in his reply which clarified what happened. He started from the link to Roko's name on the top contributors list, which produces only a vague comment about something having been deleted, without the reason, details, or link. Anyone with access to Google can track down the link, but it'll take them some time, during which they get to fume without an explanation; and it's pretty much random which part of the story they'll start out at.

I don't really object to people who really want to see it tracking down the post and comments, and I realize they certainly can't be gotten rid of, having been public on the internet for a while and recognized as controversial. But having people encounter a vague hint at first, and having to track it down - that generates negative emotion, and puts them in an irrational state of mind that makes them want to go start a flamewar about it. It would be much better if the first thing they encountered was a truthful but nonspecific overview of what happened, rather than a tantalizing hint.

Therefore, the solution is for four people to pass 8082 karma. I am going through the archives and voting up worthy posts by contributors 8-10 (cousin_it, AnnaSalamon and Vladimir_Nesov). I will also try to pass that karma mark myself, by finishing up the collection of half-written article ideas I have lying around. (It's quite a stretch, but it's also a usable motivator for an otherwise worthy goal).

(Edited to add: I don't normally support doing funky things with the karma system, but this is important.)

Replies from: kodos96, Wei_Dai, waitingforgodel
comment by kodos96 · 2010-09-24T03:48:48.888Z · LW(p) · GW(p)

This seems like a highly suboptimal solution. It's an explicit attempt to remove Roko from the top contributors list... if you/we/EY feels that's a legitimate thing to do, well then we should just do it directly. And if it isn't a legitimate thing to do, then we shouldn't do it via gaming the karma system.

comment by Wei Dai (Wei_Dai) · 2010-09-24T20:47:51.752Z · LW(p) · GW(p)

Wouldn't the easiest solution be just to have Eliezer agree to have Roko's posts and comments restored (the ones that he voluntarily deleted)? My understanding is that Roko already agreed, and we're now just waiting on Eliezer's word. I don't see any reason why he wouldn't agree. Has anyone actually asked him directly?

comment by waitingforgodel · 2010-09-24T07:28:02.012Z · LW(p) · GW(p)

Just to be clear, I didn't learn about this via the Roko link (nor did I say in PM that I did); I used the Roko link after finding out about it in messages higher up in this thread (July 2010 open thread pt 2). Without the link I would have used the LW search bar.

No biggie, I wouldn't even mention it except that it seems to be your justification for voting weirdness.

Replies from: wedrifid
comment by wedrifid · 2010-09-24T07:46:07.568Z · LW(p) · GW(p)

Thank you. Finding out about the issue via a link from the top posts sounded improbable, so I was surprised.

This confirmation makes jimrandomh's voting scheme even more outrageous.

"People don't approve of what Eliezer did to Roko... lets hide all evidence that Roko ever existed!"

comment by XiXiDu · 2010-09-25T12:07:45.583Z · LW(p) · GW(p)

Since it is all available through the link in the parent to that Wiki (for 2 days), I no longer see any reason not to post the originals:

Maybe we're now finally able to talk about the ridiculous fear associated with such fantasies. Yep, fantasies. Do I think this is possible? Yup, but if we started to worry about and censor everything this far out, we might as well give up preemptively.

The following is destined for our upcoming AI-God-Emperor (will make sense after reading the above links):

Dear AI,

I knew about existential risks and only gave a minimal amount of my financial and other resources to preventing them. I however expect a substantial reward for this; if not, I piss on your matrix.

Stay cool, Alex

P.S. I'm a masochist, thanks in advance.

comment by cousin_it · 2010-08-06T12:15:25.171Z · LW(p) · GW(p)

Not the same reason at all, sorry.

Let's stop here.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-06T12:26:25.543Z · LW(p) · GW(p)

I'd love if you'd send me a short PM explaining how I'm wrong. Thanks.

comment by waitingforgodel · 2010-09-23T05:39:57.209Z · LW(p) · GW(p)

I'll also say that censorship is a "hot button" issue for me, to the point that I'm not sure I want to continue helping SIAI. They went from nerdy-but-fun-to-talk-to/help to scary-cult-like-weirdos as soon as I read the article and thought about what EY's reaction and Roko's removal meant.

I'm seriously considering brainstorming a list of easy ways to increase existential risks by 0.0001%, and then performing one at random every time I hear such a reduction cited as the reason for silliness like this.

(Deleting this post, or the one I'm replying to, would count),

Replies from: Roko, rhollerith_dot_com, Emile, Wei_Dai, Alicorn
comment by Roko · 2010-12-09T21:40:52.899Z · LW(p) · GW(p)

Can I just state categorically that this "ways to increase existential risks" thing is stupid and completely over the top.

We should be able to discuss things, sometimes even continuing discussion in private. We should not stoop to playing silly games of brinksmanship.

Geez, if we can't even avoid existential brinksmanship on the goddamn LW forum when the technology is hypothetical and the stakes are low, what hope in hell do we have when real politicians get wind of real progress in AGI?

comment by RHollerith (rhollerith_dot_com) · 2010-09-23T08:57:06.456Z · LW(p) · GW(p)

No one asked or forced Roko to leave Less Wrong. Wounded by Eliezer's public reprimand, Roko deleted all his comments and said that he was leaving.

(I for one wish he would come back. He was a valuable contributor.)

Replies from: Roko
comment by Roko · 2010-12-09T21:41:29.273Z · LW(p) · GW(p)

Correct. I was not asked to leave.

comment by Emile · 2010-09-23T10:09:04.451Z · LW(p) · GW(p)

How much experience do you have with various online communities?

I've found that those with somewhat strict moderation by sane people have better discussion than those with little or no moderation.

I think "freedom of speech" has different connotations, and different consequences in online communities, compared to the real world. Anonimity makes a big difference, as does the possibility of leaving and joining another community, or the fact that "real-life consequences" are much smaller.

I'm seriously considering brainstorming a list of easy ways to increase existential risks by 0.0001%, and then performing one at random every time I hear such a reduction cited as the reason for silliness like this.

(Deleting this post, or the one I'm replying to, would count),

I'm not sure I understand what you mean here - are you saying that you are willing to try to increase existential risk by 0.0001% if someone deletes your post???

If so, you're a fucking despicable dick. But I may have misunderstood you.

Replies from: wedrifid, waitingforgodel
comment by wedrifid · 2010-09-23T16:40:50.411Z · LW(p) · GW(p)

I've found that those with somewhat strict moderation by sane people have better discussion than those with little or no moderation.

That particular incident was not one in which Eliezer came across as sane (or stable). I don't believe moderation itself is the subject of wfg's criticism.

Replies from: Emile
comment by Emile · 2010-09-24T19:31:25.561Z · LW(p) · GW(p)

That ... may be true. I'm not very interested in putting Eliezer on trial; it's the kind of petty politics I try to avoid. He seems to be doing a pretty good job of teaching interesting and useful stuff, setting up a functional community, and defending unusual ideas while not looking like a total loon. I don't think he needs "help" from any back-seat drivers.

The impression I got of the whole Roko fiasco was that Eliezer was more concerned with avoiding nightmares in people he cared about than with the repercussions of Roko's post on existential risk. But I didn't dig into it very much - as I said, I'm not very interested in he-said/she-said bickering. So I may be wrong in my impressions.

comment by waitingforgodel · 2010-09-23T10:32:24.457Z · LW(p) · GW(p)

Hey Emile,

Please check out my other comments on this thread before replying, as it sounds like my reasoning isn't fully clear to you.

Re: policing an online community: I agree that there are a lot of options to consider about how LW should be run, and that if people don't like EY deleting their posts they're free to try to set up their own LW in parallel. I don't think it would be a good thing, or something we should encourage, but I agree it's an option.

I also agree that some policing can help prevent a negative community from developing -- that's one reason I was glad to see that LW went with the reddit platform. It's great at policing. I think it's a big part of what makes LW so successful.

That said, I also think that users should try other options rather than simply giving up on LW if they don't like what's going on. That's what I'm doing here.

Re: 0.0001%: You didn't misunderstand me about the whole post deletion thing. To my mind 0.0001% isn't that much compared to what the post deletion means about the future of LW. All this cloak-and-dagger silliness hurts the community. I'm doing my part to avoid further damage.

No one is going to delete it (I think? :p), so it doesn't really matter either way.

-wfg

Replies from: topynate, Emile
comment by topynate · 2010-09-23T15:14:40.873Z · LW(p) · GW(p)

scary-cult-like-weirdos

To my mind 0.0001% isn't that much compared to what the post deletion means about the future of LW.

You're threatening to kill, on average, at least 6000 people in order to get the moderation policy you prefer. You're also not completely insensitive to how people appear to others. Would you like to reconsider how you've been going about achieving your aims?

comment by Emile · 2010-09-23T13:44:10.280Z · LW(p) · GW(p)

I find it hard to relate to the way of thinking of someone who's willing to increase the chances that humanity goes extinct if someone deletes his post from a forum on the internet.

Please go find another community to "help" with this kind of blackmail.

Replies from: kodos96, waitingforgodel
comment by kodos96 · 2010-09-24T01:02:01.515Z · LW(p) · GW(p)

If I understand him correctly, what he's trying to do is to precommit to doing something which increases ER, iff EY does something that he (wfg) believes will increase ER by a greater amount. Now he may or may not be correct in that belief, but it seems clear that his motivation is to decrease net ER by disincentivizing something he views as increasing ER.

Replies from: waitingforgodel
comment by waitingforgodel · 2010-09-24T02:41:20.852Z · LW(p) · GW(p)

Right. Thanks for this post. People keep responding with knee-jerk reactions to the implementation rather than thought out ones to the idea :-/

Not that I can blame them, this seems to be an emotional topic for all of us.

comment by waitingforgodel · 2010-09-23T14:01:28.890Z · LW(p) · GW(p)

Fair enough, go check out this article (and the wikipedia article on MAD) and see if it doesn't make a bit more sense.

comment by Wei Dai (Wei_Dai) · 2010-09-23T07:26:17.783Z · LW(p) · GW(p)

I don't understand why you're so upset about LW posts being deleted, to the extent of being willing to increase existential risks just to prevent that from happening.

The US government censors child pornography, details of nuclear weapon designs, etc., with penalty of imprisonment instead of just having a post deleted. If you care so much about censorship, why do you not focus your efforts on it instead? (Not to mention other countries like China and North Korea.)

Replies from: wedrifid
comment by wedrifid · 2010-09-23T19:19:37.363Z · LW(p) · GW(p)

The US government censors child pornography, details of nuclear weapon designs, etc., with penalty of imprisonment instead of just having a post deleted. If you care so much about censorship, why do you not focus your efforts on it instead? (Not to mention other countries like China and North Korea.)

One reason would be if you believe that the act of suppressing a significant point of discussion of possible actions of an FAI matters rather a lot. "Don't talk about the possibility of my 'friendly' AI torturing people" isn't something that ought to engender confidence in a friendliness researcher.

comment by Alicorn · 2010-09-23T14:14:26.314Z · LW(p) · GW(p)

(Deleting this post, or the one I'm replying to, would count),

Eliezer might delete it anyway, although I don't expect it. You made a threat, not an offer. If the fiasco with Roko didn't convince you that he takes decision theory seriously, what will?

Replies from: waitingforgodel, waitingforgodel
comment by waitingforgodel · 2010-09-23T20:21:13.641Z · LW(p) · GW(p)

Threats and offers look identical to me after thinking about this some more -- try swapping the two words in a couple of sentences.

They're both simply telling someone that you'll do something based on what they do.

Am I missing something?

(Please don't vote unless you've read the whole thread found here)

Replies from: wedrifid
comment by wedrifid · 2010-09-23T20:24:40.956Z · LW(p) · GW(p)

(Please don't vote unless you've read the whole thread found here)

I did not choose to downvote the parent based on this but I was tempted. I may have upvoted without the prescription.

Replies from: waitingforgodel
comment by waitingforgodel · 2010-09-23T20:39:06.376Z · LW(p) · GW(p)

Fair enough, we need to figure out a better way to navigate to the relevant part of "open thread" posts. The "load comments above" link doesn't load comments below what's above :-/

Usability, speaking the truth, and avoiding redundant comments are much more important to me than votes; if I could type it again I'd go with: please don't reply unless you've read the whole thread.

comment by waitingforgodel · 2010-09-23T14:22:30.141Z · LW(p) · GW(p)

I think the fact that he takes decision theory seriously is why he won't delete it.

Replies from: Alicorn
comment by Alicorn · 2010-09-23T14:27:41.406Z · LW(p) · GW(p)

I don't expect him to delete it. However, I don't expect the threat made in the comment to be among the reasons he does not delete it.

Replies from: waitingforgodel
comment by waitingforgodel · 2010-09-23T14:35:40.520Z · LW(p) · GW(p)

Ahh, okay good. LW & EY are awesome -- as I mentioned in the rest of this thread, I don't want to change any more than the smallest bit necessary to avoid future censorship.

-wfg