Posts

The Market for Lemons: Quality Uncertainty on Less Wrong 2015-11-18T22:06:56.417Z

Comments

Comment by signal on Open thread, Oct. 19 - Oct. 25, 2015 · 2016-01-04T20:59:22.967Z · LW · GW

Did anything come from this? Would love to see that, too!

Comment by signal on Open thread, Nov. 30 - Dec. 06, 2015 · 2015-12-02T09:05:16.797Z · LW · GW

Can you point out your 3-5 favorite books/frameworks?

Comment by signal on Open thread, Nov. 30 - Dec. 06, 2015 · 2015-12-02T09:04:16.371Z · LW · GW

Thanks Lumifer. The Prince is worth reading. However, transferring his insights about princedoms to designing and spreading memeplexes in the 21st century does have its limits. Any more suggestions?

Comment by signal on Open thread, Nov. 30 - Dec. 06, 2015 · 2015-11-30T17:55:06.032Z · LW · GW

Can somebody point out textbooks or other sources that lead to an increased understanding of how to influence more than one person (the books I know address only 1:1 interactions or presentations)? There are books on how to run successful businesses, etc., but is there overarching knowledge that covers successful states, parties, NGOs, religions, and other social groups (this would also be of interest for how best to spread rationality)? In the Yvain framework: taking Moloch as a given, what are good resources on how to optimally influence Moloch, with its many self-interested agents and, for example, its inherent game-theoretic problems, as long as AI is not up to the task?

Comment by signal on The Market for Lemons: Quality Uncertainty on Less Wrong · 2015-11-19T17:39:25.898Z · LW · GW

Heed the typical mind fallacy. Other people are not you. What you find interesting is not necessarily what others find interesting. Your dilemmas or existential issues are not their dilemmas or existential issues. For example, I don't find the question of "shall we enforce a police state" interesting. The answer is "No", case closed, we're done. Notice that I'm speaking about myself -- you, being a different person, might well be highly interested in extended discussion of the topic.

I strongly disagree and think it is unrelated to the typical mind fallacy. OK, the word "interesting" was too imprecise. However, the argument deserves a deeper look in my opinion. Let me rephrase: "Discussions of AI sometimes end where they have serious implications for real life." Especially if you do not enjoy entertaining the thought of a police state and increased surveillance, you should be worried if respected rational essayists come to conclusions that include them as an option. Closing your case when confronted with possible results of a chain of argumentation won't make them disappear. And a police state, to stay with the example, is either an issue for almost everybody (if it comes into existence) or for nobody. Hence, this is detached from, and not about, my personal values.

Comment by signal on The Market for Lemons: Quality Uncertainty on Less Wrong · 2015-11-19T16:05:43.961Z · LW · GW

I conclude from the discussion that the term "rich" is too vague. Here is my definition: I would be surprised to find many LWers who are not in the top percentile of the Global Richlist and who could not afford cryonics if they made it their life's goal.

Comment by signal on The Market for Lemons: Quality Uncertainty on Less Wrong · 2015-11-19T15:50:49.698Z · LW · GW

I meant especially in individual members, as described in the point on "priorities." Somewhat along the lines that topics on LW are not a representative sample of which topics and conclusions are relevant to the individual. In other words: the imaginary guide "how to be rational" that I would write for my children differs greatly from the guide that LW provides.

Comment by signal on The Market for Lemons: Quality Uncertainty on Less Wrong · 2015-11-19T14:55:57.932Z · LW · GW

Definitely. I am slightly irritated that I missed that. The line spacing and paragraph spacing still seem a bit off compared to other articles. Is there anything I am doing wrong?

Comment by signal on Marketing Rationality · 2015-11-19T14:26:28.233Z · LW · GW

They are, but I still would not wear them. (And no rings for men unless you are married or have been a champion in basketball or wrestling.)

Let's differentiate two cases in whom we may want to address. 1) Aspiring rationalists: that's the easy case. Take an awesome shirt, sneak in "LW" or "pi" somewhere, and try to fly below the radar of anybody who would not like it. A Möbius strip might do the same; a drawing of a cat in a box may work but may also be misunderstood. 2) The not-yet-aspiring rationalist: I assume this is the main target group of InIns. I consider this way more difficult, because you have to keep the weirdness points below the gain, you have to convey interest in a difficult-to-grasp concept on a small area, and nerds are still less "cool" than sex, drugs, and sports. A SpaceX T-shirt may do the job (rockets are cool), but LW concepts? I haven't seen a convincing solution, but I will ask around. Until then, the best solution to me seems to be to dress as your tribe expects and to find other ways of spreading the knowledge.

Comment by signal on The Market for Lemons: Quality Uncertainty on Less Wrong · 2015-11-19T13:52:24.485Z · LW · GW

Fair enough. Maybe I should take Elon Musk out; with WBW he has found a way to push the value of advertising beyond the cost of his time spent. If Zuckerberg posts too, I will be fully falsified. To compensate, I introduce a typical person X whose personal cost-benefit analysis of posting an article is negative. I still argue that this is the standard case.

Comment by signal on The Market for Lemons: Quality Uncertainty on Less Wrong · 2015-11-19T13:46:53.026Z · LW · GW

I do not think that a perfectly rational world exists. My next article will emphasize that. I do think that there is a rational attitude which is on average more consistent than the average one presented on LW, and one should strive for it. I did not get the point of your presupposition, though it seems obvious to you: that LWers are not more rational?

Comment by signal on The Market for Lemons: Quality Uncertainty on Less Wrong · 2015-11-19T13:36:41.181Z · LW · GW

I am not sure of the point here. I read it as "I can imagine a perfect world and LW is not it". Well, duh.

No. I think all the points indicate that a perfect world is difficult to achieve, as rationalist forums are in part self-defeating (though maybe not impossible; most people would also not have expected Wikipedia to work out as well as it does). At the moment, Less Wrong may be the worst form of forum, except for all the others. My point in other words: I was fascinated by LW and thought it possible to make great leaps towards some form of truth. I now consider that unwarranted exuberance. I have met a few people whom I highly respect and whom I consider aspiring rationalists. They were not interested in forums, congresses, etc. I now suspect that many of our fellow rationalists are, and benefit from being, somewhat of lone wolves, and that the ones we see are curious exceptions.

There are also a lot of words (like "wrong") that the OP knows the meaning of, but I do not. For example, I have no idea what are "wrong opinions" which, apparently, rational discussions have a tendency to support. Or what is that "high relevancy" of missing articles -- relevancy to whom?

High relevancy to the reader who is an aspiring rationalist. Discussions of AI mostly end where they become interesting. Assuming that AI is an existential risk, shall we enforce a police state? Shall we invest in surveillance? Some may even suggest seeking a Terminator-like solution of trying to stop scientific research (which I did not say is feasible). Those are the kinds of questions that inevitably come up, and I have seen them discussed nowhere but in the last chapter of Superintelligence, in about three sentences, and somewhat in SSC's Moloch (maybe you can find more sources, but it's surely not mainstream). In summary: if Musk's $10M constitutes a significant share of humanity's effort to reduce the risk of AI, some may view that as evidence of progress and some as evidence for the necessity of other, maybe more radical, approaches. The same goes for EA: if you truly think there is an animal Holocaust (which Singer does), the answer may not be donating $50 to some animal charity. Wrong opinions: if, as just argued, not all the relevant evidence and conclusions are discussed, it follows that opinions are more likely to be less than perfect. There are some examples in the article.

And, um, do you believe that your postings will be free from that laundry list of misfeatures you catalogued?

No. Nash probably wouldn't cooperate, even though he understood game theory, and I wouldn't blame him. I may simply stop posting (which sounds like a cop-out or a threat, but I just see it as one logical conclusion).

Comment by signal on The Market for Lemons: Quality Uncertainty on Less Wrong · 2015-11-18T23:42:17.800Z · LW · GW

I do agree. The point was originally "selfishness or effort" which would have avoided the misunderstanding. I think for Musk, the competitive aspect is definitely less important than the effort aspect (he is surely one of those persons for whom "the value of time approaches infinity"). However, I doubt that Musk would give away patents if he didn't see an advantage in doing that.

Comment by signal on Open thread, Nov. 16 - Nov. 22, 2015 · 2015-11-18T22:19:59.198Z · LW · GW

Thanks. It is now online in the discussion section: "The Market for Lemons."

Comment by signal on Open thread, Nov. 16 - Nov. 22, 2015 · 2015-11-18T20:29:22.976Z · LW · GW

As soon as I have two Karma points, I will post a 2,000-word article on bias in most LW posts (on which I would love your feedback), with probably more to follow. However, I don't want to search for some more random rationality quotes to meet that requirement. Note to the administrators: either you are doing a fabulous job at preventing multiple accounts, or registration is currently not working (I tried multiple devices, email addresses, and other measures).

Comment by signal on Rationality Quotes Thread October 2015 · 2015-11-18T19:16:04.390Z · LW · GW

Jane Goodall reports some interesting observations on infanticide among chimpanzees in her book "Through a Window." While chimpanzees will violently attack females that are strangers to a group, their infants will die only in rare instances, as casualties, and will not be directly attacked. Infanticide within a community has been observed in only a few cases, all perpetrated by the same female and her daughter. However, Goodall concluded from their behavior that their motive lay solely in the meat of the hunted infants.

Comment by signal on Rationality Quotes Thread October 2015 · 2015-11-18T18:46:46.589Z · LW · GW

The Hollywood version of that is quite popular, though it sounds less rational.

Losers always whine about their best, winners go home and f* the prom queen. --Sean Connery, The Rock

Comment by signal on Open thread, Nov. 16 - Nov. 22, 2015 · 2015-11-18T16:40:18.733Z · LW · GW

LW does seem to be dying and mainly useful for its old content. Any suggestions for a LW 2.0?

Comment by signal on Open thread, Nov. 16 - Nov. 22, 2015 · 2015-11-18T16:08:41.005Z · LW · GW

You should. Just started playing with those gums.

Comment by signal on [Link] Lifehack Article Promoting LessWrong, Rationality Dojo, and Rationality: From AI to Zombies · 2015-11-18T16:05:13.335Z · LW · GW

That may be the case. But even Lukeprog preferred to be given feedback in a nice wrapping, because after all we are still primates and will appreciate it more.

Comment by signal on Marketing Rationality · 2015-11-18T15:40:50.259Z · LW · GW

Somehow the notion of "picking the low-hanging fruit" keeps coming up in this context. This prejudges the issue: one would have a hard time disagreeing with such an action. Intentional Insights marketing is also discussed on Facebook. I definitely second the opinion stated there that the suggested T-shirts and rings are counterproductive and, honestly, ridiculous. Judging the articles seems more difficult. If the monthly newsletter generates significant readership, it might be useful in the future. However, the LW and rationality Facebook groups already have their fair share of borderline self-help questions. I would not choose to push further in this direction.

Comment by signal on Rationality Quotes Thread November 2015 · 2015-11-18T14:23:57.427Z · LW · GW

A poor person comes to LW and wants to post a laborious article. LW judges them a high risk, and either declines to post the article or offers it at a high rate of 2 Karma points. PS: Sorry to misuse your comment; it was the most recent one. ;)