Marketing Rationality

post by Viliam · 2015-11-18T13:43:02.802Z · LW · GW · Legacy · 221 comments

What is your opinion on rationality-promoting articles by Gleb Tsipursky / Intentional Insights? Here is what I think:

Trying to teach someone to think rationally is a long process -- maybe even impossible for some people. It means explaining the many biases that come naturally to people, and demonstrating the futility of "mysterious answers" on a gut level; meanwhile the student needs the desire to become stronger, the humility to admit "I don't know" together with the courage to give a probabilistic answer anyway, the resistance to the temptation of using the new skills to cleverly shoot themselves in the foot, and a focus on the "nameless virtue" instead of signalling (even towards fellow rationalists). It is an LW lesson that being a half-rationalist can hurt you, and being a 3/4-rationalist can fuck you up horribly. And online clickbait articles seem like one of the worst possible media for teaching rationality. (The only worse choice that comes to my mind would be Twitter.)

On the other hand, imagine that you have a magical button, and if you press it, all not-sufficiently-correct-by-LW-standards mentions of rationality (or logic, or science) would disappear from the world. Not to be replaced by something more lesswrongish, but simply by whatever else usually appears in the given medium. Would pressing that button make the world a saner place? What would have happened if someone had pressed that button a hundred years ago? In other words, I'm trying to avoid the "nirvana fallacy" -- I am not asking whether those articles are the perfect vehicle for x-rationality, but rather whether they are a net benefit or a net harm. Because if they are a net benefit, then it's better to have them, isn't it?

Assuming that the articles are not merely ignored (where "ignoring" includes "thousands of people with microscopic attention spans read them and then forget them immediately"), the obvious failure mode is people getting wrong ideas, or adopting "rationality" as attire. Is that really so bad? Don't people already have absurdly wrong ideas about rationality? Remember all the "straw Vulcans" produced by the movie industry: Terminator, The Big Bang Theory... Rationality is already associated with being a sociopathic villain, or a pathetic nerd. This is where we are now; and the "rationality" clickbait, however sketchy, cannot make it worse. Actually, it can make a few people interested in learning more. At least it can show people that there is more than one possible meaning of the word.

To me it seems that Gleb is picking the low-hanging fruit that most rationalists wouldn't even touch for... let's admit it... status reasons. He talks to the outgroup, using the language of the outgroup. But if we look at the larger picture, that specific outgroup (people who procrastinate by reading clickbaity self-improvement articles) actually isn't that different from us. They may actually be our nearest neighbors in human intellectual space. So what some of us (including myself) feel here is the uncanny valley: looking at someone so similar to ourselves, yet so dramatically different in a few small details that matter strongly to us, feels creepy.

Yes, this whole idea of marketing rationality feels wrong. Marketing is almost the very opposite of epistemic rationality ("the bottom line" et cetera). On the other hand, any attempt to bring rationality to the masses will inevitably introduce some distortion, which can hopefully be fixed later, once we have their attention. So why not accept the imperfection of the world, and just do what we can?

As a sidenote, I don't believe we are at risk of an "Eternal September" on LessWrong (beyond what we already have). More people interested in rationality (or "rationality") will also mean more places to debate it; not everyone will come here. People have their own blogs, social network accounts, et cetera. If rationality becomes the cool thing, they will prefer to debate it with their friends.

EDIT: See this comment for Gleb's description of his goals.


comment by query · 2015-11-19T20:14:38.890Z · LW(p) · GW(p)

I have not a clue whether this sort of marketing is a good idea. Let me be clear what I mean: I think there's maybe a 30-40% chance that Gleb is having a net positive impact through these outreach efforts. I also think there's maybe a 10-20% chance that he's having a horrific long-term negative impact through these outreach efforts. Thus the whole thing makes me uncomfortable.

So here are some of the concerns I see; I've gone to some effort to be fair to Gleb, and not to assume anything about his thoughts or motivations:

  • By presenting these ideas in weakened forms (either by giving short or invalid argumentation, or putting it in venues or contexts with negative associations), he may be memetically immunizing people against the stronger forms of the ideas.
  • By teaching people using arguments from authority, he may be worsening the primary "sanity waterline" issues rather than improving them. The articles, materials, and comments I've seen make heavy use of language like "science-based", "research-based" and "expert". The people reading these articles in general have little or no skill at evaluating such claims, so that they effectively become arguments from authority. By rhetorically convincing them to adopt the techniques or thoughts, he's spreading quite possibly helpful ideas, but reinforcing bad habits around accepting ideas.
  • Gleb's writing style strikes me as very inauthentic-feeling. Let me be clear: I don't mean to accuse him of anything negative, but I intuitively feel a very negative reaction to his writing. It triggers emotional signals in me of attempted deception and rhetorical tricks (whether or not this is his intent!). His writing risks associating "rationality" with such signals (should other people share my reactions), again causing immunization, or even catalyzing opposition.

An illustration of the nightmare scenario from such an outreach effort would be that, 3 years from now when I attempt to talk to someone about biases, they respond by saying "Oh god don't give me that '6 weird tips' bullshit about 'rational thinking', and spare me your godawful rhetoric, gtfo."

Like I said at the start, I don't know which way it swings, but those are my thoughts and concerns. I imagine they're not new concerns to Gleb. I still have these concerns after reading all of the mitigating argumentation he has offered so far, and I'm not sure of a good way to collect evidence about this besides running absurdly large long-term "consumer" studies.

I do imagine he plans to continue his efforts, and thus we'll find out eventually how this turns out.

Replies from: Gleb_Tsipursky, Tem42, Evan_Gaensbauer, None
comment by Gleb_Tsipursky · 2015-11-20T04:56:52.198Z · LW(p) · GW(p)

I really appreciate you sharing your concerns. It helps me and others involved in the project learn more about what to avoid going forward and how to optimize our methods. Thank you for laying them out so clearly! I think this comment will be something that I will come back to in the future as I and others create content.

I want to see if I can address some of the concerns you expressed.

In my writing for venues like Lifehack, I do not speak of rationality explicitly as something we are promoting. As in this post, I talk about growing mentally stronger or being intentional - euphemisms that do not associate rationality as such with what we're doing. I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point of watering down rationality.

I would question the point about arguing from authority. One of the goals of Intentional Insights is to convey what science-based itself means. For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we are all suffering from the curse of knowledge on this point. How can we expect to teach people who do not know what science-based means without teaching it to them in the first place? Do you remember when you were at a stage when you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? This is what I'm doing in that article above. Hope this helps address some of the concerns about arguing from authority.

I hear you about the inauthentic-feeling writing style. As I told Lumifer in my comment below, I cringed at that when I was learning how to write that way, too. You can't believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It's very ughy. This writing style is much more natural for me. So is this.

However, this inauthentic-feeling writing style is the writing style needed to get into Lifehack. I have been trying to change my writing style to get into venues like that for the last year and a half, and only succeeded in changing my writing style in the last couple of months sufficiently to be published in Lifehack. Unfortunately, when trying to spread good ideas to the kind of people who read Lifehack, it's necessary to use the language and genre and format that they want to read, and that the editors publish. Believe me, I also had my struggles with editors there who cut out more complex points and links to any scientific papers as too complex for their audience.

This gets at the broader point of who reads these articles. I want to quote a comment that Tem42 made in response to Lumifer:

Unless you mean simply the site that it is posted on smells of snake oil. In that case I agree, but at the same time, so what? The people that read articles on that site don't smell snake oil, whether they should or not. If the site provides its own filter for its audience, that only makes it easier for us to present more highly targeted cognitive altruism.

Indeed, the site itself provides a filter. The people who read that site are not like you and me. Don't fall for the typical mind fallacy here. They have complete cognitive ease with this content. They like to read it. They like to share it. This is the stuff they go for. My articles are meant to go higher than their average, such as this or this, conveying both research-based tactics applicable to daily life and frameworks of thinking conducive to moving toward rationality (without using the word, as I mentioned above). Hope this helps address the concerns about the writing style and the immunization of people to good ideas, since the readers of this content are specifically looking for this kind of writing style.

Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?

Replies from: John_Maxwell_IV, query
comment by John_Maxwell (John_Maxwell_IV) · 2015-11-24T05:41:35.981Z · LW(p) · GW(p)

One idea is to try to teach your audience about overconfidence first, e.g. the way this game does with the calibration questions up front. See also.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-24T22:07:11.452Z · LW(p) · GW(p)

Nice idea! Thanks for the suggestion. Maybe also a Caplan Test.

Replies from: PipFoweraker
comment by PipFoweraker · 2015-11-26T22:17:00.354Z · LW(p) · GW(p)

I'll second the suggestion of introducing people to overconfidence early on, because (hopefully) it leads to a more questioning mindset.

I would note that the calibration in the otherwise-awesome Adventures in Cognitive Biases is heavily geared towards a particular geographic demographic, and several of my peers that I've introduced it to were a little put off by it, so consider encouraging them to stick through the calibration into the meatier subject matter of the Adventure itself.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-27T04:24:22.568Z · LW(p) · GW(p)

Thanks!

comment by query · 2015-11-21T05:43:26.833Z · LW(p) · GW(p)

Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?

Effectively no. I understand that you're aware of these risks and are able to list mitigating arguments, but the weight of those arguments does not resolve my worries. The things you've just said aren't different in gestalt from what I've read from you.

To be potentially more helpful, here's a few ways the arguments you just made fall flat for me:

I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point of watering down rationality.

Connectivity to the rationalist movement or "rationality" keyword isn't necessary to immunize people against the ideas. You're right that if you literally never use the word "bias" then it's unlikely my nightmare imaginary conversational partner will have a strong triggered response against the word "bias", but if they respond the same way to the phrase "thinking errors" or realize at some point that's the concept I'm talking about, it's the same pitfall. And in terms of catalyzing opposition, there is enough connectivity for motivated antagonists to make such connections and use every deviation from perfection as ammunition against even fully correct forms of good ideas.

For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we are all suffering from the curse of knowledge on this point. How can we expect to teach people who do not know what science-based means without teaching it to them in the first place? Do you remember when you were at a stage when you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? This is what I'm doing in that article above. Hope this helps address some of the concerns about arguing from authority.

I can't find any discussion in the linked article about why research is a key way of validating truth claims; did you link the correct article? I also don't know if I understand what you're trying to say; to reflect back, are you saying something like "People first need to be convinced that scientific studies are of value, before we can teach them why scientific studies are of value"? I ... don't know about that, but I won't critique that position here since I may not be understanding.

(...) Hope this helps address the concerns about the writing style and the immunization of people to good ideas, since the readers of this content are specifically looking for this kind of writing style.

You seem to be saying that since the writing is of the form needed to get on Lifehack, and since in fact people are reading it on Lifehack, that they will then not suffer from any memetic immunization via the ideas. First, not all immunization is via negative reactions; many people think science is great, but have no idea how to do science. Such people can be in a sense immunized from learning to understand the process; their curiosity is already sated, and their decisions made. Second, as someone mentioned somewhere else on this comment stream, it's not obvious that the Lifehack readers who end up looking at your article will end up liking or agreeing with your article.

You're clearly getting some engagement, which is suggestive of positive responses, but what if the distribution of response is bimodal, with some readers liking it a little bit and some readers absolutely loathing it to the point of sharing their disgust with friends? Google searches reveal negative reactions to your materials as well. The net impact is not obviously positive.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-23T00:42:29.027Z · LW(p) · GW(p)

use every deviation from perfection as ammunition against even fully correct forms of good ideas.

As a professional educator and communicator, I have deep visceral experience with how "fully correct forms of good ideas" are inherently incompatible with bridging the inferential distance between the ordinary Lifehack reader and the kind of thinking space found on Less Wrong. Believe me, I have tried to explain more complex ideas from rationality to students many times. Moreover, I have tried to get more complex articles into Lifehack and elsewhere many times. They have all been rejected.

This is why it's not possible for the lay audience to read scientific papers, or even the Sequences. This is why we have to digest the material for them, and present it in sugar-coated pills.

To be clear, I am not speaking of talking down to audiences. I like sugar-coated pills myself when I take medicine. To use an example related to knowledge, when I am offered information on a new subject, I first have to be motivated to want to engage with the topic, then learn the basic broad generalities, and only then go on to learn more complex things that represent the "fully correct forms of good ideas."

This is the way education works in general. This is especially the case for audiences who are not trapped in the classroom like my college students. They have to be motivated to invest their valuable time into learning about a new topic. They have to really feel it's worth their time and energy.

This is why the material has to be presented in an entertaining and engaging way, while also containing positive memes. Listicles are simply the most entertaining and engaging format that deals with the inferential gap at the same time. The listicles offer bread crumbs in the form of links for more interested readers to follow to get to the more complex things, and develop their knowledge over time, slowly bridging that inference gap. More on how we do this in my comment here.

I can't find any discussion in the linked article about why research is a key way of validating truth claims

The article doesn't discuss why research is a key way of validating truth claims. Instead of telling, it shows that research is a key way of validating truth claims. Here is a section from the article:

Smiling and other mood-lifting activities help improve willpower. In a recent study, scientists first drained the willpower of participants through having them resist temptation. Then, for one group, they took steps to lift people’s moods, such as giving them unexpected gifts or showing them a funny video. For another group, they just let them rest. Compared to people who just rested for a brief period, those whose moods were improved did significantly better in resisting temptation later! So next time you need to resist temptation, improve your mood!

This discussion of a study as validating the truth claim "improving mood = higher willpower" demonstrates - not tells, but shows - the value of scientific studies as a way to validate truth claims. This is the first point in the article. In the rest of the article, I link to studies or articles linking to studies without going over the study, since I have already discussed a study and demonstrated to Lifehack readers that studies are a powerful form of evidence for determining truth claims.

Now, I hear you when you say that while some people may benefit by trying to think like scientists more and consider how to study the world in order to validate claims, others will be simply content to rely on science as a source of truth. While I certainly prefer the former, I'll take the latter as well. How many global warming or evolution deniers are there, including among Lifehack readers? How many refuse to follow science-informed advice on not smoking and other matters? In general, if the lesson they learn is to follow the advice of scientists, instead of religious preachers or ideological politicians from any party, this will be a better outcome for the world, I would say.

what if the distribution of response is bimodal, with some readers liking it a little bit and some readers absolutely loathing it to the point of sharing their disgust with friends

I have an easy solution for that one. Lifehack editors carefully monitor the sentiment of social media reactions to their articles, and if there are negative reactions, they let writers know. They did not let me know of any significant negative reactions to my article above the baseline, which is an indication that the article has been highly positively received by their audience, and by those they share it with.

I think I presented plenty of information in my two long comments in response to your concerns. So what are your probabilities now for the worst-case scenario and horrific long-term impact? Still at 20%? Are your impressions of the net positive of my activities still at 30%? If so, what information would it take to shift your thinking?

EDIT: added link to my other comment

Replies from: query
comment by query · 2015-11-23T14:26:15.769Z · LW(p) · GW(p)

EDIT: On reflection, I want to tap out of this conversation. Thanks for the responses.

comment by Tem42 · 2015-11-22T03:40:16.193Z · LW(p) · GW(p)

I would argue that your first and third points are not very strong.

I think that it is not useful to protect an idea so that it is only presented in its 'cool' form. A lot of harm is done by people presenting good ideas badly, and we don't want to do any active harm, but at the same time, the more ways and the more times an idea is adequately expressed, the more likely it is that the idea will be remembered and understood.

People who are not used to thinking in strict terms are more likely to be receptive to intuition pumps and frequent reminders of the framework (evidence based everything). Getting people into the right mindset is half the battle.

I do however, agree with your second point, strongly. It is very hard to get people to actually care about evidence, and most people would not click through to formal studies; even fewer would read them. Those who would read them are probably motivated enough to Google for information themselves. But actually checking the evidence is so central to rationality that we should always remind new potential rationalists that claims are based on strong research. If clickbait sites are prone to edit out that sort of reference, we should link to articles that are more reader friendly but do cite (and if possible, link to) supporting studies. This sort of link is triple plus good: it means that the reader can see the idea in another writer's words; it introduces them to a new, less clickbaity site that is likely to be good for future reading; and, of course, it gives access to sources.

I think one function that future articles of this sort should focus on as a central goal is subtly introducing readers to more and better sites for more and better reading. However, the primary goal should remain an intro-level presentation of useful concepts, and intro level means, unfortunately, presenting these ideas in weakened forms.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-23T06:59:27.112Z · LW(p) · GW(p)

Agreed on presenting them via intro-level means, so that there is less of an inference gap.

Good idea on subtly introducing readers to more and better sites for further and better reading, updating on this to do so more often in my articles. Thanks!

comment by Evan_Gaensbauer · 2015-11-22T03:06:01.292Z · LW(p) · GW(p)

This comment captures my intuitions well. Thanks for writing this. It's weird for me, because when I wear my effective altruism hat, I think what Gleb is doing is great because marketing effective altruism seems like it would only drive more donations to effective charities, while not depriving them of money or hurting their reputations if people become indifferent to the Intentional Insights project. This seems to be the consensus reaction to Gleb's work on the Effective Altruism Forum. Of course, effective altruism is sometimes more concerned with only the object-level impact that's easy to measure, e.g., donations, rather than subtler effects down the pipe, like cumulatively changing how people think over the course of multiple years. Whether that's a good or ill effect is a judgment I'll leave for you.

On the other hand, when I put on my rationality community hat, I feel the same way about Gleb's work as you do. It's uncomfortable for me because I realize I have perhaps contradicting motivations in assessing Intentional Insights.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-22T17:34:04.514Z · LW(p) · GW(p)

An important way I think about my work in the rationality sphere is cognitive altruism.

In a way, it's no different from effective altruism. When promoting effective giving, I encourage people to think rationally about their giving. I pose to them the question of how (and whether) they currently think about their goals in giving, the impact of their giving, and the quality of the charities to which they give, encouraging them to use research-based evaluations from GiveWell, TLYCS, etc. The result is that they give to effective charities.

Similarly, I encourage people to think rationally about their life and goals in my promotion of rationality. The result is that they make better decisions about their lives and are more capable of meeting their goals, including being more long-term oriented and thus fighting the Moloch problem. For example, here is what one person got out of my book on finding meaning and purpose by orienting toward one's long-term goals. He is now dedicated to focusing his life on helping other people have a good life, in effect orienting toward altruism.

In both cases, I take the rational approach of using methods from content marketing that have been shown to work effectively in meeting the goals of spreading complex information to broad audiences. It's not different in principle.

I'm curious whether this information helps you update one way or another in your assessment of Intentional Insights.

comment by [deleted] · 2016-02-07T20:58:31.086Z · LW(p) · GW(p)

By teaching people using arguments from authority, he may be worsening the primary "sanity waterline" issues rather than improving them. The articles, materials, and comments I've seen make heavy use of language like "science-based", "research-based" and "expert". The people reading these articles in general have little or no skill at evaluating such claims, so that they effectively become arguments from authority. By rhetorically convincing them to adopt the techniques or thoughts, he's spreading quite possibly helpful ideas, but reinforcing bad habits around accepting ideas.

My immediate reaction was to disagree. I think most people don't listen to arguments from authority often enough; not too often. So I decided to search "arguments from authority" on LessWrong, and the first thing I came to was this article by Anna Salamon:

Another candidate practice is the practice of only passing on ideas one has oneself verified from empirical evidence (as in the ethic of traditional rationality, where arguments from authority are banned, and one attains virtue by checking everything for oneself). This practice sounds plausibly useful against group failure modes where bad ideas are kept in play, and passed on, in large part because so many others believe the idea (e.g. religious beliefs, or the persistence of Aristotelian physics in medieval scholasticism; this is the motivation for the scholarly norm of citing primary literature such as historical documents or original published experiments). But limiting individuals’ sharing to the (tiny) set of beliefs they can themselves check sounds extremely costly.

She then suggests separating out knowledge you have personally verified from arguments from authority knowledge to avoid groupthink, but this doesn't seem to me to be a viable method for the majority of people. I'm not sure it matters if non-experts engage in groupthink if they're following the views of experts who don't engage in groupthink.

Skimming the comments, I find that the response to AnnaSalamon's article was very positive, but the response to your opposite argument in this instance also seems to be very positive. In particular, AnnaSalamon argues that the share of knowledge which most people can or should personally verify is tiny relative to what they should learn. I agree with her view. While I recognize that the people responding to AnnaSalamon's comments are different from the ones responding to your comments, I fear that this may be a case of many members of LessWrong judging arguments based on presentation or circumstance rather than on their individual merits.

comment by jsteinhardt · 2015-11-20T17:15:28.992Z · LW(p) · GW(p)

My main update from this discussion has been a strong positive update about Gleb Tsipursky's character. I've been generally impressed by his ability to stay positive even in the face of criticism, and to continue seeking feedback for improving his approaches.

Replies from: Raelifin, Gleb_Tsipursky
comment by Raelifin · 2015-11-23T14:35:40.299Z · LW(p) · GW(p)

I just wanted to interject a comment here as someone who is friends with Gleb in meatspace (we're both organizers of the local meetup). In my experience Gleb is kinda spooky in the way he actually updates his behavior and thoughts in response to information. Like, if he is genuinely convinced that the person criticizing him is doing so out of a desire to help make the world a more-sane place (a desire he shares), then he'll treat them like a friend instead of a foe. If he thinks that writing at a lower level than most rationality content will help make the world a better place, he'll actually go and do it, even if it feels weird or unpleasant to him.

I'm probably biased in that he's my friend. He certainly struggles with it sometimes, and fails too. Critical scrutiny is important, and I'm really glad that Viliam made this thread, but it kinda breaks my heart that this spirit of actually taking ideas seriously has led to Gleb getting as much hate as it has. If he'd done the status-quo thing and stuck to approved-activities it would've been emotionally easier.

(And yes, Gleb, I know that we're not optimizing for warm-fuzzies. It still sucks sometimes.)

Anyway, I guess I just wanted to put in my two (biased) cents that Gleb's a really cool guy, and any appearance of a status-hungry manipulator is just because he's being agent-y towards good ends and willing to get his hands dirty along the way.

Replies from: Gleb_Tsipursky, ChristianKl, Lumifer, OrphanWilde
comment by Gleb_Tsipursky · 2015-11-25T02:17:34.876Z · LW(p) · GW(p)

Yeah, we're not optimizing for warm-fuzzies from Less Wrongers, but for a broad impact. Thanks for the sympathetic words, my friend.

This road of effective cognitive altruism is a hard one to travel, appreciated neither by the ones we are trying to reach, at least at first, nor by those among our peers whose ideas we are bringing to the masses.

Well, if my liver gets consumed daily by vultures, this is the road I've chosen. Glad to have you by my side, and hope this doesn't rebound on you much.

Replies from: Lumifer
comment by Lumifer · 2015-11-25T03:27:38.899Z · LW(p) · GW(p)

if my liver gets consumed daily by vultures, this is the road I've chosen

/facepalm

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-25T03:58:53.530Z · LW(p) · GW(p)

Thanks for proving my point above :-)

comment by ChristianKl · 2015-11-25T11:08:01.651Z · LW(p) · GW(p)

If he thinks that writing at a lower level than most rationality content will help make the world a better place, he'll actually go and do it, even if it feels weird or unpleasant to him.

I'm not sure whether that's a good idea. Writing that feels weird to the author is also going to transmit that vibe to the audience. We don't want rationality to be associated with feeling weird and unpleasant.

Replies from: Lumifer
comment by Lumifer · 2015-11-25T15:34:23.522Z · LW(p) · GW(p)

We don't want rationality to be associated with feeling weird

/thinks about empty train tracks and open barn doors... :-/

comment by Lumifer · 2015-11-23T16:15:06.879Z · LW(p) · GW(p)

any appearance of a status-hungry manipulator

I can't speak for other people, of course, but he never looked much like a manipulator. He looks like a guy who has no clue. He doesn't understand marketing (or propaganda), the fine-tuned practice of manipulating people's minds for fun and profit. He decided he needs to go downmarket to save the souls drowning in ignorance, but all he succeeded in doing -- and it's actually quite impressive, I don't think I'm capable of it -- is learning to write texts which cause visceral disgust.

Notice the terms in which people speak of his attempts. It's not "has a lot of rough edges", it's slime and spiders in human skin and "painful" and all that. Gleb's writing does reach System I, but the effect has the wrong sign.

Replies from: Raelifin
comment by Raelifin · 2015-11-23T17:39:41.812Z · LW(p) · GW(p)

Ah, perhaps I misunderstood the negative perception. It sounds like you see him as incompetent, and since he's working with a subject that you care about that registers as disgusting?

I can understand cringing at the content. Some of it registers that way to me, too. I think Gleb's admitted that he's still working to improve. I won't bother copy-pasting the argument that's been made elsewhere on the thread that the target audience has different tastes. It may be the case that InIn's content is garbage.

I guess I just wanted to step in and second jsteinhardt's comment that Gleb is very growth-oriented and positive, regardless of whether his writing is good enough.

Replies from: Lumifer
comment by Lumifer · 2015-11-23T17:51:26.673Z · LW(p) · GW(p)

It sounds like you see him as incompetent, and since he's working with a subject that you care about that registers as disgusting?

Not only that -- let me again stress the point that his texts cause the "Ewwww" reaction, not "Oh, this is dumb". The slime-and-snake-oil feeling would still be there even if he were writing in the same way about, say, the ballet in China.

As to "positive", IlyaShpitser mentioned chutzpah which I think is a better description :-/

comment by OrphanWilde · 2015-11-23T18:40:26.384Z · LW(p) · GW(p)

If he'd done the status-quo thing and stuck to approved-activities it would've been emotionally easier.

Yes. That's what the status quo is, and how it works. More, there are multiple levels of reasons for its existence, and the implicit suggestion that sticking to the status quo would be a tragedy neglects those reasons in favor of romantic notions of fixing the world.

Replies from: Tem42
comment by Tem42 · 2015-11-23T22:45:38.953Z · LW(p) · GW(p)

Call me a helpless romantic, but LessWrong is supposed to have a better status quo.

Replies from: Gleb_Tsipursky, OrphanWilde
comment by Gleb_Tsipursky · 2015-11-24T22:16:31.301Z · LW(p) · GW(p)

Yeah, totally agreed. The point of LessWrong, to me at least, is to improve the status quo, and keep improving.

comment by OrphanWilde · 2015-11-23T23:32:38.204Z · LW(p) · GW(p)

Deep wisdom.

comment by Gleb_Tsipursky · 2015-11-23T07:09:38.479Z · LW(p) · GW(p)

Thank you, I really appreciate it! I try to stay positive and seek optimizing opportunities :-)

comment by Elo · 2015-11-20T03:04:06.209Z · LW(p) · GW(p)

In writing this I considered the virtue of silence, and decided to voice something explicitly.

If rationality is ready for outreach, it should be done in as bulletproof a way as possible.

Before today I hadn't read deeply into the articles published by Gleb. Owing to this comment:

http://lesswrong.com/lw/mze/marketing_rationality/cwki

and

http://lesswrong.com/lw/mz4/link_lifehack_article_promoting_lesswrong/cw8n

I explicitly just read a handful of Gleb's articles. Prior to this I had just avoided getting in his way (virtue of silence - avoiding reading means avoiding being critical and avoiding judging someone who is trying to make progress).

These here (to be clear):

I don't like any of them. I find the quality of the rationality to be weak; I find the prose to be varying degrees of spider-creepy (although not as bad as OrphanWilde finds things). If I had a button that I could push to make these go away today I would. I would also be disheartened if Gleb stopped trying to do what he is trying to do. (this is a summary of my experiences with these articles. I can break them down but that would take longer to do)

I believe in spreading rationality; I just need the material to pass my bullshit meters, and preferably be right up there as Bulletproof if it can be done. Unfortunately, the process of generating material is literally hard work that I want to avoid doing (for the most part), and I expect other people also want to avoid doing hard work. (I sometimes do hard work, and sometimes find work-arounds for doing it anyway, but it's still hard. If rationality were easy/automatic, more would already be doing it.)

Hopefully this adds volume to the side of the discussion opposing Gleb's work so far, without sounding like an attack...

Something said earlier:

[an article Gleb wrote...] was shared over 2K times on social media, so it probably had views in the tens of thousands if not hundreds. Then, over 1K people visited the Intentional Insights website directly from the Lifehack website.

I wanted to add that this is a pretty low number for clickbait. Almost worth considering it "failed clickbait", to me.

Replies from: Viliam, Gleb_Tsipursky
comment by Viliam · 2015-11-20T14:34:10.363Z · LW(p) · GW(p)

If rationality is ready for outreach, it should be done in as bulletproof a way as possible.

Why?

Now that we know that Newtonian physics was wrong, and Einstein was right, would you support my project to build a time machine, travel to the past, and assassinate Newton? I mean, it would prevent incorrect physics from being spread around. It would make Einstein's theory more acceptable later; no one would criticize him for being different from Newton.

Okay, I don't really know how to build a time machine. Maybe we could just go burn some elementary-school textbooks, because they often contain oversimplified information. Sometimes with silly pictures!

Seems to me that I often see the sentiment that we should raise people from some imaginary level 1 directly to level 3, without going through level 2 first, because... well, because level 3 is better than level 2, obviously. And if those people perhaps can't make the jump, I guess they simply were not meant to be helped.

This is why I wrote about "the low-hanging fruit that most rationalists wouldn't even touch for... let's admit it... status reasons". We are (or imagine ourselves to be) at level 3, and all levels below us are equally deplorable. Helping someone else to get on level 3, that's a worthy endeavor. Helping people get from level 1 to level 2, that's just pathetic, because the whole level 2 is pathetic. Even if we could do that at a fraction of the cost.

Maybe that's true when building a superhuman artificial intelligence (better to get it a hundred years later than to get it wrong), but it doesn't apply to most areas of human life. Usually, an improvement is an improvement, even when it's not perfect.

Making all people rationalists could be totally awesome. But making many stupid people slightly less stupid, that's also useful.

Replies from: ChristianKl, Vaniver, OrphanWilde
comment by ChristianKl · 2015-11-20T20:31:57.736Z · LW(p) · GW(p)

Let's start with a false statement from one of Gleb's articles:

Intuitively, we feel our mind to be a cohesive whole, and perceive ourselves as intentional and rational thinkers. Yet cognitive science research shows that in reality, the intentional part of our mind is like a little rider on top of a huge elephant of emotions and intuitions. This is why researchers frequently divide our mental processes into two different systems of dealing with information, the intentional system and the autopilot system.

What's false? Researchers don't use the terms "intentional system" and "autopilot system".

Why is that a problem? Aren't the terms near enough to System 1 and System 2? The issue is that a person who's interested might want to read additional literature on the subject. The fact that the terms Gleb invented don't match the existing literature means it's harder for a person to go from reading Gleb's articles to reading higher-level material.

If the person digs deeper, they will sooner or later run into trouble. They might have a conversation with a genuine neuroscientist, talk about the "intentional system" and "autopilot system", and find that the neuroscientist hasn't heard the distinction made in those terms. It might take a while until they understand that deception happened, and it might hinder them from progressing.

I think talking about System 1 and System 2 the way Gleb does raises the risk of readers coming away believing that reflective thinking is superior to intuitive thinking. It suggests that rationality is about using System 2 for important issues, instead of focusing on aligning System 1 and System 2 with each other the way CFAR proposes. People who categorically prefer System 2 to System 1 are the stereotypical straw Vulcans. Level 2 of rationality is not "being a straw Vulcan".

In the article on his website Gleb says:

The intentional system reflects our rational thinking, and centers around the prefrontal cortex, the part of the brain that evolved more recently.

That sounds to me like neurobabble. Kahneman doesn't say that System 2 is about a specific part of the brain. Even if it were completely true, having that knowledge doesn't help a person be more rational. If you want to make a message as simple as possible, you could drop that piece of information without any problem.

Why doesn't he drop it and make the article simpler? Because it helps with pushing an ideology - what other people in this thread called rationality-as-religion. The kind of rationality that fills someone's sense of belonging to a group.

I don't see people's rationality getting raised in that process. That leads to the question: "what are the basics of rationality?"

I think the Facebook group sometimes provides a good venue for understanding what new people get wrong. Yesterday one person accused another of being a fake account. I asked the accuser for his credence, but he replied that he can't give a probability for something like that. The accuser didn't think in terms of Cromwell's rule. Making the step from thinking "you are a fake account" to having a mental category of "80% certainty: you are a fake account" is progress. No neuroscience is needed to make that progress.

Rationality for beginners could attempt to teach Cromwell's rule while keeping it as simple as possible. I'm even okay if the term Cromwell's rule doesn't appear. The article can have pretty pictures, but it shouldn't make any false claims.

I admit that "What are the basics of rationality?" isn't an easy question. This community often complicates things. Scott recently wrote about what developmental milestones you might be missing. That article lists 4 milestones, one of them being Cromwell's rule (Scott doesn't name it).

In my current view of rationality, other basics might be TAPs, noticing, tiny habits, "how not to be a straw Vulcan", and "have conversations with the goal of learning something new yourself, instead of with the goal of just affecting the other person".

A good way to search for basics might also be to notice events where you yourself go: "Why doesn't this other person get how the world works? X is obvious to people at LW; why do I have to suffer from living in a world where people don't get X?" I don't think the answer to that question will be that people think the prefrontal cortex is about System 2 thinking.

Replies from: hairyfigment, gjm
comment by hairyfigment · 2015-11-22T04:19:37.064Z · LW(p) · GW(p)

I agree with much of this, but that quote isn't a false claim. It does not (quite) say that researchers use the terms "intentional system" and "autopilot system", which seem like sensible English descriptions if for some bizarre reason you can't use the shorter names. Now, I don't know why anyone would avoid the scholarly names when for once those make sense - but I've also never tried to write an article for Lifehack.

What is your credence for the explanation you give, considering that e.g. the audience may remember reading about many poorly-supported systems with levels numbered I and II? Seeing a difference between that and the recognition that humans evolved may be easier for some than evaluating journal citations.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-22T19:56:30.359Z · LW(p) · GW(p)

which seem like sensible English descriptions if for some bizarre reason you can't use the shorter names

Kahneman's motivation for using System 1 and System 2 isn't to have shorter names. It's that people have existing preconceptions about words describing mental concepts, and he doesn't want to invoke them.

Wikipedia's list from Kahneman:

In the book's first section, Kahneman describes two different ways the brain forms thoughts:

System 1: Fast, automatic, frequent, emotional, stereotypic, subconscious
System 2: Slow, effortful, infrequent, logical, calculating, conscious

Emotional/logical is a different distinction than intentional/autopilot. Trained people can switch emotions on and off via their intentions, and the process has little to do with being logical or calculating.

But even giving them new names that scientists don't use might be a valid move. If you do that, you should be open about the fact that you invented new names. Given science's public nature, I also think you should be open about why you chose certain terms, and choosing new terms should come with an explanation of why you prefer them over the alternatives.

The reason shouldn't be that your organisation is named "Intentional Insights" and that's why you call it the "intentional system". Again, that pattern leads to the position that rationality is about using System 2 instead of System 1, which differs from the CFAR position.

In Gleb's own summary of Thinking, Fast and Slow he writes:

System 1 runs automatically and System 2 is normally in a comfortable low-effort mode, in which only a fraction of its capacity is engaged. System 1 continuously generates suggestions for System 2: impressions, intuitions, intentions, and feelings.

Given that in Kahneman's framework intentions are generated by System 1, calling System 2 the "intentional system" produces problems.

What is your credence for the explanation you give,

Explanations don't have credence, predictions do. If you specify a prediction I can give you my credence for it.

comment by gjm · 2015-11-20T23:30:05.846Z · LW(p) · GW(p)

It might be worth correcting "Greb" and "Greg" to "Gleb" in that, to forestall confusion.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-21T07:17:08.396Z · LW(p) · GW(p)

Thanks.

comment by Vaniver · 2015-11-20T17:30:35.876Z · LW(p) · GW(p)

Maybe we could just go burn some elementary-school textbooks, because they often contain too simplified information. Sometimes with silly pictures!

Did you ever read about Feynman's experience reading science textbooks for elementary school? (It's available online here.)

There are good and bad ways to simplify.

This is why I wrote about "the low-hanging fruit that most rationalists wouldn't even touch for... let's admit it... status reasons".

Sure, there are people I'd rather not join the LessWrong community for status reasons. But I don't think the resistance here is about status instead of methodology. Yes, it would be nice to have organizations devoted to helping people get from level 1 to level 2, but if you were closing your eyes and designing such an organization, would it look like this?

comment by OrphanWilde · 2015-11-20T16:21:31.070Z · LW(p) · GW(p)

(Both agreeing with and refining your position, and directed less to you than the audience):

Personally, I'm at level 21, and I'm trying to raise the rest of you to my level.

Now, before you take that as a serious statement, ask yourself how you feel about that proposition, and how inclined you would be to take anything I said seriously if I actually believed that. Think about to what extent I behave like I -do- believe that, and how that changes the way what I say is perceived.

http://lesswrong.com/lw/m70/visions_and_mirages_the_sunk_cost_dilemma/ <- This post, and pretty much all of my comments, had reasonably high upvotes before I revealed what I was up to. Now, I'm not going to say it didn't deserve to get downvoted - I learned a lot from that post that I should have known going into it - but I'd like to point out the fundamental similarities, scaled up a level, between what I do there and typical rationalist "education": "Here's a thing. It was a trick! Look at how easily I tricked you! You should now listen to what I say about how to avoid getting tricked in the future." Worse, cognitive dissonance will make it harder to fix that weakness in the future. As I said, I learned a -lot- from that post; I tried to shove at least four levels of plots and education into it, and instead turned people off with the first or second one. I hope I taught people something, but in retrospect, and far removed from it, I think it was probably a complete and total failure which mostly served to alienate people from the lessons I was attempting to impart.

The first step to making stupid people slightly less stupid is to make them realize the way in which they're stupid in the first place, so that they become willing to fix it. But you can't do that, because, obviously, people really dislike being told they're stupid. There are some issues inherent in approaching other people with the assumption that they're less than you, and that they should accept your help in raising them up: you're asserting higher status than them. They're going to resent that, and cognitive dissonance is going to make them decide that the thing you're better at, either you aren't better at it, or it isn't that important. So if you think that you can make "stupid people slightly less stupid", you're completely incompetent at the task.

But... show them that -you- are stupid, and show them you becoming less stupid, and cognitive dissonance will tell them that they were smarter than you, and that they already knew what you were trying to teach them. That's a huge part of what made the Sequences so successful - riddled throughout it were admissions of Eliezer's own weakness. "This is a mistake I made. This is what I realized. This is how I started to get past that mistake." What made them failures, however, is the way they made those who read them feel Enlightened, like they had just Leveled Up twenty times and were now far above ordinary plebeians. The critical failure of the Sequences is that they didn't teach humility; the lesson you -should- come away from them with is the idea that, however much Less Wrong you've become, you're still deeply, deeply wrong. And that's okay.

Which provokes a dilemma: everybody who wants to teach rationality to others, because it leveled them up twenty times and look at those stupid people falling prey to the non-central fallacy on a constant basis, is completely unsuitable to do so.

So did I succeed? Or did I fail? And why?

Replies from: Gleb_Tsipursky, Vaniver, Lumifer
comment by Gleb_Tsipursky · 2015-11-23T07:08:19.152Z · LW(p) · GW(p)

This is a pretty confusing point. I have plenty of articles where I admit my failures and discuss how I learned to succeed.

Secondly, I have only started publishing on Lifehack - 3 articles so far - and my articles way outperform the average of under 1K shares. This is the average for experienced and non-experienced writers alike. My articles have all been shared over 1K times, and some twice as much if not more. The fact that they are shared so widely is demonstrable evidence that I understand my audience and engage it well.

BTW, curious if any of these discussions have caused you to update on any of your claims to any extent?

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-23T14:24:41.218Z · LW(p) · GW(p)

I now assign negligible odds to the possibility that you're a sociopath (used as a shorthand for any of a number of hostile personality disorders) masquerading as a normal person masquerading as a sociopath, and somewhat lower odds on you being a sociopath outright, with the majority of assigned probability concentrating on "normal person masquerading as sociopath" now. (Whether that's how you would describe what you do or not, that's how I would describe it, because the way you write lights up my "Predator" alarm board like a nearby nuke lights up a "Check Engine" light.)

The fact that they are shared so widely is demonstrable evidence that I understand my audience and engage it well.

Demonstrable evidence that you do so better than average isn't the same as demonstrable evidence that you do so well.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-24T21:48:08.965Z · LW(p) · GW(p)

Thanks for sharing about your updating! I am indeed a normal person, and have to put a lot of effort into this style of writing for the sake of what I perceive as a beneficial outcome.

I personally have updated away from you trolling me and see you as more engaged in a genuine debate and discussion. I see we have vastly different views on the methods of getting there, but we do seem to have broadly shared goals.

Fair enough on different interpretations of the word "well." As I said, my articles have done twice as well as the average for Lifehack articles, so we can both agree that this is demonstrable evidence of a significant and above-average level of competency in an area where I am just starting - 3 articles so far - although the term "well" is more fuzzy.

comment by Vaniver · 2015-11-20T17:42:25.593Z · LW(p) · GW(p)

The critical failure of the Sequences is that they didn't teach humility; the lesson you -should- come away from them with is the idea that, however much Less Wrong you've become, you're still deeply, deeply wrong.

Mmm. I typically dislike framings where A teaches B, instead of framings where B learns from A.

The Sequences certainly tried to teach humility, and some of us learned humility from The Sequences. I mean, it's right there in the name that one is trying to asymptotically remove wrongness.

The main failing, if you want to put it that way, is that this is an online text and discussion forum, rather than a dojo. Eliezer doesn't give people gold stars that say "yep, you got the humility part down," and unsurprisingly people are not as good at determining that themselves as they'd like to be.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-20T18:41:59.693Z · LW(p) · GW(p)

Mmm. I typically dislike framings where A teaches B, instead of framings where B learns from A.

Then perhaps you've framed the problem you're trying to solve in this thread wrong. [ETA: Whoops. Thought I was talking to Viliam. This makes less-than-sense directed to you.]

The Sequences certainly tried to teach humility, and some of us learned humility from The Sequences. I mean, it's right there in the name that one is trying to asymptotically remove wrongness.

I don't think that humility can be taught in this sense, only earned through making crucial mistakes, over and over again. Eliezer learned humility through making mistakes, mistakes he learned from; the practice of teaching rationality is the practice of having students skip those mistakes.

The main failing, if you want to put it that way, is that this is an online text and discussion forum, rather than a dojo. Eliezer doesn't give people gold stars that say "yep, you got the humility part down," and unsurprisingly people are not as good at determining that themselves as they'd like to be.

He shouldn't, even if he could.

Replies from: Vaniver
comment by Vaniver · 2015-11-20T18:55:50.942Z · LW(p) · GW(p)

Then perhaps you've framed the problem you're trying to solve in this thread wrong.

Oh, I definitely agree with you that trying to teach rationality to others to fix them, instead of providing a resource for interested people to learn rationality, is deeply mistaken. Where I disagree with you is the (implicit?) claim that the Sequences were written to teach instead of being a resource for learning.

I don't think that humility can be taught in this sense, only earned through making crucial mistakes, over and over again.

Mmm. I favor Bismarck on this front. It certainly helps if the mistakes are yours, but they don't have to be. I also think it helps to emphasize the possibility of learning sooner rather than later; to abort mistakes as soon as they're noticed, rather than when it's no longer possible to maintain them.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-20T19:37:35.231Z · LW(p) · GW(p)

Ah! My apologies. Thought I was talking to Viliam. My responses may have made less than perfect sense.

I favor Bismarck on this front. It certainly helps if the mistakes are yours, but they don't have to be.

You can learn from mistakes, but you don't learn what it feels like to make mistakes (which is to say, exactly the same as making the right decision).

I also think it helps to emphasize the possibility of learning sooner rather than later; to abort mistakes as soon as they're noticed, rather than when it's no longer possible to maintain them.

That's where humility is important, and where the experience of having made mistakes helps. Making mistakes doesn't feel any different from not making mistakes. There's a sense that I wouldn't make that mistake, once warned about it - and thinking you won't make a mistake is itself a mistake, quite obviously. Less obviously, thinking you will make mistakes, but that you'll necessarily notice them, is also a mistake.

comment by Lumifer · 2015-11-20T17:11:25.083Z · LW(p) · GW(p)

The solution to the meta-level confusion (it's turtles all the way down, anyway) is to spend a few years building up an immunity to iocane powder.

comment by Gleb_Tsipursky · 2015-11-20T05:04:25.571Z · LW(p) · GW(p)

I address the concerns about the writing style and content in my just-written comment here. Let me know whether that helps resolve your concerns.

Regarding clickbait and sharing, let's actually evaluate the baseline. I want to highlight that 2K is quite a bit higher than the average for a Lifehack article. A typical article does not rise above 1K, and that's considered pretty good. So my articles have done really well by comparison to other Lifehack articles. Since that's the baseline, I'm pretty happy with where the sharing is.

Why would you be disheartened if I stopped what I was trying to do?

EDIT: Also forgot to add that some of the articles you listed were not written by me but by another aspiring rationalist, so FYI.

Replies from: Elo
comment by Elo · 2015-11-20T06:41:45.348Z · LW(p) · GW(p)

No, that does not answer the issues I raised.

I am now going to take apart this article:

www.intentionalinsights.org/7-surprising-science-based-hacks-to-build-your-willpower

7 Surprising Science-Based Hacks To Build Your Willpower

Tempted by that second doughnut? Struggling to resist checking your phone? Shopping impulsively on Amazon? Slacking off by reading BuzzFeed instead of doing work? What you need is more willpower! Recent research shows that strengthening willpower is the real secret to the kind of self-control that can help you resist temptations and achieve your goals. The great news is that scientists say strengthening your willpower is not as hard as you might think. Here are 7 research-based hacks to strengthen your willpower!

  1. Smile :-)

Smiling and other mood-lifting activities help improve willpower. In a recent study, scientists first drained the willpower of participants through having them resist temptation. Then, for one group, they took steps to lift people’s moods, such as giving them unexpected gifts or showing them a funny video. For another group, they just let them rest. Compared to people who just rested for a brief period, those whose moods were improved did significantly better in resisting temptation later! So next time you need to resist temptation, improve your mood! Smile or laugh, watch a funny video or two.

Low willpower resisting BuzzFeed and doughnuts? You should improve your mood - try some BuzzFeed or doughnuts.

  2. Clench Your Fist

Clench your fists or partake in another type of activity where you exercise self-control. Studies say that exercising self-control in any physical domain causes you to become more disciplined in other facets of life. So do whatever works for you to exercise self-control when you are trying to fight temptations: clench your fist, squeeze your eyes shut, or you can even hold in your pee, just like UK Prime Minister David Cameron.

  3. Meditate

[Photo: Gleb Tsipursky meditating in the park]

Meditation is great for a lot of things – reducing stress, increasing focus, managing emotions. Now research suggests it even helps us build willpower! With all these benefits, can you afford not to meditate? An easy way to get started is to spend 10 minutes a day sitting in a calm position and focusing on your breath.

  4. Reminders

Our immediate desires to give in to temptations make it really challenging to resist them. Our emotional desires seem like a huge elephant, and our rational self is like a small elephant rider by comparison. However, one way to steer the elephant is to set up physical reminders in advance of what our rational self wanted to do. So put a note on your fridge that says “only one doughnut” or set an alarm clock to buzz when you want to stop playing video games.

  5. Eat

Did you know that your willpower is powered by food? No wonder it’s so hard to diet! When we don’t eat, our willpower goes down the drain. The best cure is a meal rich in protein, which enables optimal willpower.

  6. Self-Forgiveness

How is self-forgiveness connected to willpower? Well, what the science shows is that feelings of regret deplete your willpower. This is why those who eat a little too much ice cream and feel regret are then much more likely to just let themselves go and eat the whole pint or even gallon! Instead, when you give in to temptation, be compassionate toward yourself and forgive yourself. That way, you’ll have more willpower going forward!

  7. Commitment

The most important thing to strengthen your willpower is commitment to doing so! Only by committing to improving your willpower every day will you be able to take the steps described above. To do so, evaluate your situation and why you want to strengthen your willpower, make a clear decision to work on improving this area, and set a long-term goal for your willpower improvement to have the kind of intentional life that you want.

Then break down this goal into specific and concrete steps that you will take based on the strategies described above. Research shows this is the best path for you to build your willpower!

So what are the specific and concrete steps that you will take to build your own willpower? Share your planned steps and the strategies that you will use in the comments section below!

To avoid missing out on content that helps you reach your goals, subscribe to the Intentional Insights monthly newsletter.

The generosity of readers like you made this article possible. If you benefited from reading it, please consider volunteering and/or making a tax-deductible contribution to Intentional Insights. Thank you for being awesome!

surprising - clickbait title. That's fine.

science-based - No. Bad. Not helpful.

Tempted by that second doughnut? - short-term reward vs. long-term dieting goal with less salient rewards. You didn't explain that.

Struggling to resist checking your phone?

How about asking why? Rather than attracting someone to ring the "yes, that's me" bells in their head, attract them to the "I should sort that out" bells.

Shopping impulsively on Amazon?

What, really? I know these are just your examples, but you should use solid ones. And don't name-drop Amazon and BuzzFeed.

Slacking off by reading BuzzFeed instead of doing work? What you need is more willpower! Recent research shows

Research from whom? The magical "scientists"?

that strengthening willpower is the real secret

real secret - kill me now.

to the kind of self-control that can help you resist temptations and achieve your goals. The great news is that scientists say

scientists say - are they the same ones who were doing the research before, or different ones?

strengthening your willpower is not as hard as you might think. Here are 7 research-based hacks to strengthen your willpower!

  1. Smile :-)

Not an inherently bad suggestion, but it really doesn't have much to do with willpower. If you are a human who never smiles, you shouldn't be looking at willpower hacks; you should be solving that problem first.

Smiling and other mood-lifting activities help improve willpower. In a recent study,

a recent study - not linked to.

scientists - them again!

first drained the willpower of participants through having them resist temptation. Then, for one group, they took steps to lift people’s moods, such as giving them unexpected gifts or showing them a funny video. For another group, they just let them rest. Compared to people who just rested for a brief period, those whose moods were improved did significantly better in resisting temptation later!

In what way does this connect smiling and willpower?

So next time you need to resist temptation, improve your mood! Smile or laugh, watch a funny video or two.

No. This is reaching not-even-wrong territory.

  2. Clench Your Fist

Nothing wrong with this suggestion, but it's a bit of a weak effect.

Clench your fists or partake in another type of activity where you exercise self-control. Studies say

studies say - Which ones? Where?

that exercising self-control in any physical domain causes you to become more disciplined in other facets of life.

Hey, wait - the nebulous idea of willpower, the draw of the "science-based", the entire idea that there are super-secret answers, is the opposite of what rationality wants to convey. Truth is, if there were any ideas that really worked, you would already know about them, and probably already be using them. The entire idea of "maybe if I keep searching for ideas I can uncover a secret truth" is wrong. It's an overstretch of exploration in the exploration-exploitation dilemma. The worst part is that partial reinforcement reinforces addictive behaviours (like endlessly browsing BuzzFeed) more than anything else.

So do whatever works for you to exercise self-control when you are trying to fight temptations: clench your fist, squeeze your eyes shut, or you can even hold in your pee

just like UK Prime Minister David Cameron. What? Not a good thing to be referencing.


I'm gonna stop because this feels too much like a waste of time.

I realise it's easier to criticise than to generate content; I plan to try writing something in contrast to this article if I get the time.

Replies from: Kaj_Sotala, ChristianKl, Gleb_Tsipursky
comment by Kaj_Sotala · 2015-11-20T10:15:40.941Z · LW(p) · GW(p)

So next time you need to resist temptation, improve your mood! Smile or laugh, watch a funny video or two.

No. This is reaching not-even-wrong territory.

On what basis? It matches my experience, something similar has been discussed on LW before, and it would seem to match various theoretical considerations about human psychology.

Truth is, if there were any ideas that really worked, you would already know about them, and probably already be using them.

This seems like a very strong and mostly unjustified claim.

E.g. even something like the Getting Things Done system, which works very well for lots and lots of people and has been covered and promoted in numerous places, is still something that's relatively unknown outside the kinds of circles where people are interested in this kind of thing. A lot of people in the industrialized world could benefit from it, but most haven't even tried it.

Ideas spread slowly, and often at a rate that's only weakly correlated with their usefulness.

comment by ChristianKl · 2015-11-20T20:54:50.104Z · LW(p) · GW(p)

Given that you didn't address the following, let me address it:

Did you know that your willpower is powered by food? No wonder it’s so hard to diet! When we don’t eat, our willpower goes down the drain. The best cure is a meal rich in protein, which enables optimal willpower.

To me that raises a harmful-untruth flag. Roy Baumeister suggests that meals help with willpower through glucose. To me, the claim that it's protein that builds willpower looks unsubstantiated. It's certainly not backed up by the Israeli judges study.

Where does the harm come into play? I understand the nutritional consensus to be that most people eat meals with too much protein. Nutrition science is often wrong, but even so, one should be careful about advising people to raise the protein content of their meals.

Replies from: entirelyuseless
comment by entirelyuseless · 2015-11-20T21:53:12.342Z · LW(p) · GW(p)

The nutritional consensus is also not about optimizing willpower. I would be somewhat skeptical of the claim that the willpower optimizing meal just luckily happens to be identical to the health optimizing meal.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-20T21:54:26.583Z · LW(p) · GW(p)

I would be somewhat skeptical of the claim that the willpower optimizing meal just luckily happens to be identical to the health optimizing meal.

I haven't made that claim.

Replies from: entirelyuseless
comment by entirelyuseless · 2015-11-20T21:55:12.718Z · LW(p) · GW(p)

In the sense that you didn't make it, neither did I say that you did.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-20T22:04:32.918Z · LW(p) · GW(p)

My argument is about two issues:
1) There is no reason to believe that protein increases willpower.
2) If you tell people a lie to make them improve their diet, it's at least defensible if they end up healthier as a result. If your lie, however, makes them eat a less healthy diet, you really screwed up.

Apart from that, I don't believe that eating glucose directly to increase your willpower is a good idea or healthy.

comment by Gleb_Tsipursky · 2015-11-23T06:46:15.604Z · LW(p) · GW(p)

science-based - No. Bad. Not helpful.

Why not helpful?

real secret - kill me now

I speak in the tone of listicle articles reluctantly, as I wrote above. It's icky, but necessary to get this past the editors at Lifehack and elsewhere.

a recent study - not linked to.

Actually, it is linked to. You can check out the article for the link, but here is the link itself if you're curious: www.albany.edu/~muraven/publications/promotion files/articles/tice et al, 2007.pdf

Smile or laugh, watch a funny video or two. No. This is reaching not-even-wrong territory.

This I just don't get. If experiments say you should watch a funny video, and they do, as the link above states, why is this not-even-wrong territory?

comment by Gleb_Tsipursky · 2015-11-18T23:29:42.607Z · LW(p) · GW(p)

Thank you for bringing this up as a topic of discussion! I'm really interested to see what the Less Wrong community has to say about this.

Let me be clear that my goal, and that of Intentional Insights as a whole, is to raise the sanity waterline. We do not assume that all who engage with our content will get to the level of being aspiring rationalists who can participate actively in Less Wrong. This is not to say that it doesn't happen; in fact, some members of our audience have already started to do so, such as Ella. Others are right now reading the Sequences and passively lurking without actively engaging.

I want to add a bit more about the Intentional Insights approach to raising the sanity waterline broadly.

The social media channel of raising the sanity waterline is only one area of our work. The goal of that channel is to use the strategies of online marketing and the language of self-improvement to get rationality spread broadly through engaging articles. To be concrete and specific, here is an example of one such article: "6 Science-Based Hacks for Growing Mentally Stronger." BTW, editors are usually the ones who write the headline, so I can't "take the credit" for the click-baity nature of the title in most cases.

Another area of work is publishing op-eds in prominent venues that address recent political matters in a politically-oriented manner. For example, here is an article of this type: "Get Donald Trump out of my brain: The neuroscience that explains why he’s running away with the GOP."

Another area of work is collaborating with other organizations, especially secular ones, to get our content to their audience. For example, here is a workshop we did on helping secular people find purpose using science.

We also give interviews to prominent venues on rationality-informed topics: 1, 2.

Our model works as follows: once people check out our content on other websites and venues, some will then visit the Intentional Insights website to engage with its content. As an example, after the article on 6 Science-Based Hacks for Growing Mentally Stronger appeared, it was shared over 2K times on social media, so it probably had views in the tens of thousands, if not hundreds of thousands. Then, over 1K people visited the Intentional Insights website directly from the Lifehack website. In other words, they were interested enough not only to skim the article, but also to follow the links to Intentional Insights, which was listed in my bio. Of those, some will want to engage with our content further. As an example, we had a large wave of new people follow us on Facebook and other social media and subscribe to our newsletter in the week after the article came out. I can't say how many did so as a result of seeing the article rather than of other factors, but there was a large bump. So there is evidence of people wanting to get more thoroughly engaged.
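(To put rough numbers on the funnel implied above - a back-of-the-envelope sketch whose only assumption is reading "tens of thousands, if not hundreds of thousands" as a view count V between 20,000 and 100,000:)

```latex
% Back-of-the-envelope funnel check, using only the figures quoted above.
% Shares: S = 2000.  Assumed total views: V in [2*10^4, 10^5].
% Visits to the InIn site from Lifehack: N = 1000.
\[
  \text{view-to-visit rate} \;=\; \frac{N}{V} \;=\; \frac{1000}{V}
  \;\in\; \left[ \frac{1000}{10^{5}},\ \frac{1000}{2\times 10^{4}} \right]
  \;=\; [\,1\%,\ 5\%\,].
\]
```

So, under that assumed view range, roughly one reader in twenty to one in a hundred clicked through to the site.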

The articles we put out on other media channels and on which we collaborate with other groups are more oriented toward entertainment and less toward education in rationality, although they do convey some rationality ideas. For those who engage more thoroughly with our content, we then provide resources that are more educationally oriented, such as workshop videos, online classes, books, and apps, all described on the "About Us" page. Our content is peer reviewed by our Advisory Board members and others who have expertise in decision-making, social work, education, nonprofit work, and other areas.

Finally, I want to lay out our Theory of Change. This is a standard nonprofit document that describes our goals, our assumptions about the world, what steps we take to accomplish our goals, and how we evaluate our impact. The Executive Summary of our Theory of Change is below, and there is also a link to the draft version of our full ToC at the bottom.

Executive Summary:

1) The goal of Intentional Insights is to create a world where all rely on research-based strategies to make wise decisions and lead to mutual flourishing.

2) To achieve this goal, we believe that people need to be motivated to learn and have broadly accessible information about such research-based strategies, and also integrate these strategies into their daily lives through regular practice.

3) We assume that:

  • Some natural and intuitive human thinking, feeling, and behavior patterns are flawed in ways that undermine wise decisions.
  • Problematic decision making undermines mutual flourishing in a number of life areas.
  • These flawed thinking, feeling, and behavior patterns can be improved through effective interventions.
  • We can motivate and teach people to improve their thinking, feeling, and behavior patterns by presenting our content in ways that combine education and entertainment.

4) Our intervention is helping people improve their patterns of thinking, feeling, and behavior to enable them to make wise decisions and bring about mutual flourishing.

5) Our outputs, what we do, come in the form of online content such as blog entries, videos, etc., on our channels and in external publications, as well as collaborations with other organizations.

6) Our metrics of impact are in the form of anecdotal evidence, feedback forms from workshops, and studies we run on our content.

Here is the draft version of our Theory of Change.

Also, about Endless September. After people engage with our content for a while, we introduce them to more advanced things on ClearerThinking, and we are in fact discussing collaborating with Spencer Greenberg, as I discussed in this comment. After that, we introduce them to CFAR and Less Wrong. So those who go through this chain are not the kind who would contribute to Endless September.

We expect that the large majority will not go through this chain. They will instead engage with rational thinking in other venues, as Viliam mentioned above. This fits with the fact that my goal, and that of Intentional Insights as a whole, is to raise the sanity waterline, and only secondarily to get people to the level of being aspiring rationalists who can participate actively in Less Wrong.

Well, that's all. Look forward to your thoughts! I'm always looking for better ways to do things, so I'm very happy to update my beliefs about our methods and optimize them based on wise advice :-)

EDIT: Added link to comment where I discuss our collaboration with Spencer Greenberg's ClearerThinking and also about our audience engaging with Less Wrong, such as Ella.

Replies from: MrMind, OrphanWilde
comment by MrMind · 2015-11-19T08:59:56.236Z · LW(p) · GW(p)

it was shared over 2K times on social media, so it probably had views in the tens of thousands, if not hundreds of thousands. Then, over 1K people visited the Intentional Insights website directly from the Lifehack website and elsewhere.

I'm curious: do you use a unified software for tracking the impact of articles through the chain?

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T09:12:37.776Z · LW(p) · GW(p)

For how many times the article itself was shared, Lifehack has that prominently displayed on their website. Then we use Google Analytics, which gives us information on how many people visited our website from Lifehack itself. We can't track them further than that. If you have ideas about how to track them further, especially using free software, I'd be interested in learning about that!
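(One free approach, sketched below on the assumption that the outbound links in each article can be edited: tag them with UTM parameters, which Google Analytics already recognizes, so that later visits, signups, and subscriptions are attributed to the specific article that sent them. The helper function and the example values are illustrative, not an existing InIn tool.)

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def tag_link(url, source, medium, campaign):
    """Append standard Google Analytics UTM parameters to an outbound link."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,      # where the article ran, e.g. "lifehack"
        "utm_medium": medium,      # e.g. "article" or "bio_link"
        "utm_campaign": campaign,  # a slug identifying the specific article
    })
    return urlunparse(parts._replace(query=urlencode(query)))

# Hypothetical usage: the bio link in one Lifehack article.
print(tag_link("http://www.intentionalinsights.org/",
               source="lifehack", medium="bio_link",
               campaign="willpower-hacks"))
# -> http://www.intentionalinsights.org/?utm_source=lifehack&utm_medium=bio_link&utm_campaign=willpower-hacks
```

With links tagged this way, Google Analytics can break down not just visits but later goals (newsletter signups, for instance) by the originating article, all within the free tier.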

comment by OrphanWilde · 2015-11-19T04:47:24.563Z · LW(p) · GW(p)

Ahem: It's quite rude to downvote Vaniver even as you respond to him. Especially -twice-.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T04:55:17.126Z · LW(p) · GW(p)

I thought his comments were not worth attention, following the general guidelines here.

Replies from: Vaniver, OrphanWilde
comment by Vaniver · 2015-11-19T15:56:40.575Z · LW(p) · GW(p)

Are you encouraging or discouraging me to elaborate?

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T18:07:27.311Z · LW(p) · GW(p)

I thought your original comments were not helpful for readers to gain useful information. I am encouraging you to elaborate and hope you will give a clear explanation of your position when you post.

Replies from: Vaniver
comment by Vaniver · 2015-11-20T17:24:33.847Z · LW(p) · GW(p)

I was insufficiently clear: that was a question about your model of my motivation, not what you want my motivation to be. You can say you want to hear more, but if you act against people saying things, which do you expect to have more impact?

But in the spirit of kindness I will write a longer response.


This subject is difficult to talk about because your support here is tepid and reluctant at best, and your detractors are mostly polite.

Now, you might look at OrphanWilde or Clarity and say "you call that polite?"--no, I don't. Those are the only people willing to break politeness and voice their lack of approval in detail. This anecdote about people talking in the quiet car comes to mind; lots of people look at something and realize "this is a problem" but only a few decide it's worth the cost to speak up about it. Disproportionately, those are going to be people who feel the cost less strongly.

There's a related common knowledge point--I might think this is likely net negative, but I don't know how many other people think this is a likely net negative. Only if I know that lots of people think this is a likely net negative, and that they are also aware that this is the sentiment, does it make sense to be the spokesperson for that view. If I know about that dynamic, I can deliberately try to jumpstart the process by paying the costs of establishing common knowledge.
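(A minimal way to formalize that dynamic, with all symbols hypothetical: let c_i be person i's private cost of speaking up, v the value of establishing common knowledge, and q_i person i's credence that the "likely net negative" sentiment is widely shared.)

```latex
% Person i speaks up iff the expected value exceeds their private cost:
\[
  q_i \, v \;>\; c_i .
\]
% So the first people to break politeness are disproportionately those
% with small c_i, and each early explicit comment raises everyone
% else's q_j -- which is what "jumpstarting the process" means here.
```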

And so by writing a short comment I was hoping to get the best of both worlds--signalling that I think this is likely a net negative and that this is an opinion that should be public, without having to go into the awkward details of why.


That's just the social dynamics. Let's get to the actual content. Why do I think this is likely a net negative? Normally I would write something like this privately, but I'll make it public because we're already having a public discussion.

I agree that it would be nice if the broader population knew more clear thinking techniques. It's not obvious to me that it would be nice if more of the broader population came to LW. I think that deliberative rationality, like that discussed on LW, is mostly useful for people with lots of spare CPU cycles and a reflective personality.

Once, I shared some bread I baked with my then-landlord. She liked it, and asked me how I made it, and I said "oh, it's really easy, let me lend you the book I learned from." She demurred; she didn't like reading things, and learned much better watching people do things. Sure, I said, and invited her over the next time I baked some to show her how it's done.

The Sequences is very much "The Way for software engineer-types as radioed back by Eliezer Yudkowsky." I am pessimistic about attempts to get other types of people closer to The Way by translating The Sequences into a language closer to theirs; much more than just the language needs to change, because the inferential gaps are in different places. I strongly suspect your 'typical American' with IQ 100 would get more out of The Way as radioed back by someone closer to them. Byron Katie, with her workshops and her Youtube videos, is the sort of person I would model after if I was targeting a broad market.

I have not paid close attention to the material you've produced because I find it painful. From what little I have seen, I have mostly gotten the impression that it's poorly presented, and am some combination of unwilling and unable to provide you detailed criticism on why. I also think this is more than my not being the target audience--I don't have the negative reaction to pjeby that many do, for example, and he has much more of a self-help-style popular approach. To recklessly speculate on the underlying causes, I don't get the impression that you deeply respect or understand your audience, and what you think they want doesn't line up with what they actually want, in a way that seems transparent. It seems like "How do you do, fellow kids?"

Standard writing advice is "write what you know." If you want to do rationality for college professors, great! I imagine that your comparative advantage at that would be higher. But just because you don't see people pointing rationality at the masses doesn't mean that's a hole you would be any good at filling. Among other things, I would worry that because you're not the target audience, you won't be aware of what's already there / what your competition is.

Replies from: Gleb_Tsipursky, Lumifer
comment by Gleb_Tsipursky · 2015-11-23T06:26:12.087Z · LW(p) · GW(p)

Thank you for actually engaging with the content.

Only if I know that lots of people think this is a likely net negative, and that they are also aware that this is the sentiment, does it make sense to be the spokesperson for that view. If I know about that dynamic, I can deliberately try to jumpstart the process by paying the costs of establishing common knowledge.

The same effect works if people think this is a net positive. Furthermore, Less Wrong is a quite critical community, with people much more likely to provide criticism than support, as the latter wins fewer social status points. This is not to cast aspersions on the community at all - there's a reason I participate actively. I like being challenged and updating my beliefs. But let's be honest, this is a community of challenge and debate, not warm fuzzies and kumbayah.

Now let's get to the meat of the matter.

I agree that it would be nice if the broader population knew more clear thinking techniques. It's not obvious to me that it would be nice if more of the broader population came to LW.

I agree that it would not be nice if more of the broader population came to LW; the inferential gap would be way too big, and Endless September sucks. I discuss more in my comment here how that is not the goal I am pursuing, together with other InIn participants. The goal is simply to convey more clear thinking techniques effectively to the broad audience and raise the sanity waterline. A select few, as that comment describes, can move up to LW - likely those with a significantly high IQ but without sufficient education about how their minds work.

To recklessly speculate on the underlying causes, I don't get the impression that you deeply respect or understand your audience

I am confused by this comment. If I didn't understand my audience, how come my articles are so successful with them? Believe me, I have extensively researched the audiences there, and how to engage them well. You fail at my mind if you think my writing could only engage college professors. And please consider whom you are talking to when you discuss writing advice. I have read many books about writing, and taught writing as part of my college teaching.

As proof, here is evidence. I have only started publishing on Lifehack - 3 articles so far - and my articles way outperform the average, which is under 1K shares. This is the average for experienced and non-experienced writers alike. My articles have all been shared over 1K times, and some more than twice that. The fact that they are shared so widely is demonstrable evidence that I understand my audience and engage it well.

Has this caused you to update on any of your claims to any extent?

Replies from: Vaniver
comment by Vaniver · 2015-11-23T23:16:13.130Z · LW(p) · GW(p)

Thank you for actually engaging with the content.

You're welcome! Thank you for continuing to be polite.

Has this caused you to update on any of your claims to any extent?

I was already aware of how many times your articles have been shared. I would not base my judgment of a painter's skill with the brush on how many books on painting they had read.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-24T22:19:30.892Z · LW(p) · GW(p)

I guess the metaphor I would take for the painter is how many of her paintings have sold. That's the appropriate metaphor for how many times the articles were shared. If the painter's goal is to sell paintings with specific content - as my goal is to have articles shared that carry specific content not typically read by an ordinary person - then wide sharing of the articles indicates success.

comment by Lumifer · 2015-11-20T17:50:26.602Z · LW(p) · GW(p)

+1

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-23T06:26:28.846Z · LW(p) · GW(p)

I thought upvotes were used for that purpose.

Replies from: malcolmocean, Lumifer
comment by MalcolmOcean (malcolmocean) · 2015-12-05T21:44:53.971Z · LW(p) · GW(p)

By design, upvotes don't show public approval. Commenting +1 does.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-12-06T01:26:19.793Z · LW(p) · GW(p)

Ah, good point

comment by Lumifer · 2015-11-23T15:49:30.490Z · LW(p) · GW(p)

...yes, and?

comment by OrphanWilde · 2015-11-19T04:55:56.602Z · LW(p) · GW(p)

No, you were angry that he was criticizing you. You have consistently downvoted everyone who criticized you.

Replies from: Gleb_Tsipursky, Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T04:57:07.050Z · LW(p) · GW(p)

I see that you can read my mind and my votes. Glad you have that ability. Can you please provide evidence of what I am thinking and how I am voting? Thanks!

You can't and you won't. Your statements are patently false. I have in fact not consistently downvoted everyone who criticized me. I try to follow the general guidelines on voting. Please avoid making false statements in the future.

Anyway, I'm not really interested in engaging in a conversation where you again make vague accusations, which are part of your general agenda against Intentional Insights, including making abusive/trollish claims, as you have previously explicitly acknowledged to be your intention.

EDIT: Edited to include the link to the general guidelines on voting.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-19T13:45:25.269Z · LW(p) · GW(p)

I see that you can read my mind and my votes. Glad you have that ability

It's part of the Dark Arts package. I'll dryly observe that I knew you downvoted him - how do you think I knew that you downvoted him? It's not like downvotes come with names attached. Yes, I can "read your mind", which is to say, I read the -massive- amounts of connotation information associated with otherwise bland text.

Can you please provide evidence of what I am thinking and how I am voting?

You, uh, admitted to it? "I thought his comments were not worth attention"

Replies from: Evan_Gaensbauer, Gleb_Tsipursky
comment by Evan_Gaensbauer · 2015-11-22T03:24:02.658Z · LW(p) · GW(p)

If it helps to know, the extra downvotes you're getting specifically in this thread, but not in other ones, are coming from me. This comment isn't meant as a glib statement to signal my affective disapproval. I just think this conversation is going nowhere, and that the quality of dialogue is getting worse. I'm downvoting these comments as I would others. I'm commenting so you know why I'm downvoting, and don't cast aspersions at other users.

Replies from: Gleb_Tsipursky, OrphanWilde
comment by Gleb_Tsipursky · 2015-11-23T07:12:52.768Z · LW(p) · GW(p)

Thanks for letting me know!

comment by OrphanWilde · 2015-11-23T14:03:02.924Z · LW(p) · GW(p)

I can generally tell where downvotes are coming from. The aspersion I threw at Gleb is that he is, as far as I can tell, using sockpuppet accounts to upvote his own stuff (when he thinks it's important). Complaining about my own downvotes would be petty and counterproductive. (Particularly since my total karma score remains unchanged from the beginning of this debacle. It was up 100 at one point, but dropped back down.)

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-25T04:44:45.904Z · LW(p) · GW(p)

I understand you use posturing and accusations without evidence as part of your Dark Arts arsenal, and accept that. I doubt anyone will come across this comment, since it's so far down and the thread is not new. Just wanted you to know personally, in case you aren't simply posturing, that I don't use sockpuppets. I have a number of Less Wronger friends who support the cause of spreading rational thinking broadly, and whenever I make significant posts or particularly salient comments, I let them know, so that they can give me optimizing suggestions and feedback.

Since they happen to share many of my views, they sometimes upvote my comments. They generally don't participate actively, and this is a rare exception on the part of Raelifin, as they do not want to be caught in the backlash. So FYI for the future. Feel free to continue making these accusations for Dark Arts purposes if you wish, but I just wanted you to know.

Replies from: gjm, Lumifer
comment by gjm · 2015-11-25T14:26:02.795Z · LW(p) · GW(p)

The term for extreme versions of this is "meatpuppet". Of course having friends is not the same thing as having meatpuppets, and I have no way of knowing to what extent your friends are LW participants who just happen to be your friends and would have upvoted your articles anyway, and to what extent they're people who come here only to upvote your articles for you. The nearer the latter, the more meatpuppety.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-25T20:33:52.690Z · LW(p) · GW(p)

Well, I didn't expect other people besides myself and OrphanWilde to still be reading this thread; updating on that.

The people I'm talking about are LW participants; I wouldn't ask them to give me feedback and advice on my writing and engagement otherwise - what would be the point? To be clear, far from all of them upvote my comments, as they don't agree with everything I write, of course. And my point in bringing their attention to it is to improve my communication, and also to help myself update. It's harder to update when things are said in a hostile way by faceless LW commenters, but my friends can provide me with a trusted external perspective on things. They do tend to agree with most of the stuff, sometimes choose to upvote, and rarely comment, for the reasons stated above.

I'm sharing all of this for the sake of transparency, as this is a strong value I hold. Not something I had to share, and I know it arouses suspicions, but this is my choice due to my personal value system.

comment by Lumifer · 2015-11-25T05:14:42.790Z · LW(p) · GW(p)

whenever I make significant posts or particularly salient comments, I let them know

LOL. So no sockpuppets but a cheerleading squad on call..? X-)

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-25T20:34:09.798Z · LW(p) · GW(p)

I responded about this above.

comment by Gleb_Tsipursky · 2015-11-19T18:15:27.210Z · LW(p) · GW(p)

As I stated above, I am committed to transparency and openness, which is why I acknowledged downvoting the comments from Vaniver.

Your lie was the following:

You have consistently downvoted everyone who criticized you

I have specific evidence that I actually upvoted some people who made statements not friendly to me when I thought they made good points worthy of public consideration. Please avoid lying in the future. It really harms your reputation.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-19T18:45:06.222Z · LW(p) · GW(p)

I don't have a reputation to protect, or at least not a terribly positive one (indeed, I dislike having a positive reputation, because it makes it costly to abandon it). I do believe I previously advised you on the benefits and drawbacks of that.

comment by Gleb_Tsipursky · 2015-11-19T07:34:13.145Z · LW(p) · GW(p)

And now I see someone went through and downvoted all of my previous posts and comments. I suppose it wasn't you, because your mastery of Dark Arts would enable you to find more creative ways of harming InIn and me.

Or who knows, maybe it was you. Hard to tell at this point. I'm not at all familiar with these sorts of underhanded strategies and mind games and deliberate efforts to attack my reputation on Less Wrong, which you clearly describe here as your intention.

Replies from: ChristianKl, OrphanWilde, OrphanWilde
comment by ChristianKl · 2015-11-19T20:57:58.417Z · LW(p) · GW(p)

Or who knows, maybe it was you. Hard to tell at this point. I'm not at all familiar with these sorts of underhanded strategies and mind games and deliberate efforts to attack my reputation on Less Wrong, which you clearly describe here as your intention.

This paragraph feels passive-aggressive to me. If you think you got mass-downvoted, tell Nancy and she can ask Trike Apps to identify the perpetrator.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T22:40:55.593Z · LW(p) · GW(p)

Oh, thanks for letting me know about this option, appreciate it!

I wrote that last part when I was in a state of frustration due to the downvoting, so I wasn't being as conscious about my writing as I usually am.

comment by OrphanWilde · 2015-11-19T15:10:48.508Z · LW(p) · GW(p)

You realize that when you edit your comments, an asterisk shows up? Because I tire of your rather boring and predictable approach to Dark Arts, I'll go ahead and head this one off: I didn't downvote all of your previous posts and comments; I downvoted those here, in this post, where you were spamming. Additionally, a lazy look through your profile turns up several posts and comments which, quite definitively, were not downvoted. If your upvotes have taken a hit, it's because of your behavior, not a downvote bot.

http://lesswrong.com/lw/mvw/improving_the_effectiveness_of_effective_altruism/ for one example of a non-downvoted post from the first page of your posts. Given your rather blatant use of sockpuppets (or people from your organization told to upvote your posts/comments, which is the same thing as far as I'm concerned) for manipulating voting, the fact that it may get downvoted after I post this should not be taken as evidence by the audience of anything.

How do I know you're using sockpuppets or human analogues? Because your upvotes are consistent across a given timeframe without regard to comment quality. Which is also why you noticed the fact that some of your 4-upvote comments were downvoted, because you worked to get them to 4.

ETA: See the asterisk?

Also, any administrators are welcome to check my upvote/downvote history. If necessary I'll provide my password to an administrator to verify. (I'm not worried about being locked out of my account, because I can easily accumulate more upvotes, and anybody who dislikes me enough to want to do that probably wouldn't want me to lose my hard-earned reputation for being an annoying blowhard.)

Replies from: ChristianKl, Gleb_Tsipursky
comment by ChristianKl · 2015-11-19T20:58:38.924Z · LW(p) · GW(p)

You realize that when you edit your comments, an asterisk shows up?

In cases like that, cite the post you are replying to.

comment by Gleb_Tsipursky · 2015-11-19T18:23:23.298Z · LW(p) · GW(p)

Glad to see that whoever downvoted my previous submissions and comments happened to miss one; nice to know that. However, when my karma suddenly starts going down rapidly and drops by 100+ points, it generally indicates a downvoting wave.

Yup, I know that an asterisk shows up when I make edits. Thus, when I make any major edits, or when someone already responded to my comment, I add an EDIT note to them. When I make minor adjustments for grammar/spelling/phrasing, and when someone did not yet respond to my comment, I do not.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-19T18:47:53.595Z · LW(p) · GW(p)

Glad to see that whoever downvoted my previous submissions and comments happened to miss one; nice to know that.

Really.

comment by OrphanWilde · 2015-11-19T13:36:08.227Z · LW(p) · GW(p)

I did downvote some of them, but not, as a rule, those which engendered or contributed to the conversation.

And you apparently don't understand what I was "clearly" describing there, so allow me to be clear: I used an openly hostile attack because it meant the weight of evidence was on -your- side. All you had to do to have no damage to your reputation was to say nothing at all, and my tirade would have been taken as unfair and hostile. Instead, you doubled down on exactly the wrong behaviors.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T18:00:40.547Z · LW(p) · GW(p)

Ah, another lie from you combined with a masterful use of Dark Arts. You state here that:

I used an openly hostile attack because it meant the weight of evidence was on -your- side

However, in your previous comments, you clearly acknowledged that you used an openly hostile attack because it meant that the weight of evidence would be on the side of the person making the attack - you. Indeed, using an openly hostile attack anchors the "weight of public opinion", in your own words, on the side of the one making the attack. It primes the reader to agree, as we intuitively, emotionally agree with the things we first come across, and have to use System 2 to force ourselves to disagree. So please avoid lying in the future. You're really not doing yourself any favors with your blatant lies.

comment by OrphanWilde · 2015-11-18T17:51:36.504Z · LW(p) · GW(p)

I'll talk about marketing, actually, because part of the problem is that, bluntly, most of you are kind of inept in this department. By "kind of" I mean "have no idea what you're talking about but are smarter than marketers and it can't be nearly that complex so you're going to talk about it anyways".

Clickbait has come up a few times. The problem is that that isn't marketing, at least not in the sense that people here seem to think. If you're all for promoting marketing, quit promoting shit marketing because your ego is entangled in complex ways with the idea and you feel you have to defend that clickbait.

GEICO has good marketing, which doesn't sell you on their product at all. Indeed, the most prominent "marketing" element of their marketing - the "Saves you 15% or more" bit - mostly serves to distract you from the real marketing, which utilizes the halo effect, among other things, to get you to feel positively about them. (Name recognition, too.) The best elements of their marketing don't get noticed as marketing, indeed don't get noticed at all.

The issue with this entire conversation is that everybody seems to think marketing is noticed, and uses the examples they notice as examples of good marketing. Those are -terrible- examples, as demonstrated by the fact that you think of them when you think of marketing - and anybody you market to will, too. And then you justify these examples of marketing by relying on an unrealistically low opinion of average people - which many average people share.

Do you think somebody clicking on a "One Weird Trick" tries it out? No, they click on clickbait to see what it says, then move on, which is exactly its goal - be attractive enough to get someone's attention, entertaining enough to keep them interested, and no more. Clickbait doesn't impart anything - its goal isn't to be remembered or to change minds or to sell anything except itself, because its goal is to serve up ads to a steady stream of readers.

And if you click on Clickbait to see what stupid people are being tricked into believing - guess what, you're the "stupid person". You were the target audience, which is anybody they can get to click on their stuff, for any reason at all. The author of "This One Weird Trick" doesn't want to convince you to use it, they want you to add a little bit of traffic to the site, and if they can do that by crafting an article and headline that makes intelligent people want to click to see what gullible morons will buy into, they'll do it.

Clickbait isn't the answer. "Rationalist's One Weird Trick To a Happy Life" isn't the answer - indeed, it's the opposite of the answer, because it's deliberately setting rationality up as a sideshow to sell tickets to so people can laugh at what gullible morons buy into.

Replies from: Viliam, Gleb_Tsipursky, bogus
comment by Viliam · 2015-11-18T21:38:59.796Z · LW(p) · GW(p)

Not sure if it makes any difference, but instead of "stupid people" I think of people reading articles about "life hacking" as "people who will probably get little benefit from the advice, because they will most likely immediately read a hundred more articles and never apply the advice"; and also the format of the advice completely ignores inferential distances, so pretty much the only useful thing such an article could give you is a link to a place that provides the real value. And if you are really, really lucky, you will notice the link, follow the link, stay there, and get some of the value.

If I believed the readers were literally stupid, then of course I wouldn't see much value in advertising LW to them. LW is not useful for stupid people, but it can be useful to people... uhm... like I used to be before I found LW.

Which means, I used to spend a lot of time browsing random internet pages, a few times I found a link to some LW article that I read and moved on, and only after some time I realized: "Oh, I have already found a few interesting articles on the same website. Maybe instead of randomly browsing the web, reading this one website systematically could be better!" And that was my introduction to the rationalist community; these days I regularly attend LW meetups.

Could Gleb's articles provide the same gateway for someone else (albeit only for a tiny fraction of the readership)? I don't see a reason why not.

Yes, the clickbait site will make money. Okay. If instead someone would make paper flyers for LW, then the printing company would make money.

Replies from: Gleb_Tsipursky, Lumifer, OrphanWilde
comment by Gleb_Tsipursky · 2015-11-19T00:32:46.262Z · LW(p) · GW(p)

Indeed, the people who read one of our articles, for example the Lifehack article, are not inherently stupid. They have that urge for self-improvement that all of us here on Less Wrong have. They just have way less education and access to information, and also, of course, different tastes, preferences, and skills. Moreover, the inferential gap is huge, as you correctly note.

The question is what people will do: will they actually follow the links to get more deep engagement? Let's take the Lifehack article as an example to describe our broader model, which assumes that once people check out our content on other websites and venues, some will then visit the Intentional Insights website to engage with its content. So after the Lifehack article on 6 Science-Based Hacks for Growing Mentally Stronger appeared, it was shared over 2K times on social media, so it probably had views in the tens of thousands, if not hundreds of thousands.

Then, over 1K people visited the Intentional Insights website directly from the Lifehack website. In other words, they were interested enough not only to skim the article, but also to follow the links to Intentional Insights, which was listed in my bio and elsewhere. Of those, some will want to engage with our content further. As an example, we had a large wave of new people follow us on Facebook and other social media and subscribe to our newsletter in the week after the article came out. I can't say how many did so as a result of seeing the article rather than of other factors, but there was a large bump. So there is evidence of people wanting to get more thoroughly engaged.

The articles are meant to provide a gateway, in other words. And there is evidence of people following the breadcrumbs. Eventually, after they receive enough education, we would introduce them to ClearerThinking, CFAR, and LW. We are careful to avoid Endless September scenarios by not explicitly promoting Less Wrong heavily. For more on our strategy, see my comment below.

comment by Lumifer · 2015-11-18T22:18:08.606Z · LW(p) · GW(p)

Could Gleb's articles provide the same gateway for someone else

Not that I belong to his target demographic, but his articles would make me cringe and rapidly run in the other direction.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T00:23:17.165Z · LW(p) · GW(p)

They are intended not to appeal to you, and that's the point :-) If something feels cognitively easy to you and does not make you cringe at how low-level it is, then you are not the target audience. Similarly, you are not the target audience if something is overwhelming for you to read. Try to read them from the perspective of someone who does not know about rationality. A sample of evidence: this article was shared over 2K times by its readers, which means that tens of thousands, and maybe hundreds of thousands, of people read it.

Replies from: Lumifer
comment by Lumifer · 2015-11-19T04:50:11.428Z · LW(p) · GW(p)

If something feels cognitively easy to you and does not make you cringe at how low-level it is

I don't cringe at the level. I cringe at the slimy feel and the strong smell of snake oil.

Replies from: Tem42, Gleb_Tsipursky
comment by Tem42 · 2015-11-20T01:28:10.160Z · LW(p) · GW(p)

It might be useful to identify what exactly trips your snake-oil sensors here. Mine were tripped when it claimed to be science-based but referenced no research papers; other than that it looked okay to me.

Unless you mean simply the site that it is posted on smells of snake oil. In that case I agree, but at the same time, so what? The people that read articles on that site don't smell snake oil, whether they should or not. If the site provides its own filter for its audience, that only makes it easier for us to present more highly targeted cognitive altruism.

Replies from: Gleb_Tsipursky, Lumifer
comment by Gleb_Tsipursky · 2015-11-20T04:38:12.768Z · LW(p) · GW(p)

To clarify about the science-based point, I tried to put in links to research papers, but unfortunately the editors cut most of them out. I was able to link to one peer-reviewed book, but the rest of the links had to be to other articles that contained research, such as this one from Intentional Insights itself.

Yup, very much agreed on the point of the site smelling like snake oil, and this enabling highly targeted cognitive altruism.

comment by Lumifer · 2015-11-20T16:44:20.244Z · LW(p) · GW(p)

It might be useful to identify what exactly trips your snake-oil sensors here.

The overwhelming stench trips them.

This stuff can't be edited to make it better; it can only be dumped and completely rewritten from scratch. Fisking it is useless.

comment by Gleb_Tsipursky · 2015-11-19T05:04:56.362Z · LW(p) · GW(p)

Yup, I hear you. I cringed at that when I was learning how to write that way, too. You can't believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It's very ughy.

However, having calculated the trade-offs and done a Bayesian-style analysis combined with a multi-attribute utility (MAUT) analysis, it seems that the negative feelings we at InIn get (mostly me, at this point, as others are not yet writing these types of articles for fear of this kind of backlash) are worth the rewards of raising the sanity waterline of people who read those types of websites.

Replies from: Lumifer
comment by Lumifer · 2015-11-19T16:51:43.015Z · LW(p) · GW(p)

I cringed at that when I was learning how to write that way, too.

So, why do you think this is necessary? Do you believe that proles have an unyielding "tits or GTFO" mindset so you have to provide tits in order to be heard? That ideas won't go down their throat unless liberally coated in slime?

It may look to you like you're raising the waterline, but from the outside it looks like all you're doing is contributing to the shit tsunami.

for fear of this kind of backlash

I think "revulsion" is a better word.

Wasn't there a Russian intellectual fad, around the end of the 19th century, about "going to the people" and "becoming of the people" and "teaching the people"? I don't think it ended well.

are worth the rewards of raising the sanity waterline

How do you know? What do you measure that tells you you are actually raising the sanity waterline?

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T23:49:20.176Z · LW(p) · GW(p)

Look, we can choose to wall ourselves off from the shit tsunami out there, and stay in our safe Less Wrong corner. Or we can try to go into the shit tsunami, provide stuff that's less shitty than what people are used to consuming, and then slowly build them up. That's the purpose of Intentional Insights - to reach out and build people up to grow more rational over time. You don't have to be the one doing it, of course. I'm doing it. Others are doing it. But do you think it's better to improve the shit tsunami, or to stick our fingers in our ears, pretend it's not there, and do nothing about it? I think it's better to improve the shit tsunami of Lifehack and other such sites.

The measures we use and the methods we decided on and our reasoning behind them is described in my comment here.

Replies from: Lumifer
comment by Lumifer · 2015-11-20T16:41:53.005Z · LW(p) · GW(p)

Look, we can choose to wall ourselves off from the shit tsunami out there, and stay in our safe Less Wrong corner. Or we can try to go into the shit tsunami, provide stuff that's less shitty than what people are used to consuming, and then slowly build them up.

Well, first of all, I can perfectly well stay out of the shit tsunami without hiding in the LW corner. The world does not consist of only two parts: LW and shit.

Second, you contribute to the shit tsunami; the stuff you provide is not less shitty. It is exactly what the tsunami consists of.

That's the purpose ... it's better to improve the shit tsunami

The problem is not with the purpose. The problem is with what you are doing. Contributing your personal shit to the tsunami does not improve it.

The measures we use

You measure, basically, impressions -- clicks and eyeballs. That tells you whether the stuff you put out gets noticed. It does not tell you whether that stuff raises the sanity waterline.

So I repeat: how do you know?

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-23T01:10:30.618Z · LW(p) · GW(p)

the stuff you provide is not less shitty. It is exactly what the tsunami consists of

Do you truly believe the article I wrote was no less shitty than the typical Lifehack article, for example this article currently on their front page? Is this what a reasonable outside observer would say? I'm willing to take a $1000 bet that more than 5 out of 10 neutral reasonable outside observers would evaluate my article as higher quality. Are you up for that bet? If not, please withdraw your claims. Thanks!

Replies from: Lumifer
comment by Lumifer · 2015-11-23T15:48:34.669Z · LW(p) · GW(p)

I am not terribly interested in distinguishing the shades of brown or aroma nuances. To answer your question, yes, I do believe you wrote a typical Lifehack article of the typical degree of shittiness. In fact, I think you mentioned on LW your struggles in producing something sufficiently shitty for Lifehack to accept and, clearly, you have succeeded in achieving the necessary level.

As to the bet, please specify what a "neutral reasonable" observer is and how you define "quality" in this context. Also, do I take it you are offering 1:1 odds? That implies you believe the probability you will lose is just under 50%, y'know...

Replies from: gjm, Gleb_Tsipursky
comment by gjm · 2015-11-23T20:45:01.947Z · LW(p) · GW(p)

That implies you believe the probability you will lose is just under 50%

Only if $1000 is an insignificant fraction of Gleb's wealth, or his utility-from-dollars function doesn't show the sort of decreasing marginal returns most people's do.
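
To spell out gjm's point with a toy calculation (the bankroll figure and logarithmic utility are illustrative assumptions, not facts about anyone's finances): under decreasing marginal utility, a 1:1 bet only breaks even at a win probability somewhat above 50%.

```python
import math

wealth = 20_000  # hypothetical bankroll
stake = 1_000    # the proposed 1:1 bet

# The break-even win probability p under log utility solves:
#   p * ln(wealth + stake) + (1 - p) * ln(wealth - stake) = ln(wealth)
p = (math.log(wealth) - math.log(wealth - stake)) / (
    math.log(wealth + stake) - math.log(wealth - stake))
print(f"break-even win probability: {p:.3f}")  # ~0.513, not 0.500
```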

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-24T21:56:18.971Z · LW(p) · GW(p)

Indeed, $1000 is quite a significant portion of my wealth.

comment by Gleb_Tsipursky · 2015-11-24T21:55:56.659Z · LW(p) · GW(p)

$1000 is not an insignificant portion of my wealth, as gjm notes. I certainly do not want to lose it.

We can take 10 LessWrongers who are not friends with you or me, have not participated in this thread, and do not know about this debate as neutral observers. They should be relatively easy to gather through posting in the open thread or elsewhere.

We can have gjm or another external observer recruit people, in case one of us doing it might bias the results.

So, going through with it?

Replies from: Lumifer
comment by Lumifer · 2015-11-24T22:01:29.704Z · LW(p) · GW(p)

Sorry, I don't enjoy gambling. I am still curious about "quality" which you say your article has and the typical Lifehacker swill doesn't. How do you define that "quality"?

Replies from: Gleb_Tsipursky, Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-24T22:30:54.741Z · LW(p) · GW(p)

As an example, this article, like others, links to and describes studies, gives advice informed by research, and conveys frames of thinking likely to lead to positive outcomes beyond building willpower, such as self-forgiveness, commitment, and goal setting.

comment by Gleb_Tsipursky · 2015-11-25T02:22:41.547Z · LW(p) · GW(p)

And I imagine that based on your response, you take your words back. Thanks!

Replies from: Lumifer
comment by Lumifer · 2015-11-25T03:19:12.689Z · LW(p) · GW(p)

I am sorry to disappoint you. I do not.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-25T03:20:00.102Z · LW(p) · GW(p)

Well, what kind of odds would you give me to take the bet?

Replies from: Lumifer
comment by Lumifer · 2015-11-25T03:23:48.006Z · LW(p) · GW(p)

As I said, I'm not interested in gambling. Your bet, from my point of view, is on whether a random selection of people will find one piece of shit to be slightly better or slightly worse than another piece of shit. I am not particularly interested in shades of brown; this establishes no objective facts and will not change my position. So why bother?

Four out of five dentists recommend... X-)

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-25T03:26:01.641Z · LW(p) · GW(p)

Ah, alright, thanks for clarifying. So it sounds like you acknowledge that there are different shades. Now, how do you move people who like the darkest shade across the inference gap toward lighter shades? That's the project of raising the sanity waterline.

Replies from: Lumifer
comment by Lumifer · 2015-11-25T03:37:59.897Z · LW(p) · GW(p)

Now, how do you move people who like the darkest shade across the inference gap toward lighter shades?

I am not interested in crossing the inference gap to people who like the darkest shade. They can have it.

That's the project of raising the sanity waterline.

I don't think that raising the sanity waterline involves producing shit, even of particular colours.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-25T04:01:51.017Z · LW(p) · GW(p)

You seem to have made two contradictory statements, or maybe we're miscommunicating.

1) Do you believe that raising the sanity waterline of those in the murk - those who like the dark shade because of their current circumstances and knowledge, but are capable of learning and improving - is still raising the sanity waterline?

2) If you believe it is still raising the sanity waterline, how do you raise their sanity waterline if you do not produce slightly less shitty content intentionally in order to cross the inference gap?

Replies from: Lumifer
comment by Lumifer · 2015-11-25T04:10:45.896Z · LW(p) · GW(p)

Do you believe that raising the sanity waterline of those in the murk

I don't think you can raise their sanity waterline by writing slightly lighter-shade articles on Lifehacker and such. I think you're deluding yourself.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-25T04:13:26.719Z · LW(p) · GW(p)

Ok, I will agree to disagree on this one.

comment by OrphanWilde · 2015-11-18T21:46:20.754Z · LW(p) · GW(p)

Is it worth introducing one reader by poisoning nine, however? First impressions do matter, and if the first impression rationalism gives people is that of a cult making pseudoscientific pop-self-help-ish promises about improving their lives, you're trading short-term gains for long-term difficulties overcoming that reputation (which, I'll note, the rationalist community already struggles with).

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T00:42:34.500Z · LW(p) · GW(p)

Please avoid using terms like "poisoning" and other vague claims. That's a Dark Arts argument style, which you previously clearly acknowledged is your skill set, attacking Intentional Insights through pattern-matching and vague claims. Instead, please consider using rational communication. For example, be specific and concrete about how our articles, for example this one, poison nine readers out of ten and introduce one reader to rationality. Thanks!

comment by Gleb_Tsipursky · 2015-11-19T00:20:33.392Z · LW(p) · GW(p)

Please avoid abusive/trollish claims, which you have previously explicitly acknowledged making intentionally. Don't use the Dark Arts argument style, which you previously clearly acknowledged is your skill set, to attack Intentional Insights through pattern-matching and vague claims.

Instead, please consider using rational communication. For example, be specific and concrete about how our articles, for example this one, are problematic. Thanks!

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-19T04:38:56.966Z · LW(p) · GW(p)

If I wanted to -attack- you, I'd have accused you of using credit card information from a donation or t-shirt purchase to make illicit purchases. I'd send off for your IRS expense reports to see where your budget goes, and spin that in a very unfriendly way (if any spin were necessary). I'd start spreading rumors that your polyamory posts from your early days were proof that you were sleeping with your students. And trust me, you've spread enough nonsense around Less Wrong to make each one of these accusations stick in a very uncomfortable way. I did the research, trying to decide if you were legitimate or not.

I'd -destroy- you. And your regular and completely uneducated attempts at the Dark Arts would make it -absurdly- easy.

But I'm not even talking about you here. This is me talking about marketing generally.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T06:47:02.005Z · LW(p) · GW(p)

I will save you the trouble of sending off to the IRS for the Intentional Insights expense reports. We are committed to transparency, and list our financials on our "About Us" page. I cannot control what you do with that information - it's your choice.

My purpose for revealing it is my goal of being open. I know that doing so makes me vulnerable to the kind of destruction you describe above. It's easy enough to frame me with fake screenshots doctored in Adobe Photoshop, or with other forms of framing. And then who can tell what's real, right? The accusation would be out there, I would have to defend myself, and then people who don't know me would suspect things. I would never be able to throw off the taint of it, would I? The same would be the case with rumors, etc.

Very clever and strategic Dark Arts stuff. I never thought about any of these until you raised them. I know you are an expert Dark Arts practitioner, as you showed here in your deliberate efforts to attack my reputation on Less Wrong, as you clearly describe here. I didn't know how expert you were. Updating on how much of a danger it is to me personally that you are this upset with what I'm trying to do: getting more people out there in the world to be more sane.

I also noticed you chose not to respond to the point I made about the article. I would encourage you to be clear and specific. Thanks!

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-19T21:36:11.631Z · LW(p) · GW(p)

Updating on how much of a danger it is to me personally that you are this upset with what I'm trying to do: getting more people out there in the world to be more sane.

I'm not upset with you. I'm at worst irritated, and that's entirely because your style bothers me on a visceral level, and honestly, the amusement factor usually makes up for it.

The common element of all of those things is that they're things I suspect or have suspected might be true of you, because of the way you behave - and by using the various materials that created those suspicions as "evidence" for them, the rumors are [ETA: could be, rather] made to sound disproportionately valid. (Something you've said elevates a hypothesis to "extremely weak" plausibility levels for me; I suggest the hypothesis, elevating it to "extremely weak" plausibility levels in others; then, after that update is made, I separately present circumstantial evidence, causing them to elevate it further. Double-counting evidence, basically.)
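
A toy Bayesian illustration of the double-counting failure mode described above (the prior odds and likelihood ratio are invented numbers): updating twice on what is really a single observation inflates the posterior well beyond what the evidence supports.

```python
def posterior(prior_odds, likelihood_ratios):
    """Odds-form Bayes: multiply prior odds by each likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

lr = 5.0  # how much likelier the observation is if the rumor is true

print(posterior(1 / 99, [lr]))      # ~0.048: updating once, correctly
print(posterior(1 / 99, [lr, lr]))  # ~0.202: counting the same observation
# twice (once to "suggest the hypothesis", once as "circumstantial
# evidence") roughly quadruples the posterior.
```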

In the end, however, I assign very low probabilities to any of them (which is to say, I don't believe them), and I think you're a muggle pretending to be a Dark Lord, with just enough success at the pretense to achieve the effect of making my skin crawl, and probably benefitting from it on a personal level because it's a step up from your previous level of social expertise. And at any rate, I wouldn't actually unleash any such attacks, regardless of how antagonistic I felt towards you, unless I actually thought they were true.

You may notice a tendency about my use of Dark Arts: I try to always be clear about when I'm using them and what I expect them to do, if not while I'm doing it, then after the fact. I'm not a fan of them, because I think that they have negative-on-average payouts. Which I suspect you'd disagree with, for the aforementioned reason that I suspect your social skills aren't terribly good, and you're experiencing more success using them. If this is the case: so far, you're relying on luck. As I hope I've demonstrated, a single suspicion produced by their use could do far more than erase the positive benefits you may have accrued so far.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T23:42:46.456Z · LW(p) · GW(p)

I actually do not consider myself a practitioner of Dark Arts as they are traditionally understood.

I feel pretty icky even about the "light Dark Arts" marketing stuff I am doing. As I told Lumifer in an earlier post, I cringed at that feeling when I was learning how to write for Lifehack, Huffington Post, etc. You can't believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It's very ughy. The only reason I am choosing to do so is to reach the goal of raising the sanity waterline effectively.

After bringing this question to the Less Wrong community in an earlier post, I updated toward not thinking of what I do as being in real Dark Arts territory. If I considered it to be in real Dark Arts territory, I don't think I could bring myself emotionally to do it, at least without much more serious self-modification.

comment by bogus · 2015-11-18T20:15:26.679Z · LW(p) · GW(p)

utilizes the halo effect, among other things, to get you to feel positively about them. (Name recognition, too.) The best elements of their marketing don't get noticed as marketing, indeed don't get noticed at all.

That's a good strategy when you have GEICO's name recognition. If you don't, maybe getting noticed isn't such a bad thing. And maybe "One Weird Trick" is a gimmick, but then so is GEICO's caveman series - which is also associated with a stereotype of someone being stupid. Does the gimmick really matter once folks have clicked on your stuff and want to see what it's about? That's your chance to build some positive name recognition.

Replies from: Gleb_Tsipursky, Lumifer, OrphanWilde
comment by Gleb_Tsipursky · 2015-11-19T00:38:51.151Z · LW(p) · GW(p)

Just wanted to clarify that people who read Lifehack are very much used to the kind of material there -- it's cognitively easy for them, and they don't perceive it as a gimmick. So their first impression of rationality is not of a gimmick but of something they might be interested in. After that, they don't go to the Less Wrong website, but to the Intentional Insights website. There, they get higher-level material that slowly takes them up the ladder of complexity. Only some choose to go up this ladder; most do not. Then, after they are sufficiently advanced, we introduce them to more complex content on ClearerThinking, CFAR, and LW itself. This is to avoid the problem of Endless September and other challenges. More about our strategy is in my comment.

comment by Lumifer · 2015-11-18T20:56:09.882Z · LW(p) · GW(p)

Does the gimmick really matter once folks have clicked on your stuff

The useful words are "first impression" and "anchoring".

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T00:43:26.751Z · LW(p) · GW(p)

I answered this point below, so I don't want to retype my comment, but just FYI.

comment by OrphanWilde · 2015-11-18T20:36:18.050Z · LW(p) · GW(p)

That's a good strategy when you have GEICO's name recognition.

How many people had heard of the Government Employees Insurance Company prior to that advertising campaign? The important part of "GEICO can save you 15% or more on car insurance" is repeating the name. They started with a Gecko so they could repeat their name at you, over and over, in a way that wasn't tiring. It was, bluntly, a genius advertising campaign.

If you don't, maybe getting noticed isn't such a bad thing.

Your goal isn't to get noticed; your goal is to become familiar.

And maybe "One Weird Trick" is a gimmick, but then so is GEICO's caveman series - which is also associated with a stereotype of someone being stupid.

You don't notice any other elements to the caveman series? You don't notice the fact that the caveman isn't stupid? That the commercials are a mockery of their own insensitivity? That the series about a picked-upon identity suffering from a stereotype was so insanely popular that a commercial nearly spawned its own TV show?

Does the gimmick really matter once folks have clicked on your stuff and want to see what it's about? That's your chance to build some positive name recognition.

Yes, the gimmick matters. The gimmick determines people's attitude coming in. Are they coming to laugh and mock you, or to see what you have to say? And if you don't have the social competency to develop their as-yet-unformed attitude coming in, you sure as hell don't have the social competency to take control of it once they've already committed to how they see you.

Which is to say: Yes. First impressions matter.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T00:43:56.232Z · LW(p) · GW(p)

I answered this point earlier in this thread, so I don't want to retype my comment, but just FYI.

comment by moridinamael · 2015-11-18T15:30:08.281Z · LW(p) · GW(p)

/writes post

/reads link to The Virtue of Silence

/deletes post

comment by Gleb_Tsipursky · 2015-11-25T02:41:44.108Z · LW(p) · GW(p)

My overall updating from this thread has been:

Learning a lot more about the diversity of opinions and concerns among Less Wrongers:

  • 1) Learning that there are a lot more risk-averse people on LW who are opposed to experimenting with new things, learning from experience, improving going forward, and optimizing the world than I had previously thought.
  • 2) Learning a lot about Less Wrongers' "ew" experiences and flinching away from modern marketing, despite some getting it.
  • 3) Learning that many Less Wrongers are strongly oriented toward perfectionism and bulletproof arguments at the expense of clarity and bridging inference gaps.
  • 4) Being surprised to see positive updates on my character (1, 2) as the result of this discussion; I will pay more attention to issues of character in the future - I think I previously paid too much attention to content and insufficient attention to character.

Updating toward some different strategies with Intentional Insights:

  • 1) Orienting Intentional Insights content more toward providing breadcrumbs of links to higher-quality materials than what the people on Lifehack and The Huffington Post are currently reading.
  • 2) Teaching our audience about the dangers of overconfidence sooner.
  • 3) Taking more concrete steps to minimize the risk of Endless September and of tainting the term "rationality" by decreasing mentions of Less Wrong and rationality in our content.
  • 4) Being more clear and specific in communicating scientific thinking to our audiences.
  • 5) Learning more about The Virtue of Silence and the need to keep this virtue in mind.
  • 6) Considering more carefully the trade-offs of using and simplifying certain terms and concepts.
  • 7) Updating more toward taking well-considered action despite opposition, and avoiding status-quo bias and information bias.
  • 8) Stopping unproductive conversations sooner.
  • 9) Overall, I need to focus more on striving to learn even from highly negative feedback, and to avoid the instinct to flinch away or swing back. This is my aspiration, and I did not always succeed at it in the course of this discussion. However, I believe this experience will help me grow stronger in this domain.

Thanks all for your participation. As you see, you all taught me something. I appreciate you revealing your mental maps to the extent you chose to do so, and now my territory is clearer. My gratitude to you.

EDIT: Edited for formatting; the bullet points did not come out right at first.

comment by Raelifin · 2015-11-23T15:28:27.334Z · LW(p) · GW(p)

Okay well it seems like I'm a bit late to the discussion party. Hopefully my opinion is worth something. Heads up: I live in Columbus Ohio and am one of the organizers of the local LW meetup. I've been friends with Gleb since before he started InIn. I volunteer with Intentional Insights in a bunch of different ways and used to be on the board of directors. I am very likely biased, and while I'm trying to be as fair as possible here you may want to adjust my opinion in light of the obvious factors.

So yeah. This has been the big question about Intentional Insights for its entire existence. In my head I call it "the purity argument". Should "rationality" try to stay pure by avoiding things like listicles or the phrase "science shows"? Or is it better to create a bridge of content that will move people along the path stochastically even if the content that's nearest them is only marginally better than swill? (<-- That's me trying not to be biased. I don't like everything we've made, but when I'm not trying to counteract my likely biases I do think a lot of it is pretty good.)

Here's my take on it: I don't know. Like query, I don't pretend to be confident one way or the other. I'm not as scared of "horrific long-term negative impact", however. Probably the biggest reason why is that rationality is already tainted! If we back off of the sacred word, I think we can see that the act of improving-how-we-think exists in academia more broadly, self-help, and religion. LessWrong is but a single school (so to speak) of a practice which is at least as old as philosophy.

Now, I think that LW-style rationality is superior to other attempts at flailing at rationality. I think the epistemology here is cleaner than most academic stuff and is at least as helpful as general self-help (again: probably biased; YMMV). But if the fear is that Intentional Insights is going to spoil the broth, I'd say that you should be aware that things like https://www.stephencovey.com/7habits/7habits.php already exist. As Gleb has mentioned elsewhere on the thread, InIn doesn't even use the "rationality" label. I'd argue that the worst thing InIn does to pollute the LW meme-pool is that there are links and references to LW (and plenty of other sources, too).

In other words, I think at worst* InIn is basically just another lame self-help thing that tells people what they want to hear and doesn't actually improve their cognition (a.k.a. the majority of self-help). At best, InIn will out-compete similar things and serve as a funnel which pulls people along the path of rationality, ultimately making the world a nicer, more sane place. Most of my work with InIn has been for personal gain; I'm not a strong believer that it will succeed. What I do think, though, is that there's enough space in the world for the attempt, the goal of raising the sanity waterline is a good one, and rationalists should support the attempt, even if they aren't confident in success, instead of getting swept up in the typical-mind fallacy and ingroup/outgroup and purity biases.

* - Okay, it's not the worst-case scenario. The worst-case scenario is that the presence of InIn aggravates the lords of the matrix into torturing infinite copies of all possible minds for eternity outside of time. :P

(EDIT: If you want more evidence that rationality is already a polluted activity, consider the way in which so many people pattern-match LW as a phyg.)

Replies from: Vaniver, Lumifer
comment by Vaniver · 2015-11-23T23:12:53.942Z · LW(p) · GW(p)

I think the epistemology here is cleaner than most academic stuff and is at least as helpful as general self-help (again: probably biased; YMMV). But if the fear is that Intentional Insights is going to spoil the broth, I'd say that you should be aware that things like https://www.stephencovey.com/7habits/7habits.php already exist.

This strikes me as a weird statement, because 7 Habits is wildly successful and seems very solid. What about it bothers you?

(My impression is that "a word to the wise is sufficient," and so most clever people find it aggravating when someone expounds on simple principles for hundreds of pages, because of the implication that they didn't get it the first time around. Or they assume it's less principled than it is.)

Replies from: Raelifin
comment by Raelifin · 2015-11-24T22:02:03.907Z · LW(p) · GW(p)

I picked 7 Habits because it's pretty clearly rationality in my eyes, but is distinctly not LW style Rationality. Perhaps I should have picked something worse to make my point more clear.

Replies from: Vaniver
comment by Vaniver · 2015-11-25T04:24:36.782Z · LW(p) · GW(p)

I picked 7 Habits because it's pretty clearly rationality in my eyes, but is distinctly not LW style Rationality. Perhaps I should have picked something worse to make my point more clear.

I suspect the point will be clearer if stated without examples? I think you're pointing towards something like "most self-help does not materially improve the lives of most self-help readers," which seems fairly ambiguous to me. Most self-help, if measured by titles, is probably terrible simply by Sturgeon's Law. But is most self-help, as measured by sales? I haven't looked at sales figures, but I imagine it's not that unlikely that half of all self-help books actually consumed are the ones that are genuinely helpful.
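
A toy version of that reweighting (all numbers invented): even if 90% of self-help titles are junk, the copies actually read can be dominated by the good ones when those sell much better.

```python
# Hypothetical: 90% of titles are junk (Sturgeon's Law), but the good
# 10% of titles sell 50x as many copies on average.
junk_titles, good_titles = 0.9, 0.1
avg_copies_junk, avg_copies_good = 1_000, 50_000

good_copies = good_titles * avg_copies_good  # 5000 per "unit" of titles
junk_copies = junk_titles * avg_copies_junk  # 900
print(good_copies / (good_copies + junk_copies))  # ~0.85 of copies read
```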

It also seems to me that the information content of useful self-help is about pointing to places where applying effort will improve outcomes. (Every one of the 7 Habits is effortful!) Part of scientific self-help is getting an accurate handle on how much improvement in outcomes comes from expenditure of effort for various techniques / determining narrowly specialized versions.

But if someone doesn't actually expend the effort, the knowledge of how they could have doesn't lead to any improvements in outcomes. Which is why the other arm of self-help is all about motivation / the emotional content.

It's not clear to me that LW-style rationality improves on the informational or emotional content of self-help for most of the populace. (I think it's better at the emotional content mostly for people in the LW-sphere.) Most of the content of LW-style rationality is philosophical, which is very indirectly related to self-help.

Replies from: Richard_Kennaway, ChristianKl
comment by Richard_Kennaway · 2015-11-25T09:37:00.263Z · LW(p) · GW(p)

Most self-help, if measured by titles, is probably terrible simply by Sturgeon's Law. But is most self-help, as measured by sales? I haven't looked at sales figures, but I imagine it's not that unlikely that half of all self-help books actually consumed are the ones that are genuinely helpful.

Another complication is that Sturgeon's Law applies as much to the readers. The dropout rate on free MOOCs is astronomical. (Gated link, may not be accessible to all.) "When the first MOOC came out, 100,000 people signed up but not even half went to the first lecture, let alone completed all the lectures." "Only 4-5 per cent of the people who sign up for a course at Coursera ... get to the end."

Picking up a self-help book is as easy as signing up for a MOOC. How many buyers read even the first chapter, let alone get to the end, and do all the work on the way?

Replies from: Vaniver
comment by Vaniver · 2015-11-25T14:56:40.148Z · LW(p) · GW(p)

Another complication is that Sturgeon's Law applies as much to the readers.

Agreed; that's where I was going with my paragraph 3 but decided to emphasize it less.

comment by ChristianKl · 2015-11-25T10:52:11.200Z · LW(p) · GW(p)

But is most self-help, as measured by sales? I haven't looked at sales figures, but I imagine it's not that unlikely that half of all self-help books actually consumed are the ones that are genuinely helpful.

"genuinely helpful" is a complicated term. A lot of books bring people to shift their attention to different priorities and get better at one thing while sacrificing other things.

New Agey literature about being in the moment has advantages but it can also hold people back from more long-term thinking.

comment by Lumifer · 2015-11-23T15:59:40.921Z · LW(p) · GW(p)

the goal of raising the sanity waterline is a good one, and rationalists should support the attempt

That does not follow at all.

The road to hell is in excellent condition and has no need of maintenance. Having a good goal in no way guarantees that what you do has net benefit and should be supported.

Replies from: Raelifin
comment by Raelifin · 2015-11-23T17:27:49.543Z · LW(p) · GW(p)

I agree! Having good intentions does not imply the action has net benefit. I tried to communicate in my post that I see this as a situation where failure isn't likely to cause harm. Given that it isn't likely to hurt, and it might help, I think it makes sense to support in general.

(To be clear: Just because something is a net positive (in expectation) clearly doesn't imply one ought to invest resources in supporting it. Marginal utility is a thing, and I personally think there are other projects which have higher total expected-utility.)

Replies from: Lumifer
comment by Lumifer · 2015-11-23T17:46:27.449Z · LW(p) · GW(p)

a situation where failure isn't likely to cause harm. Given that it isn't likely to hurt, and it might help, I think it makes sense to support in general.

A failure isn't likely to cause major harm, but by similar reasoning success is not likely to lead to major benefits either. In simpler terms, InIn isn't likely to have a large impact of any kind. Given this, I still see no reason why minor benefits are more likely than minor harm.

comment by Vaniver · 2015-11-18T22:29:41.579Z · LW(p) · GW(p)

The short version of my reaction is that when it comes to PR, it's better to be right than to be quick.

I expect II's effect is small, but it seems more likely to be negative than positive.

Replies from: MrMind, Gleb_Tsipursky
comment by MrMind · 2015-11-19T09:05:00.523Z · LW(p) · GW(p)

The short version of my reaction is that when it comes to PR, it's better to be right than to be quick.

I don't understand how this could possibly be true, for any common notion of "better". Imagine that British Petroleum had responded to the leak disaster only now, with a long, detailed technical report about how it was not their fault. Public opinion would have already set catastrophically against them.

Replies from: Vaniver, Elo
comment by Vaniver · 2015-11-19T16:01:05.222Z · LW(p) · GW(p)

Movement-building is categorically different from incident response. I agree that transparency and speed are the important features when it comes to incidents.

But when it comes to movement building, it seems like a bad first impression is difficult to remove, and the question of where to draw new adherents from has a significant impact on community quality. One also has to deal with the fact that many people's identities are defined at least in part negatively: if something is a thing that those people like, then they'll dislike it, because their preferences anticorrelate with their enemies'.

comment by Elo · 2015-11-19T10:21:18.927Z · LW(p) · GW(p)

The short version of

I think you might be treating the given premise uncharitably.

If, however, you are suggesting that, more often than not, "it's better to be quick than to be right" (the opposite opinion), then carry on.

Which is it?

comment by Gleb_Tsipursky · 2015-11-19T00:45:12.192Z · LW(p) · GW(p)

I'm curious why you think the effect will be small and negative. Here is the broad strategy that we have. I'd like your thoughts on it and how to optimize it - always looking for better ways to do things.

comment by Lumifer · 2015-11-18T15:51:22.240Z · LW(p) · GW(p)

Do you believe that the "one weird trick to effortlessly lose fat" articles promote healthy eating and are likely to lead people to approach nutrition scientifically?

Replies from: MrMind, Gleb_Tsipursky, bogus
comment by MrMind · 2015-11-19T09:15:04.241Z · LW(p) · GW(p)

Beware of other-modeling!

Average Lumifer is most definitely not a good model of the average person. Does "one weird trick" promote improvement? I don't know, but I do know that your gut reaction is not a good model for the answer.

Replies from: Lumifer
comment by Lumifer · 2015-11-19T16:59:43.435Z · LW(p) · GW(p)

Average Lumifer is most definitely not a good model of the average person

Oh, boy, am I not :-D

I do know some "more average" people, though, and they don't seem to be that easily taken in by cheap tricks, at least after the first dozen times :-/ And as OrphanWilde pointed out, the aim of clickbait is not to convince you of anything; it is solely to generate ad impressions.

I would be surprised if "one weird trick" diets promoted any improvement, in part because almost any diet requires some willpower and the willingness to stick with it for a while -- and the weird tricks are firmly aimed at people who have, on a good day, the attention span of a goldfish...

comment by Gleb_Tsipursky · 2015-11-19T00:59:13.888Z · LW(p) · GW(p)

Yes, if the "one weird trick" is a science-based approach, such as "be intentional about your diet and follow scientific guidelines," and leads people to other science-based strategies. Here's how I did it in this article. Do you think the first "weird trick" will not result in people having greater mental strength?

comment by bogus · 2015-11-18T15:54:47.584Z · LW(p) · GW(p)

If you think that the Shangri-La diet "promotes healthy eating" and is scientifically-based, what's wrong with promoting it as 'one weird trick to effortlessly lose fat'? It has the latter as an express goal, and is certainly, erm, weird enough.

Replies from: Lumifer
comment by Lumifer · 2015-11-18T16:14:24.239Z · LW(p) · GW(p)

what's wrong with promoting it as 'one weird trick to effortlessly lose fat'?

What's wrong is that you are reinforcing the "grab the shiniest thing which promises you the most" mentality, and as soon as the Stuff-Your-Face-With-Cookies diet promises you'll lose fat TWICE AS FAST!!eleven!, the Shangri-La diet will get defenestrated as not good enough.

Replies from: bogus
comment by bogus · 2015-11-18T16:28:24.180Z · LW(p) · GW(p)

the Shangri-La diet will get defenestrated as not good enough.

See, the difference is that the Shangri-La diet has some scientific backing, which the Stuff-Your-Face-With-Cookies diet conspicuously lacks. So, the former will win in any real contest, at least among people who are sufficiently rationally-minded[1]. Except that it won't, if you can't promote your message effectively. This is where your initial pitch matters.

[1] (People who aren't rationally-minded won't care about 'rationality', of course, so there's little hope for them anyway.)

Replies from: ChristianKl
comment by ChristianKl · 2015-11-18T19:38:02.683Z · LW(p) · GW(p)

Shangri-La diet has some scientific backing

I do believe that it works, but "scientific backing"? Did I miss some new study on the Shangri-La diet, or what are you talking about?

Replies from: Vaniver
comment by Vaniver · 2015-11-18T22:11:19.104Z · LW(p) · GW(p)

People often use "scientific backing" to mean "this extrapolates reasonably from evidence" rather than "this has been tested directly."

Replies from: ChristianKl
comment by ChristianKl · 2015-11-18T22:49:55.428Z · LW(p) · GW(p)

If you use the word scientific that way I think you lose a quite valuable word. I consider NLP to be extrapolated from evidence. I even have seen it tested directly a variety of times. At the same time I don't consider it to be scientific in the popular usage of 'scientific'.

For discussion on LW I think Keith Stanovich's criteria for science are good:

Three of the most important [criteria of science] are that (1) science employs methods of systematic empiricism; (2) it aims for knowledge that is publicly verifiable; and (3) it seeks problems that are empirically solvable and that yield testable theories.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T01:00:39.807Z · LW(p) · GW(p)

Agreed, good definition of science-backed.

comment by ChristianKl · 2015-11-18T19:08:00.489Z · LW(p) · GW(p)

On the other hand, imagine that you have a magical button, and if you press it, all not-sufficiently-correct-by-LW-standards mentions of rationality (or logic, or science) would disappear from the world.

To me it seems like you conflate the brand of rationality and a body of ideas with rationality as defined in our wiki "Rationality is the characteristic of thinking and acting optimally. An agent is rational if it wields its intelligence in such a way as to maximize the convergence between its beliefs and reality".

To me it seems that Gleb is picking the low-hanging fruit that most rationalists wouldn't even touch for... let's admit it... status reasons. He talks to the outgroup, using the language of the outgroup.

Stephan Schubert's Fact Checking 2.0 is also a rationalist way to speak to the masses. There's nothing that makes me cringe about that Huffington Post article. It's in a language that a broad audience can appreciate. The same goes for other ClearerThinking content.

The medium of quizzes is easily accessible to a broad public but more active than reading theory-laden articles.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T00:11:51.618Z · LW(p) · GW(p)

As OrphanWilde correctly pointed out in his post, if something feels cognitively easy to you but makes you cringe at how low-level it is, then you are not the target audience. Similarly, you are not the target audience if something is overwhelming for you to read. You are the target audience for the Fact Checking article. It's written for your level, and that of other rationalists.

That leads me to a broader point. ClearerThinking is a great site! In fact, I just had a great conversation with Spencer Greenberg about collaborating. As he told me, ClearerThinking targets people who are already pretty interested in improving their decision-making, and want to take the time to do quizzes and online courses.

Intentional Insights hits a couple of levels below that. It goes for people who are not aware that the human mind is suboptimal in its decision-making structure, and helps make them aware of it. Then, it gives them easy tools and resources to improve their thinking. After sufficient improvement, we aim to provide them with tools from ClearerThinking. Spencer and I specifically talked about ways we could collaborate to set up a good channel to send people on to ClearerThinking in an organized and cohesive manner, and we'll be working on setting that up. For more on our strategy, see my comment below.

comment by ChristianKl · 2015-11-18T16:39:35.373Z · LW(p) · GW(p)

Assuming that the articles are not merely ignored (where "ignoring" includes "thousands of people with microscopic attention spans read them and then forget them immediately), the obvious failure mode is people getting wrong ideas, or adopting "rationality" as an attire.

I don't think that a few articles like those will make someone who wasn't already in that area pick up rationality as attire.

Yes, this whole idea of marketing rationality feels wrong. Marketing is like almost the very opposite of epistemic rationality ("the bottom line" et cetera). On the other hand, any attempt to bring rationality to the masses will inevitably bring some distortion; which hopefully can be fixed later when we already have their attention.

I believe that the most important factor for reaching the masses isn't pandering to the masses but having one's house in order, so that people like to be in the community. The Slack chat and the new SlateStarCodex reddit channel go in that direction.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T00:56:27.179Z · LW(p) · GW(p)

But how will people find out your house is in order unless you reach out to them and tell them? That's the whole point of the Intentional Insights endeavor - to show people how they can have a better life and have their house be more in order through engaging with science-backed rational thinking strategies. In the language of houses, it's the Gryffindor arm of Hufflepuff, reaching out to others and welcoming them into rational thinking.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-19T21:16:45.384Z · LW(p) · GW(p)

But how will people find out your house is in order unless you reach out to them and tell them?

Because happy people talk to their friends about their experiences. Personal recommendations carry a lot more weight than popular mainstream articles.

to show people how they can have a better life and have their house be more in order through engaging with science-backed rational thinking strategies

There are people who believe in scientism and will adopt a thinking strategy because someone says that it's science-based. That's not what rationality is about. I think part of this community is not simply following authority but wanting to hear the chain of reasoning for why a certain strategy is science-based.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T23:27:34.214Z · LW(p) · GW(p)

Yes, personal recommendations carry more weight. But mainstream articles have a lot more reach. As I described here, the Lifehack article was viewed by many thousands of people. This is the point of writing for a broad audience. Moreover, as you can see from the discussion you and I had about a previous article, the articles are based on research.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-20T07:24:32.871Z · LW(p) · GW(p)

My comments are based on my experience doing media interviews with two orders of magnitude more reach.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-23T06:36:51.612Z · LW(p) · GW(p)

Have you done them on a consistent basis, as I am able to do Lifehack articles every couple of weeks?

I have also just published an article in the Sunday edition of a newspaper, described here, with a paper edition reaching 420K readers and monthly visits of 5 million.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-23T12:28:42.975Z · LW(p) · GW(p)

Have you done them on a consistent basis, as I am able to do Lifehack articles every couple of weeks?

In 2012 I talked to roughly one journalist per month.

I have also just published an article in the Sunday edition of a newspaper, described here, with a paper edition reaching 420K readers and monthly visits of 5 million.

Okay, that's more than thousands.

I looked at that article. There's no deep analysis of what ISIS wants, but that's okay for a mainstream publication, and recruiting is a factor.

In case you write another article about ISIS, I would recommend as background reading: http://www.theglobeandmail.com/globe-debate/the-strategic-value-of-compassion-welcoming-refugees-is-devastating-to-is/article27373931/ http://www.theatlantic.com/magazine/archive/2015/03/what-isis-really-wants/384980/

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-24T21:32:34.365Z · LW(p) · GW(p)

Cool, thanks for the links, much appreciated!

Separately, I'd be curious about your experience talking to journalists about rationality. Think you can do a discussion post about that?

Replies from: ChristianKl
comment by ChristianKl · 2015-11-24T22:37:23.487Z · LW(p) · GW(p)

My topic was Quantified Self, which is adjacent to rationality but not directly about it.

comment by signal · 2015-11-18T15:40:50.259Z · LW(p) · GW(p)

Somehow in this context the notion of "picking the low-hanging fruit" keeps coming up. This is prejudgmental: one would have a hard time disagreeing with such an action. Intentional Insights marketing is also discussed on Facebook. I definitely second the opinion stated there that the suggested T-shirts and rings are counterproductive and, honestly, ridiculous. Judging the articles seems more difficult. If the monthly newsletter generates significant readership, it might be useful in the future. However, LW and rationality FB groups already have their fair share of borderline self-help questions. I would not choose to push further in this direction.

Replies from: Viliam, Gleb_Tsipursky
comment by Viliam · 2015-11-18T22:17:05.984Z · LW(p) · GW(p)

I was also unimpressed by the T-shirts. It's just... I think it's easier to move from "bad shirts" to "good shirts" than from "no shirts" to "good shirts". It's just a different bitmap to print.

(My personal preference about shirts is "less is better". I would like to have a T-shirt only saying "LessWrong.com", and even that with smaller letters, not across the whole body. And preferably not a cheap looking shirt; not being white would probably be a good start.)

Generally, what I would really like is something between the Intentional Insights approach, and what we are doing now. Something between "hey, I'm selling something! look here! look here! gimme your money and I will teach you the secret!" and "uhm, I'm sitting here in the corner, bumbling something silently, please continue to ignore me, we are just a small group of nerds". And no, the difference is not between "taking money" and "not taking money"; CFAR lessons aren't free either.

Seems to me that nerds have the well-known bias of "too much talking, no action". That's not a reason to go exactly the opposite way. It's just... admirable what a single dedicated person can do.

Replies from: Gleb_Tsipursky, Lumifer
comment by Gleb_Tsipursky · 2015-11-19T01:05:44.957Z · LW(p) · GW(p)

Thanks for the positive sentiment about the single dedicated person!

Just FYI, there's much more that Intentional Insights does than the click-bait stuff on Lifehack. We try to cover the whole range between CFAR's targeting of the top 5%, and ClearerThinking's targeting of techy young people in the coastal cities already interested in decision-making (the latter is from my conversations with Spencer Greenberg). We've been heavily orienting toward the skeptic/secular market as a start, and then right now are going into the self-improvement sector and also policy/politics commentary. We offer a wide variety of content, much of it higher-level than the self-improvement articles. I talk more about this topic in my comment about our strategy.

To be clear about taking money, Intentional Insights is a 501(c)(3) nonprofit organization, not a for-profit company. The vast majority of our content is free, and we make our way mainly on donations.

P.S. Will keep in mind your preferences for a shirt. We currently have one that looks a lot like what you describe, here. Can you let me know how that looks compared to your ideal?

Replies from: Viliam
comment by Viliam · 2015-11-19T15:25:42.055Z · LW(p) · GW(p)

Can you let me know how that looks compared to your ideal?

Colors: great.
(The grey-brown and pink versions are also okay. I guess any version other than white is okay.)
Font size: still too large.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T18:24:02.373Z · LW(p) · GW(p)

By what percentage should it be smaller?

Replies from: Viliam
comment by Viliam · 2015-11-19T19:54:35.897Z · LW(p) · GW(p)

Two inches high at most. However, feel free to ignore me; I almost surely won't buy the shirt, and other people may have different preferences.

Anyway, this is off-topic, so I won't comment here about the shirts anymore.

comment by Lumifer · 2015-11-18T22:23:40.430Z · LW(p) · GW(p)

I think it's easier to move from "bad shirts" to "good shirts" than from "no shirts" to "good shirts".

I think quite the reverse. Inertia is a thing and bad shirts are "we already have them".

Making some shirts is a low-effort endeavour -- just throw the design at CafePress or Zazzle and you're done.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T01:13:56.506Z · LW(p) · GW(p)

I prefer the experimental approach: try things, then figure out better ways to do them. This is how the most successful startups work.

Besides, we are doing new t-shirts now based on the feedback. Your thoughts on these two options would be helpful: 1 and 2.

Replies from: Lumifer
comment by Lumifer · 2015-11-19T16:44:00.253Z · LW(p) · GW(p)

I prefer the experimental approach: try things, then figure out better ways to do them.

For this you need a way to measure and assess outcomes. What is the metric that you are using to figure out what's "better"?

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T23:49:56.631Z · LW(p) · GW(p)

Feedback from aspiring rationalists :-)

comment by Gleb_Tsipursky · 2015-11-19T01:33:12.651Z · LW(p) · GW(p)

I hear you about the t-shirts and rings, and we are trying to optimize those. Here are two options of t-shirts we think are better: 1 and 2. What do you think?

Replies from: IlyaShpitser, signal
comment by IlyaShpitser · 2015-11-19T02:10:00.594Z · LW(p) · GW(p)

I find your chutzpah impressive.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T04:41:29.303Z · LW(p) · GW(p)

Thanks, I try to not be knocked down by negative feedback, and instead welcome bad news as good news and optimize :-)

comment by signal · 2015-11-19T14:26:28.233Z · LW(p) · GW(p)

They are, but I still would not wear them. (And no rings for men unless you are married or have been a champion in basketball or wrestling.)

Let's differentiate two groups we may want to address: 1) Aspiring rationalists: that's the easy case. Take an awesome shirt, sneak in "LW" or "pi" somewhere, and try to fly below the radar of anybody who would not like it. A Moebius strip might do the same; a drawing of a cat in a box may work but could also be misunderstood. 2) The not-yet-aspiring rationalist: I assume this is the main target group of InIn. I consider this way more difficult, because you have to keep the weirdness points below the gain. And you have to convey interest in a difficult-to-grasp concept on a small area. And nerds are still less "cool" than sex, drugs, and sports. A SpaceX T-shirt may do the job (rockets are cool), but LW concepts? I haven't seen a convincing solution, but I will ask around. Until then, the best solution to me seems to be to dress as your tribe expects and to find other ways of spreading the knowledge.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T23:35:24.204Z · LW(p) · GW(p)

1) For actual aspiring rationalists, we do want to encourage those who want to promote rationality to be able to do so through shirts they would enjoy. For example, how does this one strike you?

2) For the not-yet-aspiring rationalist, do you think the shirts above, 1 and 2, do the job?

comment by bogus · 2015-11-18T15:32:20.204Z · LW(p) · GW(p)

Trying to teach someone to think rationally is a long process -- maybe even impossible for some people.

This is not incompatible with marketing per se - marketing is about advocacy, not teaching. And pretty much all effective advocacy has to be targeted at System 1 - the "heart" or the "gut" - in fairly direct terms.

To me, it seems that CFAR was supposed to be working on this sort of stuff, and they have not accomplished all that much. So I think, in a way, we should be welcoming the fact that Gleb T./Intentional Insights are now trying to fill this void. Maybe they aren't doing it very well at this time, but that's a separate matter.

Replies from: ChristianKl, Gleb_Tsipursky, Vaniver
comment by ChristianKl · 2015-11-18T16:27:36.911Z · LW(p) · GW(p)

To me, it seems that CFAR was supposed to be working on this sort of stuff, and they have not accomplished all that much.

CFAR's mission wasn't marketing but teaching people to be more rational in a way that actually makes them more rational. As far as I understand, they are making progress on that goal, and the workshops they run now are better than the early ones.

As rationalists we have a responsibility to actually give useful advice. CFAR's approach of first focusing on figuring out what useful advice is, instead of first focusing on marketing, is good.

Replies from: Viliam, bogus, Gleb_Tsipursky
comment by Viliam · 2015-11-18T22:31:39.262Z · LW(p) · GW(p)

CFAR's approach of first focusing on figuring out what useful advice is, instead of first focusing on marketing, is good.

Sounds like a false dilemma. How about splitting CFAR into two groups? The first group would keep inventing better and better advice (more or less what CFAR is doing now). The second group would take the current results of their research and try to deliver them to as many people as possible. The second group would also do the marketing. (Actually, the whole current CFAR could continue to be the first group; the only necessary thing would be to cooperate with the second one.)

You should multiply the benefit from the advice by the number of people that will receive the advice.

Yeah, it's not really a linear function. Making one person so super rational that they would build a Friendly AI and save the world may be more useful than teaching thousands of people how to organize their study time better.
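
The multiplication itself is trivial, but it makes the disagreement concrete (the figures below are invented): the comparison between deep workshops and mass articles hinges almost entirely on the contested per-person benefit estimate, plus whatever nonlinear terms the simple product ignores.

```python
# Hypothetical impact estimates: expected impact = per-person benefit * reach.
channels = {
    "CFAR-style workshops": {"people": 50, "benefit_each": 100.0},
    "mass-media articles": {"people": 50_000, "benefit_each": 0.05},
}

for name, ch in channels.items():
    print(f"{name}: {ch['people'] * ch['benefit_each']:.0f}")
# -> CFAR-style workshops: 5000
# -> mass-media articles: 2500
```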

But I still suspect that the CFAR approach is to a large degree influenced by "how we expect people in academia to behave".

Replies from: Gleb_Tsipursky, ChristianKl
comment by Gleb_Tsipursky · 2015-11-19T04:32:09.076Z · LW(p) · GW(p)

I actually spoke to Anna Salamon about this, and she shared that CFAR started by trying a broad outreach approach, and found it was not something they could make work. That's when they decided to focus on workshops targeting a select group of social elites who would be able to afford their high-quality, high-priced workshops.

And I really appreciate what CFAR is doing - I'm a monthly donor. I think their targeting of founders, hackers, and other techy social elites is great! They can really improve the world through doing so. I also like their summer camps for super-smart kids, and training for Effective Altruists, too.

However, CFAR is not set up to do mass marketing, as you rightly point out. That's part of the reason we set up Intentional Insights in the first place. Anna said she looks forward to learning from what we figure out and collaborating together. We are also working with ClearerThinking, as I described in my comment here.

comment by ChristianKl · 2015-11-18T23:12:09.079Z · LW(p) · GW(p)

teaching thousands of people how to organize their study time better.

Given the amount of akrasia in this community, I'm not sure we are at a point where we have a good basis for lecturing other people about this.

With the current urge propagation exercise, a lot of people who were taught it in person and who have the CFAR texts can't do it successfully. Iterating on it until it reaches a form that people can take and use would be good.

But I still suspect that the CFAR approach is to a large degree influenced by "how we expect people in academia to behave".

From my understanding, CFAR doesn't want to convince academia directly and isn't planning at the moment to run any trials themselves that they will publish.

Actually, the whole current CFAR could continue to be the first group; the only necessary thing would be to cooperate with the second one.

I would appreciate it if CFAR published their theories publicly in writing sooner, but I hope they will publish in the next year. I don't have access to the CFAR mailing list, and I understand that they do get feedback on their writing via the mailing list at the moment.

The first group would keep inventing better and better advice (more or less what CFAR is doing now).

CFAR very recently renamed implementation intentions to Trigger Action Plans (TAPs). If we had already marketed "implementation intentions" widely as vocabulary, it would be harder to change the term.

You should multiply the benefit from the advice by the number of people that will receive the advice.

Landmark reaches quite a lot of people, and most of their core ideas aren't written down in short articles. Scientology is another organisation that does most of its idea communication in person; it still reached a lot of people.

When I was doing Quantified Self community building in Germany, the people who came to our meetups mostly didn't come because of mainstream media but through other channels. It got to the point where another person told me that giving media interviews is just for fun, not community building.

comment by bogus · 2015-11-18T16:47:22.999Z · LW(p) · GW(p)

CFAR mission wasn't marketing but actually teaching people to be more rational

And this makes a lot of sense, if you assume that advocacy is completely irrelevant to teaching people to be more rational. ('Marketing' being just another word for advocacy.) But what if both are necessary and helpful? Then it makes sense for someone to work on marketing rationality - either CFAR itself, Intentional Insights or someone else entirely.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-18T18:31:54.607Z · LW(p) · GW(p)

CFAR certainly needs to do some form of marketing to get people to come to its workshops. As far as I understand, CFAR succeeds at doing enough marketing to sell its workshops.

As we as a community get better at actually having techniques that make people more rational, we can step up the marketing.

Replies from: bogus
comment by bogus · 2015-11-18T20:37:36.564Z · LW(p) · GW(p)

Well... Cracked/BuzzFeed-style articles vs. niche workshops. I wonder which strategy has a broader impact?

Replies from: ChristianKl
comment by ChristianKl · 2015-11-18T23:28:23.129Z · LW(p) · GW(p)

I don't think the Cracked/BuzzFeed-style articles are going to change much about the day-to-day decision-making of the people who read them. CFAR workshops, on the other hand, do.

But that isn't even the whole story. Through their approach, CFAR manages to learn what works and what doesn't. It allows them to gather empirical data that can later be transmitted through a medium less intensive than a workshop.

This rationalist community isn't like New Atheism, where the members know the truth, the main problem is that outsiders don't, and the mission is to bring the truth to them.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T04:37:44.287Z · LW(p) · GW(p)

Actually, you'd be surprised at the kind of impact Lifehacker-type articles can have. Dust specks, if sufficiently numerous, are impactful, after all. And the Lifehacker articles reach many thousands.

Moreover, it's a question of continuous engagement. Are people engaging with this content more deeply than just passing on to another Lifehack article? We have evidence that they are, as described in my comment here.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-19T21:04:31.680Z · LW(p) · GW(p)

Actually, you'd be surprised at the kind of impact Lifehacker-type articles can have. Dust specks, if sufficiently numerous, are impactful, after all.

Dust specks don't cost a person time to consume; they also have no opportunity cost. Your articles, on the other hand, might have an opportunity cost. Furthermore, it's not clear that the articles have a positive effect.

As an aside, the articles do have SEO advantages through their links, which are worth appreciating even if the people who read them aren't affected.

We have evidence that they are, as described in my comment here.

I don't see evidence that you succeed in getting people to another site via your articles. Do you have numbers?

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T23:22:41.887Z · LW(p) · GW(p)

I describe the numbers in my comment here, for the only website where I have access to the backend.

Compare the article I put out on Lifehack to other articles there. Do you think mine has a better return on investment than a typical Lifehack article?

comment by Gleb_Tsipursky · 2015-11-19T02:27:37.987Z · LW(p) · GW(p)

My take is that the goal is to give more useful advice than what people are currently getting. For example, giving science-based advice on relationships, as I do in this article, is more useful than the simple experience-based advice that the vast majority of self-improvement articles offer. If it's better, then why not do it?

Remember, the primary goal of Intentional Insights is not to market rationality per se, but to raise the sanity waterline as such. Promoting rationality comes only after people's "level of sanity" has been raised enough for them to engage with things like Less Wrong and CFAR. More in my comment on this topic.

comment by Gleb_Tsipursky · 2015-11-19T02:15:20.007Z · LW(p) · GW(p)

Thanks for the support! We are trying to fill a pretty big void.

However, just to clarify, marketing is only one aspect of what we are doing. We have a much broader agenda, which I describe in my comment here. And I'm always looking for ideas on how to do things better!

comment by Vaniver · 2015-11-18T22:23:52.512Z · LW(p) · GW(p)

To me, it seems that CFAR was supposed to be working on this sort of stuff, and they have not accomplished all that much. So I think, in a way, we should be welcoming the fact that Gleb T./Intentional Insights are now trying to fill this void.

This is not at all obvious to me. If someone tries something, and botches it, then when someone else goes to try that thing they may hear "wait, didn't that fail the last time around?"

Replies from: None, Gleb_Tsipursky
comment by [deleted] · 2015-11-19T21:31:41.493Z · LW(p) · GW(p)

This is not at all obvious to me. If someone tries something, and botches it, then when someone else goes to try that thing they may hear "wait, didn't that fail the last time around?"

This seems like a fully general counterargument against trying uncertain things...

Replies from: Vaniver
comment by Vaniver · 2015-11-20T04:27:26.928Z · LW(p) · GW(p)

Agreed that it's a fully general counterargument. I endorse the underlying point, though, of "evaluate second-order effects of success and failure as well as first-order effects," and whether or not that point carries the day will depend on the numbers involved.

comment by Gleb_Tsipursky · 2015-11-19T04:39:58.100Z · LW(p) · GW(p)

I'd be curious to hear why you think Intentional Insights is botching it if you think that is the case - it's not clear from your comment.

However, I disagree with the premise that someone botching something means other people won't do it. If that were the case, we would never have had airplanes, for example. If anything, people will be more likely to try again and do it better, because they can see it has been attempted before and know what kinds of mistakes were made.

comment by Gunslinger (LessWrong1) · 2015-11-18T14:57:32.084Z · LW(p) · GW(p)

Ohh, rationalist drama... is that gold I smell?

LW is a fairly mature site, and I'm sure somebody has done this already in one variation or another - both the marketing and the discussion of said marketing. Can any veteran confirm or deny my speculation?

(I have a longer post saved, but in the middle of writing it I realized I was reinventing the wheel.)

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T02:18:27.183Z · LW(p) · GW(p)

I've had conversations about this with many rationalists, including CFAR's people, and there haven't been many such efforts, or much discussion of them. Here's probably the most widely known example, and Liron is currently on our Advisory Board, though he'll be stepping off soon due to time constraints while still providing informal advice. However, I'd love to learn about efforts I don't know about, so if someone has something to share, please let me know!

comment by OrphanWilde · 2015-11-18T15:09:48.105Z · LW(p) · GW(p)

Ah. My response to you was in error. You approve.

The issue isn't that he's marketing rationality. The issue is that what he's doing has nothing to do with rationality; it's pseudo-rationalist babble. He markets rationality the same way that homeopathy markets healthcare. His marketing doesn't offer an easily corrected, flawed version of rationality; it is a ritual designed to exorcise the demons of bias and human suffering, and he literally promises to help you find a purpose in life through science. Which is to say, what he's doing isn't marketing, it's religion.

Replies from: Viliam, Gleb_Tsipursky
comment by Viliam · 2015-11-18T21:57:49.181Z · LW(p) · GW(p)

what he's doing has nothing to do with rationality; it's pseudo-rationalist babble

Some people could say the same thing about CFAR. So let's focus on the specific ways in which these two are different.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-19T04:44:44.796Z · LW(p) · GW(p)

CFAR is looking for the correct approach to teaching rationality. InIn claims it has already found it, and will provide it to you for only two easy payments of...

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T07:26:16.833Z · LW(p) · GW(p)

Please avoid arguments by association. It's a Dark Arts style of argument, which is a skill set you previously clearly acknowledged having. I would like to see where, in specific and concrete ways, you believe we say that we sell rationality for two easy payments. In fact, the vast majority of our content is free - the only paid products are a book I wrote, which we distributed for free for a while as well, and an intensive online class that I am teaching and spending a lot of my personal time on. Thanks!

Also, please point out where InIn claims to have found the correct approach. In my comment below, I specifically discuss how we are looking for new ideas to do things better. Maybe you are aware of something I am not, however, and I would be happy to update my beliefs.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-19T13:37:39.945Z · LW(p) · GW(p)

This is glibness, not "Dark Arts".

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T18:05:12.244Z · LW(p) · GW(p)

Can you please write a full response to my comments, and avoid using hand-waving as a Dark Arts technique? Thanks!

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-19T18:42:56.360Z · LW(p) · GW(p)

Sighs

My immediate response is to say "No", but I will explain why I am inclined to just write "No":

Because regardless of whether what I wrote is false, spending time denying it grants it validity. Your response to my opening validates it; had you never said a word, everybody would have dismissed me as an asshole. Your inability to leave anything unanswered cripples your ability to deal effectively with this situation.

I'll include my response to your other comment here because it's related:

You're worried about "Anchoring", but forget that you're already anchored here - everybody already has an opinion of you, and I didn't actually accuse you of any untoward behavior that would cause people to update their opinions. The closest I came was implying that Ella was a sockpuppet, something which I doubt anybody took seriously until you defended her in a typically exaggerated fashion, then spent time in this forum making a show of apologizing to somebody you supposedly know in real life, instead of talking to her in person, where it would make more sense. It was obviously a show put on for our benefit.

Now, before you jump in and start complaining about me accusing you of stuff again: I'm teaching you something. Stop to learn it before you respond. If I'm accusing you of something openly, others are thinking it and not saying a word because they're too polite. Consider the implications of that. Even if you're not guilty, the fact that I think you are means you are failing in some substantial way at managing how you interact with people.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-19T23:10:12.110Z · LW(p) · GW(p)

Good point (and I'm upvoting this comment of yours). I appreciate you helping me update on my weakness of responding to everything. Silence is a hard virtue, and I am a teacher/explainer at heart, which is why I became a professor and started Intentional Insights. It's very hard for me not to explain, but I will try to restrain myself more in the future.

FYI, I did apologize to Ella (here is her Twitter account) in real life, but I also wanted to demonstrate to her and to others that Less Wrong is a forum for interactions about these sorts of issues. I think it's a valuable signal of how we should act as a community.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-20T14:22:27.517Z · LW(p) · GW(p)

I also wanted to demonstrate to her and to others that Less Wrong is a forum for interactions about these sorts of issues. I think it's a valuable signal of how we should act as a community.

The mistake there is that when you put on a show, people notice it and question why you're putting on a show. You're always sending (at least) three signals: the first signal is the one you're attempting to send; the second is how you're sending it; the third is the context in which you're sending it. If you're only cognizant of the signal you're deliberately sending, you're almost certainly sending the wrong overall signal.

The "How" in this case is, roughly, putting on a show. You're telling people you're trying to manipulate them.

The context in this case is, roughly, an accusation. Combined with the manipulation, you're telling people you consider the accusation serious and potentially dangerous, and are manipulating them into not taking it seriously.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-11-23T06:28:39.652Z · LW(p) · GW(p)

I think you're talking about putting on a show about putting on a show about putting on a show. It's at a level of meta so high that I'm struggling to breathe in this thin atmosphere. I suggest we wrap up this thread.

comment by Gleb_Tsipursky · 2015-11-19T00:15:31.350Z · LW(p) · GW(p)

Please avoid abusive/trollish claims - trolling being something you have previously explicitly acknowledged as your intention. Instead, I would suggest you clarify how a science-based book written by a scholar of meaning and purpose, which I happen to be, and endorsed by numerous prominent people, is not helpful for people seeking a personal sense of purpose in life through science-backed methods. Thanks!