On Cognitive Distortions 2018-03-25T22:44:10.482Z
On Defense Mechanisms 2018-03-04T18:53:08.650Z
Rationality: Abridged 2018-01-06T00:20:04.919Z


Comment by Quaerendo on Does Thinking Hard Hurt Your Brain? · 2018-04-30T13:00:54.542Z · LW · GW

For me, contemplating Zen koans for too long can make my brain "hurt".

Does a dog have Buddha-nature or not? Show me your original face before your mother and father were born. If you meet the Buddha, kill him. Look at the flower and the flower also looks.

I find it interesting because, unlike coding a program or solving a math equation or playing chess, it doesn't seem like koans have a well-defined problem/goal structure or a clear set of rules and axioms. Some folks might even call them nonsensical. So I'm not sure to what extent the notion of (in)efficient optimization is applicable here; and yet it also appears to be an example of "thinking hard". (Of course, a Zen instructor would probably tell you not to think about it too hard.)

Comment by Quaerendo on Is Rhetoric Worth Learning? · 2018-04-12T13:14:36.184Z · LW · GW

For what it's worth, I have three years' experience with university-level competitive debating, specifically with the debate format known as British Parliamentary (which is the style used by the World Universities Debating Championship or WUDC). Since many people are unfamiliar with it, I'll briefly explain the rules: one BP debate comprises four teams of two members each. All four teams are ranked against each other, but two of them must argue for the affirmative ("government") side of the issue and the other two for the negative ("opposition") side. The objective is basically to persuade the adjudicators why your team should win. In this format you do not get to research the topic beforehand, and you don't even know what you are going to debate until 15 minutes before the debate starts -- which means that it requires a lot of quick brainstorming and improvisation. And since each individual speaker gets only 7 minutes to make their case, you have to prioritize the most important content and structure it coherently.

In our training sessions we actually do not study classical rhetoric. So I'm not familiar with terms like elocutio, dispositio or pronuntiatio -- although I can definitely recognize clear delivery, organized structure, and appeals to logic as important principles of varsity debating. I think there are skills one can learn from this kind of public speaking:

  • The target of persuasion in BP is the judge, whom we regard as a layman, or "average informed voter". This means that he or she has a high-school education and reads a newspaper once in a while, but is not an expert on any particular subject. Furthermore, the judge is not supposed to have a bias in favor of left-wing or right-wing arguments (but is moderate by the standards of a Western liberal democracy). This rule encourages speakers to use arguments that will appeal to a broad segment of people.
  • The persuasiveness of a speech is evaluated not based on how impressive your style is, but on how compelling your arguments are. A good argument is one that is (a) believable, i.e. the premises are acceptable and the conclusion follows logically; and (b) relevant to the concerns of the debate, i.e. something that counts in favor of your side and against the other side. It is up to you as a speaker to explain why a claim you make is likely to be true and why it implies that your team should win. This encourages speakers to be clear about what it is that they actually stand for, and why the rest of us should care.
  • Due to the short preparation time and the fact that judges do not fact-check the participants' speeches using Google, it doesn't make much sense to cite academic papers or statistics in one's speech. Additionally, to base an argument on a single example makes it vulnerable to refutation by counterexample. So if you can neither say "Studies show that in 73% of cases, X happens..." nor say "Last year, there was a case where X happened..." and get away with it, then what can you say? Well, a useful trick here is to remember that debating entails a comparison between two worlds: you can claim that "X is more likely (or less likely) to happen if we enact this policy than if we don't". Then you need argumentation to explain why that is the case, starting from premises that most people would accept as common knowledge or some kind of first principle about how the world works. You also need to explain why, assuming your argument is correct, this justifies/warrants the kind of action or conclusion you are proposing. This aspect of BP debating encourages speakers to think in terms of general rules rather than specific data points.
  • The interactive nature of the game requires you to respond to the arguments presented by the other teams. A rebuttal works the same way as an argument, but with the opposite intention: you explain why the claims made by the other side are either (a) unrealistic; or (b) unimportant, perhaps because they are not mutually exclusive with your claims, or because they are of such little consequence that they are outweighed by other factors. Thus, speakers have to simultaneously see both sides of the dispute in order to isolate the core tensions and advocate successfully for their side.
  • One quirk of this kind of format is that you don't get to choose beforehand which side of the topic you will be arguing for. This means that you will occasionally be required to defend positions that you personally disagree with, and poke holes in the ones you cherish. This is great for challenging confirmation bias and inviting speakers to consider different points of view. Even if you don't radically change your worldview, you will at least develop a greater understanding of the other side.

Of course, one could also criticize this type of debating. Firstly, it inculcates a competitive spirit rather than a spirit of collaborative truth-seeking. Secondly, as a game it is in some ways detached from the nuances and practicalities of persuasion in the "real world", where things like statistical figures and budgetary limits and constitutionality do matter. Finally, one might become too adept at constructing plausible-seeming justifications for any conclusion one likes, regardless of the actual evidence -- and this is something Eliezer warned us about:

And that problem—too much ready ammunition—is one of the primary ways that people with high mental agility end up stupid, in Stanovich's "dysrationalia" sense of stupidity.
You can think of people who fit this description, right?  People with high g-factor who end up being less effective because they are too sophisticated as arguers?

When you start with a given position on a topic (let's say you have to argue against legalizing recreational drugs) and construct arguments in its favor, you are essentially engaging in rationalization instead of rationality.

So do the benefits outweigh these risks? I don't know.

Comment by Quaerendo on On Defense Mechanisms · 2018-03-08T16:39:09.829Z · LW · GW

In that case, in what sense does he dislike his professor? From your example, him disliking his professor seems to be a free-floating XML tag.

I suppose it can be explained by the liking/wanting vs. approving distinction (you can have a feeling that you disapprove of) or Alicorn's idea of repudiating one's negative characteristics. And then the cognitive dissonance created by giving an apple to someone you dislike may be resolved by shifting your attitude toward the person in a positive direction -- so in this sense, Undoing is a strategy to reduce disapproved/repudiated properties.

This is especially notable with the way projection/reaction formation is discussed in practice: "He's opposing position X because he secretly supports it."

Interestingly, there is research showing that some people who oppose homosexuality or gay marriage do in fact show an unconscious attraction to the same sex -- see e.g. Weinstein et al. (2012). However, in this case I would agree that "he overtly opposes X because he covertly supports X" is the wrong way of looking at it; rather, he (the ego in Freudian terms) disapproves of his desire (the id). Of course, this doesn't imply that everybody who opposes X is doing so as a defense mechanism.

Edit: To clarify, I'm certainly not implying that homosexuality is a negative characteristic; just that some people are raised in a culture where it is stigmatized, and so they internalize the value that it is. The specific claim made by the Weinstein et al. paper is as follows:

  • Some children have parents who don't support their autonomy. Some of those parents happen to also hold negative attitudes toward homosexual individuals. The combination of these two factors results in the children seeking approval from their parents by suppressing the needs/wishes/beliefs etc. that aren't supported by their parents. The researchers did a survey of participants' explicit views about homosexuality and their sexual orientations, and also measured the participants' implicit sexual orientation using a reaction time task. They found a discrepancy between explicit and implicit sexual orientations, especially when parents showed low autonomy support, and also found that this discrepancy was related to greater self-reported homophobia and endorsement of anti-gay policy positions. The researchers conclude: "...these effects can be understood, at least in part, as a defensive response to maintain the suppression of self-relevant, but threatening, information" (p.829).

I decided to edit this comment instead of replying directly to tempus' comment below, as I did not perceive that commenter to be acting charitably.

Comment by Quaerendo on On Defense Mechanisms · 2018-03-06T16:24:27.136Z · LW · GW

Vernor doesn't give the professor an apple because he dislikes the professor per se, but because he feels guilty about his dislike for the professor, which he tries to "fix" by giving a gift -- this works exactly because giving a gift usually indicates liking someone (putting aside other motives, such as ingratiation).

A different example of the "Undoing" defense mechanism would be an abusive alcoholic father who buys his kids lots of Christmas presents (see the sources here and here).

In psychoanalytic theory, these various phenomena are related in that they serve the function of protecting one's ego. But if you think that's a poor way of conceptualizing them, I'd be curious how you think we could do better.

Edit: For example, gworley's comment conceptualizes them as defending one's prior probability.

Comment by Quaerendo on Rationality: Abridged · 2018-02-20T19:50:41.769Z · LW · GW

Thanks for pointing it out. I've fixed it and updated the link.

Comment by Quaerendo on Rationality: Abridged · 2018-02-20T19:47:21.317Z · LW · GW

Thanks, I'm glad you found it useful!

The reason I didn't link to LW 2.0 is that it's still officially in beta, and I assumed that the URL would eventually change back (but perhaps I'm mistaken about this; I'm not entirely sure what the plan is). Besides, the old LW site links to LW 2.0 on the frontpage.

Comment by Quaerendo on The Principled Intelligence Hypothesis · 2018-02-14T18:51:37.901Z · LW · GW

I'm wildly speculating here, but perhaps enforcing norms is a costly signal to others that you are a trustworthy person, meaning that in the long-term you gain more resources than others who don't behave similarly.

Comment by Quaerendo on What are the Best Hammers in the Rationalist Community? · 2018-01-26T22:13:05.300Z · LW · GW

I cannot say much about CFAR techniques, but I'd nominate the following as candidates for LW "hammers":

Of course, the list is not exhaustive.

Comment by Quaerendo on Rationality: Abridged · 2018-01-09T13:28:16.802Z · LW · GW

Thanks a lot for doing this!

Comment by Quaerendo on Rationality: Abridged · 2018-01-09T13:26:45.857Z · LW · GW

Indeed ; )

Comment by Quaerendo on Rationality: Abridged · 2018-01-06T20:27:10.443Z · LW · GW

Thanks for the feedback.

Here's the quote from the original article:

I said, "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong."
He said, "Well, um, I guess we may have to agree to disagree on this."
I said: "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong."

One could question whether Eliezer was right to appeal to AAT in a conversation like this, given that neither he nor his conversational partner is a perfect Bayesian. I don't think it's entirely unfair to say that humans are flawed to the extent that we fail to live up to the ideal Bayesian standard (even if such a standard is unattainable), so it's not clear to me why it would be misleading to say that if two people have common knowledge of a disagreement, at least one of them (possibly both) is "doing something wrong".
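For what it's worth, the formal statement is quite compact. Here is a rough rendering (my own paraphrase of Aumann's 1976 formulation, so take the exact notation with a grain of salt):

```latex
% Aumann's Agreement Theorem (1976), informal statement.
% Two agents share a common prior $P$ on a state space $\Omega$ and
% receive private information via partitions
% $\mathcal{P}_1$ and $\mathcal{P}_2$ of $\Omega$.
\textbf{Theorem.}\quad
\text{Let } q_i = P(E \mid \mathcal{P}_i)(\omega),\ i \in \{1, 2\},
\text{ be the posterior probability each agent assigns to an event }
E \subseteq \Omega \text{ at state } \omega.
\text{ If } q_1 \text{ and } q_2 \text{ are common knowledge at } \omega,
\text{ then } q_1 = q_2.
```

Note how strong the premises are: a common prior, honest Bayesian updating, and genuine common knowledge of the posteriors. Humans satisfy none of these exactly, which is precisely what makes the real-world application debatable.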

Nonetheless, I agree that it would be an improvement to at least be clearer about what Aumann's Agreement Theorem actually says. So I will amend that part of the text.

Comment by Quaerendo on Rationality: Abridged · 2018-01-06T17:31:38.422Z · LW · GW

Thanks for the kind words :) I agree with what you're saying about the 'wall-of-text-iness', especially on the web version; so I'm going to add some white space.

Comment by Quaerendo on Why everything might have taken so long · 2018-01-04T14:09:52.143Z · LW · GW

Seeing this list made me think: If these factors contributed to past humans taking so long to invent things, perhaps we could try to influence them in our current era in order to accelerate progress.

Some of them are already changing, for example population growth or the trend in decreasing absolute poverty. However, there seems to be an opportunity to direct more deliberate effort into making headway in the following areas:

  • Concepts: different human languages are known to have words that are hard to translate into other languages; studies have found that people with different native tongues think differently, etc. So it is possible that we are missing concepts that already exist but haven't spread universally. Not having these concepts means that we cannot think using them. (The same goes for specialized academic fields.)
  • External thinking tools: human-computer interfaces and augmented reality can help here.
  • Records: we seem to be getting better at this, but see also the vulnerability of digital data. In addition to finding new ways of storing data for the long-term, we also need to improve the communication of data across geographic and linguistic regions of the world. For example, a lot of academic articles are never translated into English.
  • Enhancing human capabilities: genetic engineering, nootropics, cyborgization, etc. can potentially improve our mental hardware (although currently they are ineffective or risky).
  • Social costs: "nerd culture" seems to be more accepted now, which I guess is encouraging. However, there are still people who oppose technological advances (like GMOs, stem cell research, nuclear energy, etc.) due to ethical concerns or just general skepticism of science.
  • Value of invention; orders of invention; prerequisites of inventions; etc. : I'm less sure about how we can modify these. Perhaps the more we invent, the more we can invent, implying that invention today should be easier than at any prior point; however, it is also possible that most of the low-hanging fruit have already been harvested. Any thoughts?

Comment by Quaerendo on Mapping Another's Universe · 2017-11-17T23:52:25.395Z · LW · GW

This technique of "place yourself in the other's shoes and visualize their 'universe' from inside" might be useful not only for avoiding cases of the typical mind fallacy or false consensus effect (whereby you assume your epistemic and behavioral patterns are "normal"), but also the correspondence bias (whereby you attribute others' actions to their innate character traits and your own to your particular situation). What these cases have in common is that they are often self-serving: you get to fit in socially, plus excuse your social infractions. Trying to understand different universes can help de-center your own perspective.

Comment by Quaerendo on The Five Hindrances to Doing the Thing · 2017-10-02T06:56:02.364Z · LW · GW

I agree with magfrump on using citations, and I'm also interested in seeing how different anti-akrasia frameworks relate to each other. For example, the Motivation-Opportunity-Ability model comes to mind, which seems strictly simpler than the "five hindrances" presented here. But I guess there is the question of how specific vs. how general we want to be: too fine-grained and the model becomes impractical to use; too coarse-grained and it loses effectiveness due to being vague.

Comment by Quaerendo on Value Arbitrage · 2017-09-21T11:37:22.238Z · LW · GW

Yes, by "learning things in a certain order" I include the case of learning Esperanto before learning Spanish (as opposed to doing it the other way around, which would presumably be less efficient for a native English speaker who wants to learn Spanish).

Comment by Quaerendo on Value Arbitrage · 2017-09-21T10:07:59.791Z · LW · GW

I agree with the first part of your comment, but I'm not sure about the Esperanto example.
Arbitrage refers to exploiting a difference in price of the same good (usually fungible, e.g. currencies, shares or commodities) in two markets to make a financial profit.

OP seems to be talking about transfer of learning, and you seem to be talking about learning things in a certain order so that the total investment is lower. Both of these are good things, but I'm not sure either fits the meaning of the word arbitrage.