Can you improve your intelligence with these types of exercises?

post by icompetetowin · 2021-06-01T21:13:48.656Z · LW · GW · 6 comments

This is a question post.


Hi, I write and find exercises on biases to help myself and others think better.

For example: 
Bob is an opera fan who enjoys touring art museums when on holiday. 
Growing up, he enjoyed playing chess with family members and friends.

Which situation is more likely?

  1. Bob plays trumpet for a major symphony orchestra.
  2. Bob is a farmer.

My question to the LessWrong community:
Does it make sense to learn like this?

Answer to the example (and other exercises):
https://newsletter.decisionschool.org/p/decision-making-bias-base-rate-fallacy
 

Answers

answer by ChristianKl · 2021-06-03T00:04:22.928Z · LW(p) · GW(p)

The question you have here looks underspecified to me: it doesn't say anything about the selection algorithm and instead speaks about abstract people. I'm not sure it's meaningful to speak about likelihoods when there's no selection algorithm.

The general research on intelligence improvement suggests that most exercises targeted at improving intelligence don't improve general intelligence, at best a narrow subskill.

In this case the goal of the exercise seems to be about teaching the concept of the base rate fallacy. 

_________________

Let's look at example 2: 

You suggest that people should look at the general base rate for divorce. That's stupid given that information for your demographic is available. 

Whether or not to make a prenup is a complex decision. The argument that divorces without a prenup are harder doesn't automatically indicate that prenups are good. If someone puts their commitments in Beeminder, breaking them becomes more costly. That's the point of making the commitment in Beeminder. In the same way, marriage is also a commitment device. It's plausible that weakening its power as a commitment device is worthwhile, but it's a complex issue.

_________________

If you look at the example of cancer and faith healing, I think the way it proposes to reason about that example is bad, as it's basically an appeal to authority.

Given the policy discussions of the last decade, it's quite ironic to use the American Cancer Society here as an authority, because it advocates treatments that don't seem to increase survival over the base rate. The US historically has a higher rate of diagnosing people with cancer, a higher cancer survival rate, and the same number of cancer deaths as comparable countries.

This led the Obama administration to decide to reduce the number of people getting diagnosed with cancer by reducing testing, and the American Cancer Society was against that.

I think state-of-the-art rationality applied to that question would be:

Taboo the phrase "incurable disease". It's bad ontology that confuses the nature of the disease with the nature of the treatments we have for it. The phrase suggests a certainty about the nature of the disease that just doesn't exist in the real world, and parts of Western medicine are responsible for that.

Instead of thinking in terms of "statistical certainty", think in terms of uncertainty. The world is very uncertain. When it comes to amputees, we don't assume that there's a "statistical certainty" that some amputees will grow their limbs back.

The paradigm of cancer from two decades ago was flawed, and pointing to examples of people who survived cancer was an argument against that paradigm. A lot more cancer diagnoses resolve themselves without treatment than the American Cancer Society wants to admit.

The faith healing argument is a god-of-the-gaps argument. There are gaps in the model of how cancer develops (especially the model of two decades ago).

After accepting those gaps, the question is whether there's reason to believe that faith healing is responsible for some of them. The faith healer in question didn't point us to a controlled study to read as evidence. They also didn't point us to a gears-level model of why we should believe that faith healing works.

That's the difference between them and the American Cancer Society. The American Cancer Society has gears-level models for their treatment recommendations and some controlled studies to back them up (controlled studies where you don't treat people with cancer are, however, hard to get past ethical review boards, so the evidence we have isn't that great).

_____________________

After looking at the content of the question, here's the underlying problem: the reader isn't asked to make a real decision, they are just expected to go "boo faith-healers; yay authority". Given that that's the general current, I wouldn't expect the reader to learn anything besides "boo outgroup, yay mainstream authority".

If you want to affect real decision making, you should have examples that aren't just about rounding down to stereotypes. Examples should confront the reader in a way where simply yielding to stereotypes doesn't provide the right answer.

The lesson should be that reality is complex and not that it's easily solved with stereotypes. 

comment by icompetetowin · 2021-06-03T21:50:55.974Z · LW(p) · GW(p)

Thank you for the feedback and the interesting story about the American Cancer Society!

Do you have a blog or place where you write about that type of "background" information/history?

I still have a lot to learn, and you are right that not everything is black and white; reality is complex.

At the start, you mentioned "selection algorithm", could you expand on that?

Thanks!

Replies from: ChristianKl
comment by ChristianKl · 2021-06-18T22:42:54.487Z · LW(p) · GW(p)

One example of a selection algorithm would be: "You go to a bar in Austin, you are talking with a person, and you learn that for this person X is true. Is it more likely that A or B is true?"

This setup allows me to picture an event happening in the real world, and events have likelihoods.

While the ambiguity is unlikely to lead to a misunderstanding in this case, there are plenty of decision-theoretic problems where it matters. When creating practice exercises, you want them to be specific and without ambiguity.
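
To make this concrete, here is a toy Python sketch with made-up numbers (the occupations A and B, their base rates among patrons, and the chance that each group has property X are all arbitrary assumptions): the point is only that once the selection procedure is explicit, "which is more likely?" becomes a frequency you can actually compute.

```python
import random

# Toy sketch: with an explicit selection algorithm ("keep meeting random bar
# patrons until one turns out to have property X"), the question "is A or B
# more likely?" has a well-defined answer. All numbers are made up.

rng = random.Random(0)

occupations = ["A", "B"]
base_rates = [0.99, 0.01]           # assumed share of each occupation among patrons
p_x_given = {"A": 0.02, "B": 0.60}  # assumed chance that each group has property X

def meet_random_person_with_x() -> str:
    """Meet random patrons until one has property X; return that person's occupation."""
    while True:
        occupation = rng.choices(occupations, weights=base_rates)[0]
        if rng.random() < p_x_given[occupation]:
            return occupation

samples = [meet_random_person_with_x() for _ in range(10_000)]
print(samples.count("A") / len(samples))  # ≈ 0.77: A stays more likely even though X is rare among A
```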

I have now written up the story of cancer and have a draft; I'll share it with you.

6 comments

Comments sorted by top scores.

comment by mukashi (adrian-arellano-davin) · 2021-06-01T22:55:20.750Z · LW(p) · GW(p)

Fantastic idea! I have just signed up.

Two comments:

  1. It would be better, in my opinion, if you had a bullet list with the different options so you could click on the chosen answer.
  2. You should add a way for people to send you comments about specific examples.

For instance, I think a proper explanation of the base rate fallacy should include the proper Bayesian analysis. In fact, let's try to do that with the example that you give:

Numbers taken from a quick Google search. We are going to suppose that this takes place in the US.

Farmer population in the US: ~2.5 × 10^6

Number of symphony orchestras: ~1,224

Trumpet players per symphony orchestra: ~4

Total number of symphony trumpet players: ~5,000

Odds farmer : trumpet player = 500:1

This gives us the prior odds. We have to multiply the prior odds by the likelihood ratio. This is tricky, but let's put some numbers in anyway just for the sake of explanation. For instance, we could assume that 80% of trumpet players are keen on opera, 50% enjoy visiting museums, and 20% grew up playing chess. We will also assume that for farmers the numbers are 10% opera, 20% museums, 5% chess (this assumes independence among the different factors, which is probably a bit of a stretch).

For farmers: 10% enjoy opera, 20% enjoy visiting museums, 5% grew up playing chess = 0.001

Trumpet players: 80% opera, 50% enjoy visiting museums, 20% chess = 0.08

L_r farmer/trumpet = 0.001/0.08 = 0.0125 (this is the likelihood ratio)

Posterior odds = 500 × (0.001/0.08) = 6.25

In this case, we can see that it is around 6 times more likely that the person is a farmer than a trumpeter. 

However, if in our model we make the fraction of farmers who like opera just 1%, then the posterior would favour the trumpeters.
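
As a quick check of the arithmetic, here is a minimal Python sketch of the same calculation; the counts and trait percentages are the rough, illustrative assumptions above, not measured data.

```python
# Sketch of the odds calculation above; the counts and trait percentages are
# rough, illustrative assumptions, not measured data.

farmers = 2.5e6       # rough US farmer population
trumpeters = 5_000    # ~1,224 symphony orchestras * ~4 trumpet players each

prior_odds = farmers / trumpeters  # 500:1 in favour of "farmer"

# Assumed trait frequencies, treated as independent (a stretch, as noted above).
p_traits_given_farmer = 0.10 * 0.20 * 0.05     # opera * museums * chess = 0.001
p_traits_given_trumpeter = 0.80 * 0.50 * 0.20  # = 0.08

likelihood_ratio = p_traits_given_farmer / p_traits_given_trumpeter  # 0.0125

posterior_odds = prior_odds * likelihood_ratio
print(f"Posterior odds (farmer : trumpeter) = {posterior_odds:.2f}")  # 6.25
```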

Replies from: SimonM, icompetetowin
comment by SimonM · 2021-06-02T07:31:12.096Z · LW(p) · GW(p)

> For farmers: 10% enjoy opera, 20% enjoy visiting museums, 5% grew up playing chess = 0.001

I doubt these are independent.

> Total number of trumpets in a symphony orchestra ~500

I realise you have the math right further down, but this should be ~5000. (I assume typo)

Replies from: adrian-arellano-davin
comment by mukashi (adrian-arellano-davin) · 2021-06-02T12:25:10.929Z · LW(p) · GW(p)

Yes, typo. Thanks! Corrected.

Good point about the independence, I added a note. Do you think it would be possible to somehow come up with a better estimate of the likelihood ratio?

Replies from: SimonM
comment by SimonM · 2021-06-03T06:46:21.272Z · LW(p) · GW(p)

I would do additional conditioning. So P(opera | farmer), P(museum | opera, farmer), P(chess | museum, opera, farmer), etc.

My guess is it would look something like:

P(opera | farmer) = 5% (does anyone actually like opera?)
P(museums | opera, farmer) = 95%
P(chess | m, o, f) = 40%

So 5% * 95% * 40% = 1.9% of farmers...

P(o | t) = 80%
P(m | o, t) = 50%
P(c | m, o, t) = 20%

So 80% * 50% * 20% = 8% of trumpet players...

Which is a likelihood ratio of ~0.25, so I end up with something like 125 to 1 that we're talking to a farmer.
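
Plugging these guesses into the same sort of sketch as above (again, the probabilities are guesses, not data):

```python
# Variant of the earlier sketch using chained conditional estimates;
# the probabilities are guessed values, not data.

prior_odds = 500  # farmers : symphony trumpet players, from the parent comment

# P(opera | farmer) * P(museums | opera, farmer) * P(chess | museums, opera, farmer)
p_traits_given_farmer = 0.05 * 0.95 * 0.40      # = 0.019
p_traits_given_trumpeter = 0.80 * 0.50 * 0.20   # = 0.08

likelihood_ratio = p_traits_given_farmer / p_traits_given_trumpeter  # ~0.24

posterior_odds = prior_odds * likelihood_ratio
print(f"Posterior odds (farmer : trumpeter) ≈ {posterior_odds:.0f}")  # ≈ 119, roughly 125:1
```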

comment by icompetetowin · 2021-06-01T23:13:02.958Z · LW(p) · GW(p)

Thank you for the feedback!

In terms of convenience (doing the exercises on the site), I'm looking into it.

Your comment about allowing people to send comments for specific examples is a great idea.

Would you mind if I included your explanation of the example on the site?

 

Replies from: adrian-arellano-davin
comment by mukashi (adrian-arellano-davin) · 2021-06-02T01:29:31.783Z · LW(p) · GW(p)

Please do! Let me know if I can lend a hand somehow ;)