Stupid Questions February 2017

post by Erfeyah · 2017-02-08T19:51:22.821Z · LW · GW · Legacy · 105 comments


This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

105 comments

Comments sorted by top scores.

comment by Erfeyah · 2017-02-08T20:33:55.773Z · LW(p) · GW(p)

I am getting to grips with the basics of Bayesian rationality and there is something I would like to clarify. For this comment please assume that whenever I use the word 'rationality' I mean 'Bayesian rationality'.

I feel there is too strong a dependency between rationality and the available data. If our current understanding is close to the truth, then rational assessment will be effective. But in any complex subject the data is so inconclusive that the possibility that we cannot even conceive of the right hypothesis, in order to rationally choose it over its alternatives, is quite high. No? I will give a simplified example.

In this post it is said:

Suppose you are a doctor, and a patient comes to you, complaining about a headache. Further suppose that there are two reasons for why people get headaches: they might have a brain tumor, or they might have a cold. A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom for cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?

It then goes on to explain how we rationally choose between the options. That is all good. Let's suppose though that the actual cause of the headache is psychosomatic. And let us also suppose that the culture in which the experiment is taking place does not have a concept of psychosomatic causes. They just always think it is either cancer or a cold. And most of the time it is. Is it not true that a rational assessment of the situation will fail? How would someone with a sound rational mind approach that situation (in the world of the thought experiment)?
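
To make the worry concrete, here is a minimal Python sketch of the two-hypothesis calculation the quoted post describes. The base rates and likelihoods are invented, since the post only gives qualitative ones; the point is that a psychosomatic cause which is absent from the hypothesis space receives exactly zero posterior, no matter what the data say.

    # A minimal sketch, with invented numbers, of the two-hypothesis calculation.
    # The hypothesis space deliberately omits the true (psychosomatic) cause.

    priors = {          # P(hypothesis): assumed base-rate weights, not from the post
        "tumor": 0.001,
        "cold": 0.20,
    }
    likelihoods = {     # P(headache | hypothesis): also assumed
        "tumor": 1.0,   # "a brain tumor always causes a headache"
        "cold": 0.05,   # "a headache is rarely a symptom of a cold"
    }

    # Bayes' rule: P(h | headache) is proportional to P(headache | h) * P(h).
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(joint.values())
    posterior = {h: joint[h] / evidence for h in joint}
    print(posterior)
    # ~{'tumor': 0.09, 'cold': 0.91} -- the posterior sums to 1 by construction,
    # so a psychosomatic cause that was never listed gets exactly zero credence.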

This is dealt with in science by not accepting explanations as truths until they are confirmed experimentally (well, in an idealised science anyway, because in reality scientists jump into philosophical speculation all too often). But rationality can only be effective if we assume that we are already quite close to an accurate understanding of nature. And I hope you will agree that the evidence does not indicate that at all.

Am I missing something here?

Replies from: moridinamael, MrMind, Douglas_Knight, Vaniver, MaryCh, AmagicalFishy
comment by moridinamael · 2017-02-08T21:37:00.207Z · LW(p) · GW(p)

I try to always include an extra option which is just "everything I haven't thought of yet", and the prior for that option reflects how confident you feel about your mastery of the domain. Even a medical expert must always admit some small likelihood that the true cause is neither a tumor nor a cold, because medical experts know that medical science evolves over time.
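
A rough sketch of how that extra option might enter the toy calculation above; the catch-all's prior weight of 0.03 and its 0.10 likelihood of producing a headache are pure assumptions, standing in for how confident you feel about the domain.

    # Same invented numbers as the earlier sketch, plus a catch-all hypothesis.
    # The prior weights don't need to sum to 1; only the ratios matter after
    # normalisation below.
    priors = {"tumor": 0.001, "cold": 0.20, "something_else": 0.03}
    likelihoods = {"tumor": 1.0, "cold": 0.05, "something_else": 0.10}

    joint = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(joint.values())
    posterior = {h: joint[h] / evidence for h in joint}
    print(posterior)
    # ~{'tumor': 0.07, 'cold': 0.71, 'something_else': 0.21}
    # Even with a modest prior, the unknown-cause bucket keeps a sizeable share
    # of the posterior, which is the "admit some small likelihood" move in numbers.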

Replies from: Erfeyah
comment by Erfeyah · 2017-02-08T22:24:08.533Z · LW(p) · GW(p)

That is good. But how would you assign the probability of 'everything you haven't thought of yet' in a problem? You would have to base it again on data, which takes us back to the same question. Data can of course be sufficient in fields that have advanced to a point of overwhelming clarity. But when we get into sufficiently complex subjects, the rational approach would be to say that my 'everything I haven't thought of yet' prior is so large that the only rational answer is "I don't know". Would you agree?

Replies from: moridinamael, ChristianKl
comment by moridinamael · 2017-02-08T23:00:42.121Z · LW(p) · GW(p)

Yes, you can't get away from data. And in fact it is often rational to put a dominant prior on "whatever the explanation is, it probably isn't one of the ones that I've managed to come up with". There's nothing magic about Bayes' theorem; it can't create new information, but when properly used it helps avoid overconfidence and underconfidence.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-08T23:16:23.492Z · LW(p) · GW(p)

Yes, this is clear, thank you. I recall rational discussions on LW, with expositions on ontology, cosmology, sociology, the arts, etc., in which probabilities are thrown around as if we know a lot, which makes me think that this point has not been fully understood. On the other hand, whenever I talk to people on a one-to-one basis (like in this comment thread) they seem lucid about it. I will have to address it where I see it. Thanks for your input! :)

comment by ChristianKl · 2017-02-09T10:49:20.101Z · LW(p) · GW(p)

"I don't know" means "I can't predict the outcome better than a dart throwing monkey". Doctors usually know more than that.

You seem to have a lot of black and white terms like "right hypothesis", "accepting explanations as truths" and "I don't know" in your thinking.

In the Bayesian perspective 0 and 1 aren't probabilities.
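
One standard way to see that claim, as a small sketch rather than a formal argument: expressed as log-odds, 0 and 1 correspond to infinite certainty, and any finite piece of evidence only shifts log-odds by a finite amount.

    # Probabilities of 0 and 1 sit at minus/plus infinity on the log-odds scale,
    # so no finite amount of evidence ever actually reaches them.
    from math import log

    def log_odds(p):
        return log(p / (1 - p))

    for p in (0.5, 0.9, 0.99, 0.999999):
        print(p, round(log_odds(p), 2))
    # prints 0.0, 2.2, 4.6, 13.82 -- growing without bound as p approaches 1.
    # log_odds(1.0) would divide by zero, and log_odds(0.0) would be log(0).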

Replies from: Erfeyah
comment by Erfeyah · 2017-02-09T11:25:14.516Z · LW(p) · GW(p)

"I don't know" means "I can't predict the outcome better than a dart throwing monkey". Doctors usually know more than that.

The example I chose is a bit misleading. I am just using it to indicate the problem though. You are thinking of 'doctors' as the doctors in our culture but I was hypothesising a situation in which the field is much more primitive. It is interesting to me to see how rationality is applied to a field where data is unavailable, scarce or inconclusive.

You seem to have a lot of black and white terms like "right hypothesis", "accepting explanations as truths" and "I don't know" in your thinking.

I don't see the problem with saying 'I don't know' when the data is obviously insufficient for me to judge. I would actually consider it counterproductive to give a probability in this case, as it might create the illusion that I know more than I actually do. In this sense my 'I don't know' is not a value of 0 but an admission that there is no reason to use rational jargon at all.

The terms 'right hypothesis' and 'truth' are used, in the above comment, in the context of what is considered a good enough truth in science. You are right that this can be confusing if we get into the epistemological details, though I thought it was sufficient for the purpose of communicating in this thread. I can change it to 'scientific fact' and then we should be OK?

Does that make sense?

Replies from: ChristianKl
comment by ChristianKl · 2017-02-09T11:42:59.196Z · LW(p) · GW(p)

After Popper, science isn't about establishing truth or the right hypothesis.

I would actually consider it counterproductive to give a probability in this case, as it might create the illusion that I know more than I actually do.

If you can do better than a random guess (the dart throwing monkey) then you have knowledge in the Bayesian sense.

There could be situations where you really don't know more than the dart throwing monkey, and where it thus doesn't make sense to speak about probability, but in most cases we know at least a little bit.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-09T12:09:47.528Z · LW(p) · GW(p)

After Popper, science isn't about establishing truth or the right hypothesis.

I am not familiar with Popper but I would agree anyway. I will be more careful with my terms. Would 'scientific fact' work though? I think it does but I am open to being corrected.

If you can do better than a random guess (the dart throwing monkey) then you have knowledge in the Bayesian sense.

[1] What if a rational assessment of inconclusive data weighs you in the wrong direction? Wouldn't you then start doing worse than the dart throwing monkey?

There could be situations where you really don't know more than the dart throwing monkey, and where it thus doesn't make sense to speak about probability, but in most cases we know at least a little bit.

I would challenge your 'in most cases' statement. I would also challenge the contention that a little bit is better than nothing according to [1].

Replies from: ChristianKl, gjm
comment by ChristianKl · 2017-02-10T09:02:45.634Z · LW(p) · GW(p)

Would 'scientific fact' work though?

No. Everything in science is falsifiable and open to challenge.

[1] What if a rational assessment of inconclusive data weighs you in the wrong direction?

It's certainly possible to be completely deceived by reality.

Whenever you act in a situation where the outcome matters to you, you will take the expected outcomes into account. Even if you say "I don't know" you still have to make decisions about what to do about the issue.

Maybe http://slatestarcodex.com/2013/08/06/on-first-looking-into-chapmans-pop-bayesianism/ is worth reading for you. The kind of "I don't know" that you advocate is what Scott calls Anton-Wilsonism.

I would challenge your 'in most cases' statement.

When doing credence calibration I don't get results that indicate that I should label all claims 50/50.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-10T12:34:13.560Z · LW(p) · GW(p)

No. Everything in science is falsifiable and open to challenge.

I understand and agree with that. I am just trying to find the term I can use when discussing scientific results. I thought 'scientific fact' was OK because it includes 'scientific', which implies all the rest. But yes, the word 'fact' is misleading. Should we just call it a 'scientific result'? What do you recommend?

Maybe http://slatestarcodex.com/2013/08/06/on-first-looking-into-chapmans-pop-bayesianism/ is worth reading for you.

I can't stress enough how useful that link is to me as a new LW user. My criticisms are quite close to what David Chapman is saying and it is really nice to see how someone representative of LW responds to this.

The kind of "I don't know" that you advocate is what Scott calls Anton-Wilsonism.

Discussing on LW is giving me the impression that I have to learn to talk in a new language. I have to admit that, so far, all the corrections you guys have suggested are an improvement on my previous way of expressing myself. Very exciting!

But this is a great opportunity to deepen my understanding of the language by practising it. Let me try to reformulate my 'I don't know' in the Bayesian language. So, what I mean by 'I don't know' is that you should use a uniform distribution. For example, you have attached the label of 'Anton-Wilsonism' to me according to what I have expressed so far. I could assume, if you are literally using this way of thinking, that you went through the process of considering a weight for the probability that I am an exact match for what Scott is describing, and decided that, based on your current evidence, I am. This also implies that you have, now or in the past, assigned ratings to all the assumptions and conclusions made in Scott's two paragraphs (there are quite a few) and that you are applying all of these to your model of me. So:

  1. Did you really quantify all that, or were these labels applied automatically, as commonly happens with humans?
  2. Do you think my recommendation of building your model using uniform distributions would be useful while we go through the process of getting more evidence about each other?

It is a trivial example, but indicative of an attitude: using my approach, your action could change from writing (and, more importantly, thinking):

"The kind of 'I don't know' that you advocate is what Scott calls Anton-Wilsonism."

to

"Is the kind of 'I don't know' you advocate what Scott calls Anton-Wilsonism?"

When doing credence calibration I don't get results that indicate that I should label all claims 50/50.

I just learned (see comments below) that "I don't know" is not 50/50 but a uniform distribution. Could you give me a few examples of credence calibration as it happens from your perspective?

Whenever you act in a situation where the outcome matters to you, you will take the expected outcomes into account. Even if you say "I don't know" you still have to make decisions about what to do about the issue.

Indeed. This is practical. All I am saying is that we shouldn't confuse the fact that we need to decide when we need to decide with the belief that our ratings express truth. I think it perfectly possible to be forced by circumstances into making an action-related decision, but to return the conceptualisation of the underlying assumptions to a uniform distribution for the purpose of further exploration. It is just being aware that you have a belief system, that you need it, but not fully believing in it.

Replies from: ChristianKl
comment by ChristianKl · 2017-02-10T14:06:42.756Z · LW(p) · GW(p)

My criticisms are quite close to what David Chapman is saying and it is really nice to see how someone representative of LW responds to this.

No, I don't think what you are saying is close to what Chapman is arguing. Chapman doesn't argue that we should say "I don't know" instead of pinning probability on statements where we have little knowledge.

I understand and agree with that. I am just trying to find the term I can use when discussing scientific results. I thought 'scientific fact' was OK because it includes 'scientific', which implies all the rest.

There are enough people who use terms like "scientific fact" without thinking in terms of falsificationism that it's not clear what's implied.

All I am saying is that we shouldn't confuse the fact that we need to decide when we need to decide with the belief that our ratings express truth.

To me your sentence sounds like you have a naive idea of what the word "truth" is supposed to mean. A meaning that you learned as a child.

There are some intuitions that come with that view of the world. Some of those intuitions will come into conflict when you come into contact with more refined ideas of what truth happens to be and how epistemology should work. Various philosophers, like Popper, have put forward more refined concepts. Eliezer Yudkowsky has put forward his own concepts on LessWrong.

I just learned (see comments below) that "I don't know" is not 50/50 but a uniform distribution. Could you give me a few examples of credence calibration as it happens from your perspective?

For binary yes/no predictions the uniform distribution leads to 50/50. https://www.metaculus.com, http://predictionbook.com/ and https://www.gjopen.com/ do have plenty of examples.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-10T14:56:07.409Z · LW(p) · GW(p)

No, I don't think what you are saying is close to what Chapman is arguing. Chapman doesn't argue that we should say "I don't know" instead of pinning probability on statements where we have little knowledge.

Sorry. I meant my general criticisms (which I haven't expressed), not in the sense of our current discussion. I wasn't very clear.

To me your sentence sounds like you have a naive idea of what the word "truth" is supposed to mean. A meaning that you learned as a child. There are some intuitions that come with that view of the world. Some of those intuitions will come into conflict when you come into contact with more refined ideas of what truth happens to be and how epistemology should work. Various philosophers, like Popper, have put forward more refined concepts. Eliezer Yudkowsky has put forward his own concepts on LessWrong.

I am not sure where you are getting that I "have a naive idea of what the word 'truth' is supposed to mean". Stating it is no justification. Pointing towards Popper or Yudkowsky is not justification either. You would need to take my statements that point towards my 'naivety' and deconstruct them so we can learn. I have from my side offered arguments and examples for the value of the 'I don't know' mentality and why it is useful but I feel you haven't engaged.

For binary yes/no predictions the uniform distribution leads to 50/50. https://www.metaculus.com, http://predictionbook.com/ and https://www.gjopen.com/ do have plenty of examples.

I'm afraid I am not in a position to argue about this as I have only partially understood it. You can read here.

Replies from: ChristianKl
comment by ChristianKl · 2017-02-10T21:28:56.085Z · LW(p) · GW(p)

I meant my general criticisms (which I haven't expressed), not in the sense of our current discussion.

Chapman doesn't reject rationality but advocates transcending it. It's a different standpoint. You need to first adopt a framework to later transcend it. In Chapman's view, LW-type rationality is useful for people who move from Kegan 3 to Kegan 4.

I am not sure where you are getting that I "have a naive idea of what the word 'truth' is supposed to mean". Stating it is no justification.

If you actually refined your concept of truth, there's a good chance that you could point to philosophers who influenced it. This would allow me to address their arguments to the extent that I'm familiar with their notions of truth and how they relate to LW rationality.

I have from my side offered arguments and examples for the value of the 'I don't know' mentality and why it is useful but I feel you haven't engaged.

I see your argument as "But this isn't truth" without any deep argument of what you mean by "truth" or signs that you went through the process of refining a notion of what it means for yourself.

You speak about "not fully believing" when the whole point of putting probabilities on the statements is that you don't fully know what's going to happen. There's the general mantra of "Strong opinions, loosely held." Stating a probability means that this is the likelihood that the information available in this moment warrants. It in no way implies that the probability will stay the same if other information becomes available in the next moment. Constantly updating the probability as new information becomes available is part of the ideal of Bayesian rationality.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-11T00:56:06.972Z · LW(p) · GW(p)

Chapman doesn't reject rationality but advocates transcending it.

I do not reject rationality either. Why would I be here if I did so? I think you are misreading my contrarian approach as rejection.

If you actually refined your concept of truth, there's a good chance that you could point to philosophers who influenced it.

Talking in terms of references bears the danger of attaching labels to each other. It is much more accurate to expand on points. I respect it if you don't have the time for that, of course.

Since you are asking, though, let's see where this comparison of readings takes us. In most Western philosophy I find the verbosity and the tangle of self-constructed concepts unbearable (though I have to clarify that the two pages of Popper that I read were perfectly clear). Wittgenstein's Philosophical Investigations is, I believe, a good antidote to a lot of the above. My study nowadays is focused on self-observation, psychology, sociology and neuroscience, as well as Eastern philosophy (I am not religious). For example, you can find a perspective on truth by reading the full corpus of teaching stories/anecdotes of Mullah Nasrudin.

I see your argument as "But this isn't truth" without any deep argument of what you mean by "truth" or signs that you went through the process of refining a notion of what it means for yourself.

A deep definition? Truth is reality. It cannot be reached or expressed through rationality, but parts of it can be approximated/modelled in a way that can be useful. For discussing practical rationality, though, isn't it enough to say that truth is a belief that corresponds to reality?

By this time I feel we have kind of lost the focus of our conversation. To recap, this was all about my comment on the possibility of rationally arrived-at beliefs being false due to the assessment of insufficient evidence. It was not meant as an attack on rationality, but as constructive criticism and possibly a way for me to be introduced to solutions.

Replies from: ChristianKl
comment by ChristianKl · 2017-02-11T08:09:27.724Z · LW(p) · GW(p)

A deep definition? Truth is reality.

That's evading the question. Or pretending that it doesn't exist.

I can't spend time going deep into your argument when you use a naive definition of a term that can be stated in three words.

To recap, this was all about my comment on the possibility of rationally arrived-at beliefs being false due to the assessment of insufficient evidence.

Saying that there's a chance that it's false is not a new issue; it's an issue that's already addressed. The whole point of putting a probability on a belief is that there's a chance that it's false.

It's not a new concern. Probability is made for situations where you don't know whether a belief is true or false but have uncertainty about it.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-11T09:29:58.738Z · LW(p) · GW(p)

That's evading the question. Or pretending that it doesn't exist.

Interesting. I guess your approach of just saying that you understand something but not expressing it in the discussion is not evasive at all.

I will keep an open mind for future conversations because, believe it or not, I am open to learning from you, if there is something you can teach me. But at the moment you are just demonstrating a tendency towards 'name dropping' and 'name calling', which is not constructive at all.

Replies from: ChristianKl
comment by ChristianKl · 2017-02-11T10:13:39.745Z · LW(p) · GW(p)

I'm speaking about processes of reasoning.

Saying "Truth is reality" tells me nothing about what it means for a probability to be true. It leaves everything important implicit.

In the Kegan model that Chapman uses, handling concepts this way happens at level three. The step to level four is to actually dig deeper and refine one's notions to be more specific: to use notions that come from an internally consistent system instead of the naive notions of the kind that people pick up in high school.

Chapman then goes on to say: on the one hand I have the system of reasoning with probability and qualifying my uncertainty with probability. On the other hand I also want to use predicate calculus, and the system in which I can use predicate calculus is not the one that's ruled by Bayes' rule. But that doesn't mean that one system is more true or more in line with reality. Both are ways to model the world.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-11T11:49:59.872Z · LW(p) · GW(p)

Thank you for engaging :)

For the next paragraphs I would like you to exercise humility and restrain your assessment of what I am saying until I have finished saying it. You can then assess it as a whole.

My definition of truth was not three words. It was a small paragraph. Let me break it down:

Truth is reality.

Now why is this useful? The first three words, 'Truth is reality', acknowledge that there is existence, and that this X, which you can call the world, nature, the whole or reality (the term I used on this occasion as I found it cleaner of associations in our context), is inevitably equivalent to truth. Rational discussion, having as its basis the manipulation of symbols, is an abstraction of X and thus not X. Thus absolute truth is outside the realm of rationality. If you think this is what you describe as 'naive notions of the kind that people pick up in high school', I can only say in my turn that your understanding seems naive to me.

It cannot be reached or expressed through rationality, but parts of it can be approximated/modelled in a way that can be useful.

Here I state that although the first three words describe the deepest level of truth, we can focus on truth as it can be expressed in rational terms (mathematics is included in this) because it is demonstrably useful. But we should not confuse this truth with reality. We can call it 'relative truth' vs 'absolute truth', or truth vs Truth, or whatever you fancy, as long as we clarify our terms so we can talk.

For discussing practical rationality, though, isn't it enough to say that truth is a belief that corresponds to reality?

And here we are in the domain where I can learn from you: the domain of being efficient at using rationality. Notice that this sentence is a question, to which you can respond with a better conceptualisation. Rereading it, I already think I see its shortcomings. How about: 'For discussing practical rationality, should we say that truth is a belief whose relation to reality we can observe or demonstrate?' Or should we just stop using the word truth for now? I am up for that.

So to return to your statement:

Saying "Truth is reality" tells me nothing about what it means for a probability to be true.

Indeed. That is why I did not move the conversation in this direction; you did. I would invite you to reread my original post.

Replies from: ChristianKl
comment by ChristianKl · 2017-02-11T13:04:09.649Z · LW(p) · GW(p)

Now why is this useful? The first three words, 'Truth is reality', acknowledge that there is existence, and that this X, which you can call the world, nature, the whole or reality (the term I used on this occasion as I found it cleaner of associations in our context), is inevitably equivalent to truth.

Let's take a statement like 2+2=4. It's not a statement about nature. It's a statement about how abstract mathematical entities that are independent of nature relate to each other.

I can reason about whether certain statements about Hilbert's hotel are true even though there's no such thing as Hilbert's hotel in reality.

What you are saying looks to me like you didn't go through edge cases like this and decide whether you think statements about Hilbert's hotel shouldn't be called true. Going through edge cases leads to a refinement of concepts.

It rather sounds to me like you think that those edge cases don't really matter and that the intuitions you have about the concept of truth should count.

Thus absolute truth is outside the realm of rationality.

You can't claim this and at the same time say that someone's probability is false or wrong. You just defined truth in a way that makes it an attribute of different claims.

How about: 'For discussing practical rationality, should we say that truth is a belief whose relation to reality we can observe or demonstrate?' Or should we just stop using the word truth for now?

One way of dealing with a concept like this is to reference refined concepts like the concept of truth that Eliezer developed in the sequences.

Dropping the concept (in LW-speak, tabooing it) is another way. If your objection to believing that "X happens with probability P" is no longer that this might be false, what's the objection about?

Replies from: Erfeyah
comment by Erfeyah · 2017-02-11T19:14:13.971Z · LW(p) · GW(p)

Before I continue with the discussion I have to say that this depth of analysis does not seem relevant to the practical applications of rationality that I asked about in the original post. The LessWrong wiki states under the entry for 'truth':

'truth' is a very simple concept, understood perfectly well by three-year-olds, but often made unnecessarily complicated by adults.

This 'relative truth', as I call it, would have been perfectly adequate for our discussion before we started philosophising.

Nevertheless, philosophising is good fun! :)

So...

Let's take a statement like 2+2=4. It's not a statement about nature. It's a statement about how abstract mathematical entities that are independent of nature relate to each other.

I assume you are using 'nature' in the same way I use 'reality'? If yes, it is absurd to say that these entities are independent of nature. Everything is part of nature. Nature is everything there is. It includes you. Your brain. Mathematics. The questions you can ask are: Why does abstraction have these properties? Why do abstractions sometimes describe other parts of reality? Does every mathematical truth have a correspondence to this other part of reality we call the physical world? These are all valid and fascinating questions.

You can't claim this and at the same time say that someone's probability is false or wrong. You just defined truth in a way that makes it an attribute of different claims.

I clearly stated that 'relative truth' can approximate/model parts of 'absolute truth' in a way that can be useful.

Dropping the concept (in LW-speak, tabooing it) is another way.

You definitely convinced me to be more careful when I use the word. Seriously.

If your objection to believing that "X happens with probability P" is no longer that this might be false, what's the objection about?

That is my objection. You think there is a conflict because you are not distinguishing between 'absolute' and 'relative' when you follow my definition. In the original post, I was just observing the situation we are in when we use rational assessment with incomplete data. I am interested to see if we can find ways to calibrate for such distortions. I will expand in future posts.

Finally, I wouldn't want to give you the impression that I am certain about the view of 'truth' I am presenting here. And I hope you are not sure of your assessments either. But this is where I currently am. This is my belief system.

Replies from: ChristianKl
comment by ChristianKl · 2017-02-12T10:03:01.035Z · LW(p) · GW(p)

Finally, I wouldn't want to give you the impression that I am certain about the view of 'truth' I am presenting here.

I didn't have the impression.

Saying that truth is about correspondence to reality is quite different from saying that it is about reality.

In Bayesianism a probability that a person associates with an event should be a reflection of the information available to the person. Different people should come to the same probability when subject to exactly the same information, but to the extent that different people in the real world are always exposed to different information, it's subjective.

In the frequentist idea of probability there's the notion that the probability is independent of the observer and the information that the observer has, but that assumption isn't there in the Bayesian notion of probability.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-12T10:59:05.493Z · LW(p) · GW(p)

Different people should come to the same probability when subject to exactly the same information, but to the extent that different people in the real world are always exposed to different information, it's subjective.

Right, and what my original post explores is that different people should come to the same inaccurate probability when subject to exactly the same incomplete information.

Indeed, this seems to be something people are aware of, judging from the answer of MrMind and this one from Vaniver. Vaniver in particular pointed me towards an attempt to model the issue in order to mitigate it, but it presupposes a computable universe and, most importantly, an agent with logical omniscience and an infinite amount of time. This puts it outside the realm of practical rationality. A brief description of further attempts to mitigate the issue left me, for now, unconvinced.

Replies from: ChristianKl
comment by ChristianKl · 2017-02-12T11:22:23.976Z · LW(p) · GW(p)

Right, and what my original post explores is that different people should come to the same inaccurate probability when subject to exactly the same incomplete information.

The idea of accuracy presupposes that you can compare a value to another reference.

If I say that Alice is 1.60m tall but she's 1.65m, that's inaccurate. If, however, I say that there's a 5% chance that Alice is taller than 1.60m, that's not inaccurate. My ability to predict height might be badly calibrated and I might have a bad Brier score or log score.
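
For concreteness, a small sketch of how a Brier score aggregates many such probabilistic statements; the forecasts and outcomes below are invented.

    # Brier score sketch with invented forecasts.
    # Each entry: (stated probability that the event happens, did it happen?)
    predictions = [
        (0.05, False),   # e.g. "5% chance Alice is taller than 1.60m" -- she wasn't
        (0.90, True),
        (0.70, False),
        (0.50, True),
    ]

    # Brier score = mean squared error between forecast and outcome (0 or 1).
    # 0 is perfect; always answering 0.5 scores 0.25.
    brier = sum((p - float(happened)) ** 2 for p, happened in predictions) / len(predictions)
    print(round(brier, 3))   # ~0.188 for these made-up numbers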

Replies from: Erfeyah
comment by Erfeyah · 2017-02-12T11:54:42.535Z · LW(p) · GW(p)

I am using 'inaccurate' as equivalent to 'badly calibrated' here. Why do you feel it is important to make the distinction? I understand why it is important when dealing with clearly quantified data. But in everyday life do you really mentally attempt to assign probabilities to all variables?

Replies from: ChristianKl
comment by ChristianKl · 2017-02-12T16:43:36.475Z · LW(p) · GW(p)

I am using 'inaccurate' as equivalent to 'badly calibrated' here.

To determine whether or not a person is well calibrated, you have to look at multiple predictions from that person. Calibration is an attribute of a heuristic for decision making.

On the other hand, a single statement such as 'Alice is 1.60m tall' might be inaccurate. Being inaccurate is a property of the statement and not just a property of how the statement was generated.

But in everyday life do you really mentally attempt to assign probabilities to all variables?

Assigning probabilities to events takes effort. As such, it's not something you can do for two thousand statements in a day. To be able to assign probabilities, it's also important to precisely define the belief.

If I take a belief like "All people who clicked on 'Going' will come to the event tonight", I can assign a probability. The exercise of assigning that probability makes me think more clearly about the likelihood of it happening.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-12T17:39:49.454Z · LW(p) · GW(p)

Thanks for the clarifications. One last question as I am sure all these will come out again and again as I am interacting with the community.

Can you give me a concrete example of a complex, real-life problem or decision where you used the assignment of probabilities to your beliefs to an extent that you found satisfactory for making the decision? I am curious to see the mental process of really using this way of thinking. I assume it is a process happening through sound in the imagination, and more specifically through language (the internal dialogue). Could you reproduce it for me in writing?

Replies from: ChristianKl
comment by ChristianKl · 2017-02-12T22:50:42.630Z · LW(p) · GW(p)

I applied for a job. There was uncertainty around whether or not I get the job. Having an accurate view of the probability of getting the job informs the decision of how important it is to spend additional effort.

I basically came up with a number and then asked myself whether I would be surprised if the event happened or didn't happen.

I currently don't have a more systematic process.


I remember a conversation with a CFAR trainer. I said "I think X is a key skill". They responded with: "I think it is likely that X is a key skill but I don't know that it has to be a key skill." We didn't put numbers on it, but having probabilities in the background resulted in us being able to discuss our disagreement even though we both thought "X is likely a key skill".

I have never had someone outside of this community tell me "you are likely right but I don't see why I should believe that what you are saying is certain".

The kind of mindset that produces a statement like this is about taking different probabilities seriously.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-13T19:23:56.240Z · LW(p) · GW(p)

I have never had someone outside of this community tell me "you are likely right but I don't see why I should believe that what you are saying is certain".

My thought is:

'I have reached this mindset through studying views of assumptions and beliefs from other sources. Maybe this is another way to make the realisation.'

My doubt is:

'Maybe I am missing something that the use of probabilities adds to this realisation'

Hope to continue the discussion in the future.

Replies from: ChristianKl
comment by ChristianKl · 2017-02-14T07:50:58.889Z · LW(p) · GW(p)

'I have reached this mindset through studying views of assumptions and beliefs from other sources. Maybe this is another way to make the realisation.'

It's more than just a mindset. In this case the result was a concrete discursive practice. There are quite a lot of people who profess to have a mindset that separates shades of gray. The number of people who will voice disagreement when you tell them something they believe is likely to be true, and that's important, is much lower.

Can you think of the last time you cared about an issue and someone professed to believe what you believed was likely to be true, yet you disagreed with them? And can you sketch out the example?

Replies from: Erfeyah
comment by Erfeyah · 2017-02-14T12:45:48.045Z · LW(p) · GW(p)

Can you think of the last time you cared about an issue and someone professed to believe what you believed was likely to be true, yet you disagreed with them?

Do I need to express it in numbers? In my mind I follow and practice, among others, the saying: “Study the assumptions behind your actions. Then study the assumptions behind your assumptions.”

Having said that, I cannot think of an example of applying that in a situation where I was in agreement. I am thinking that 'I would not be in agreement without a reason regarding a belief that I have examined' but I might be rationalising here. I will try to observe myself on that. Thanks!

Replies from: ChristianKl
comment by ChristianKl · 2017-02-15T09:40:16.491Z · LW(p) · GW(p)

I am thinking that 'I would not be in agreement without a reason regarding a belief that I have examined' but I might be rationalising here

We both had reasons for believing it to be true. On the other hand, humans believe things that are wrong. If you ask a Republican and a Democrat whether Trump is good for America, they might both have reasons for their beliefs, but they still disagree. That means that for each of them there's a chance of being wrong despite having reasons for their beliefs.

The reasons he had in his mind pointed to the belief being true, but they didn't provide him with certainty that it was true.

It was a belief that was important enough for him to want to be right about, and not merely to have reasons for holding.

The practice of putting numbers on a belief forces you to be precise about what you believe.

Let's say that you believe: "It's likely that Trump will get impeached." If Trump actually gets impeached, you will tell yourself "I correctly predicted it, I was right". If he doesn't get impeached, you are likely to think "When I said likely, I meant that there was a decent chance that he gets impeached, but I didn't mean to say that the chance was more than 50%."

The number forces precision. The practice of forcing yourself to be precise allows the development of more mental categories.

When Elon Musk started SpaceX, he reportedly thought that it had a 10% chance of success. Many people would think of a 10% chance of success as "it's highly unlikely that the company succeeds". Elon, on the other hand, thought that given the high stakes, a 10% chance of success was enough to found SpaceX.
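
A toy expected-value sketch of that last point; all the figures are invented placeholders, not Musk's actual reasoning.

    # Why a 10% chance can still justify acting when the stakes are high.
    p_success = 0.10
    value_if_success = 10000   # arbitrary units of "how much it matters"
    cost_of_trying = 300       # same arbitrary units

    expected_gain = p_success * value_if_success - cost_of_trying
    print(expected_gain)   # 700.0 > 0: worth attempting, even though
                           # "10%" also honestly means "probably fails".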

Replies from: Erfeyah
comment by Erfeyah · 2017-02-15T11:20:42.590Z · LW(p) · GW(p)

The number forces precision. The practice of forcing yourself to be precise allows the development of more mental categories.

I will have to explore this further. At the moment the method seems to me to just give an illusion of precision, which I am not sure is effective. To represent my belief, I could say that I assign a 5% probability to the practice being useful. I will now keep interacting with the community and update my belief according to the evidence I see from people who are using it. Is this the right approach?

Replies from: ChristianKl
comment by ChristianKl · 2017-02-15T11:31:29.629Z · LW(p) · GW(p)

The word "useful" itself isn't precise, and as such a figure of 5% might be more precise than is warranted.

Otherwise, having your number and then updating it according to what you see from people using it is the Bayesian way.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-15T11:53:39.502Z · LW(p) · GW(p)

The word "useful" itself isn't precise

How would you express the belief?

comment by gjm · 2017-02-09T13:51:07.107Z · LW(p) · GW(p)

What if a rational assessment of inconclusive data weighs you in the wrong direction? Wouldn't you then start doing worse than the dart throwing monkey?

Sure. In other words, if you get fed bad enough data then you have (so to speak) anti-knowledge. Surely this isn't surprising?

Replies from: Erfeyah
comment by Erfeyah · 2017-02-09T14:17:23.608Z · LW(p) · GW(p)

No, not really surprising.

I would just clarify, though, that the data does not need to be 'bad' in the sense of being false. We might have data that are accurate but misinterpret them by generalising to a larger context or mistakenly transposing them to a different one.

comment by MrMind · 2017-02-09T09:37:39.917Z · LW(p) · GW(p)

I don't think you're missing anything. Bayesian reasoning allows you to treat your data without introducing errors, but the results you come up with are a product of the available data and the prior model.
This is a point that is often overlooked: if you start with a completely false model, even with Bayesian reasoning the data will get you further away from the truth (case in point: someone who believes in an invisible dragon and has to invent more and more complicated explanations for the lack of evidence). Bayesian probability is just the way of reasoning that introduces the least amount of error.
To at least partially counter our fallibility, it's considered good practice to:

  • never put any assumption at precisely 0 or 1 probability;
  • always leave a reservoir of probability mass in your model for unknown unknowns.

Other than that, finding the truth is a quest that needs creativity, ingenuity and a good dose of luck.

comment by Douglas_Knight · 2017-02-09T02:04:07.565Z · LW(p) · GW(p)

This is dealt with in science by not accepting explanations as truths until they are confirmed experimentally

A lot of people say that and go on to give a specific example: that Science did not accept general relativity on the basis that it explained the precession of Mercury, but rather on the novel prediction of solar lensing. But this is historically false. Scientists were much more impressed with the precession than with the eclipse (even leaving aside Eddington's reliability).

comment by Vaniver · 2017-02-09T20:14:19.557Z · LW(p) · GW(p)

Note that experimental confirmation isn't really the thing here; experiments just give you data and the problem here is conceptual (the actual truth isn't in the hypothesis space).

Most Bayes is "small world" Bayes, where you have conceptual and logical omniscience, which is possible only because of how small the problem is. "Big world" Bayes has universal priors that give you that conceptual omniscience.

In order to make a real agent, you need a language of conceptual uncertainty, logical uncertainty, and naturalization. (Most of these frameworks are dualistic, in that they have a principled separation between agent and environment, but in fact real agents exist in their environments and use those environments to run themselves.)

Replies from: Erfeyah
comment by Erfeyah · 2017-02-09T20:33:12.735Z · LW(p) · GW(p)

"Big world" Bayes has universal priors that give you that conceptual omniscience.

I assume you are talking hypothetically and not really saying that we, in reality, have these priors? Is there an article about this 'small world' 'big world' distinction?

In order to make a real agent, you need a language of conceptual uncertainty, logical uncertainty, and naturalization. (Most of these frameworks are dualistic, in that they have a principled separation between agent and environment, but in fact real agents exist in their environments and use those environments to run themselves.)

This went completely over my head. Why did you bring agents into the conversation?

Replies from: Vaniver
comment by Vaniver · 2017-02-09T22:08:02.604Z · LW(p) · GW(p)

I assume you are talking hypothetically and not really saying that we, in reality, have these priors?

I had in mind Solomonoff Induction.

Is there an article about this 'small world' 'big world' distinction?

Here's the last time that came up; I think it's mostly in margins rather than an article on its own.

This went completely over my head. Why did you bring agents into the conversation?

Ah, because when talking about how to model problems (which I think Bayesian rationality is an example of), agents are the things that do the modelling.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-09T23:40:18.874Z · LW(p) · GW(p)

OK, that makes sense. These approaches are trying to add considerations such as mine into the model. Not sure I see how that can solve the issue of "the truth missing from the hypothesis space". Or how accurate modelling of the agents can be achieved at our current level of understanding. Examples of real-world applications instead of abstract formulations would be really helpful, but I will study the article on Solomonoff induction.

Replies from: Vaniver
comment by Vaniver · 2017-02-10T00:10:45.602Z · LW(p) · GW(p)

Not sure I see how that can solve the issue of "the truth missing from the hypothesis space".

Solomonoff Induction contains every possible (computable) hypothesis; so long as you're in a computable universe (and have logical omniscience), the truth is in your hypothesis space.

But this is sort of the trivial solution, because while it's guaranteed to have the right answer it had to bring in a truly staggering number of wrong answers to get it. It looks like what people do is notice when their models are being surprisingly bad, and then explicitly attempt to generate alternative models to expand their hypothesis space.

(You can actually do this in a principled statistical way; you can track, for example, whether or not you would have converged to the right answer by now if the true answer were in your hypothesis space, and call for a halt when it becomes sufficiently unlikely.)
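
A minimal sketch of that kind of check, with an invented data set and a deliberately inadequate hypothesis space (two biased-coin hypotheses fit to data that a fair coin actually produced); this is one illustrative version, not the specific procedure being alluded to.

    # Noticing that the truth is probably outside the hypothesis space.
    from math import comb

    n, heads = 200, 98                      # invented data: roughly a fair coin
    hypotheses = {"tails_biased": 0.1, "heads_biased": 0.9}   # both wrong

    def likelihood(p):
        return p ** heads * (1 - p) ** (n - heads)

    joint = {name: 0.5 * likelihood(p) for name, p in hypotheses.items()}
    total = sum(joint.values())
    posterior = {name: j / total for name, j in joint.items()}
    print(posterior)   # ~{'tails_biased': 0.9998, 'heads_biased': 0.0002}
                       # Bayes confidently picks a "best" hypothesis anyway.

    # Posterior predictive check: if the winner (p = 0.1) were really true,
    # how likely is a head count at least as large as the one we observed?
    p_best = 0.1
    tail_prob = sum(comb(n, k) * p_best ** k * (1 - p_best) ** (n - k)
                    for k in range(heads, n + 1))
    print(tail_prob)   # astronomically small -> time to expand the hypothesis space

Which of the two wrong hypotheses wins is beside the point; the check flags that neither would plausibly have produced the data, which is the cue to generate new models.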

Most of the immediate examples that jump to mind are mathematical, but that probably doesn't count as concrete. If you have a doctor trying to treat patients, they might suspect that if they actually had the right set of possible conditions, they would be able to apply a short flowchart to determine the correct treatment, apply it, and then the issues would be resolved. And so when building that flowchart (i.e. the hypothesis space of what conditions the patient might have), they'll notice when they find too many patients who aren't getting better, or when it's surprisingly difficult to classify patients.

If people with disease A cough and don't have headaches, and people with disease B have headaches and don't cough, on observing a patient who both coughs and has a headache the doctor might think "hmm, I probably need to make a new cluster" instead of "Ah, someone with both A and B."

Replies from: Erfeyah
comment by Erfeyah · 2017-02-10T01:08:47.732Z · LW(p) · GW(p)

I read the article and I have to say that the approach is fascinating in its scale and vision. And I can see how it might lead to interesting applications in computer science. But, in its current state, as an algorithm for a human mind... I have to admit that I cannot justify investing the time to even attempt to apply it.

Thank you for all the info! :)

comment by MaryCh · 2017-02-09T10:20:04.349Z · LW(p) · GW(p)

Seems like under that system, some headaches would just go away on their own, without symptoms of a cold appearing later, which would mean the probability of cancer goes up. The patient panics, gets checked for cancer (since a recurring headache would hardly be the only symptom), no such disease is found, so there apparently has to be a new category for non-cancer, non-cold headaches.

So usually the evidence soup has pieces that don't conform to a theory, and if the stakes are high enough, people will go looking for them and maybe use Bayesian reasoning for it.

Replies from: Erfeyah
comment by Erfeyah · 2017-02-09T11:30:18.156Z · LW(p) · GW(p)

Yes, the example is a bit misleading. My purpose is to observe rationality in a field where data is unavailable, scarce or inconclusive. So use the example as a loose analogy for the purpose of communicating the point.

Replies from: MaryCh
comment by MaryCh · 2017-02-09T13:30:14.693Z · LW(p) · GW(p)

[I think that] where data are unavailable, you cannot really constrain your expectations of them. You can build models, even, perhaps, assign weights within them, but there is no basis to choose between one model and another, and every number of models you find 'satisfying' remains arbitrary.

(This is why I don't argue about religion, for example.)

comment by AmagicalFishy · 2017-02-08T21:08:17.032Z · LW(p) · GW(p)

I don't think you're missing anything, no.

comment by MaryCh · 2017-02-11T16:38:51.926Z · LW(p) · GW(p)

There are fields of science where different research teams use not-quite-the-same units of measurement. For example, in phytohormonology, the amount of a hormone in plant tissue can be expressed in (nano)grams per gram of dry weight or of fresh weight, and people who compare values expressed in different ways cringe because they understand how inexact it really is. There simply is no reliable scale that would allow recalculation, especially across different species, etc. (not to mention all the other ways in which drawing conclusions from only the amount of such a substance somewhere in the organism is hard). Yet in other fields, people negotiate a common measure, and get more 'research cohesion' as a result.

Has there been any research into how using a common measure allows for more united science? I don't mean miles versus kilometres, I mean cases where there is no coefficient between two valid units. And then there are cases like the coastline paradox (there is a coefficient, but the end results differ.) And cases like the "huge differences in the amount of infection, depending on how you stain your samples, and yes, different people stain them differently and nobody thinks it a problem" (but unlike the phytohormonology example, the coefficient could be found and it would be of some use). And maybe other kinds of cases, of which I haven't thought - enough to build a rough classification.

I think it must be a large topic, but I can't think of how to Google it. Something about 'measure'?.. Can anybody help? Thank you.

Replies from: ChristianKl
comment by ChristianKl · 2017-02-12T10:07:39.851Z · LW(p) · GW(p)

Standardization is the core word. Institutions like ISO exist to create common standards. Journals can then force scientists to actually follow standards.

Controlled vocabularies and applied ontology seem to be other key words.

How useful standardization happens to be depends a lot on the quality of the standard. The DSM-V, for example, seems to be a standard that holds science back, and as a result there are calls for funding research that tries to use new standards.

Replies from: MaryCh
comment by MaryCh · 2017-02-12T14:47:48.441Z · LW(p) · GW(p)

Thank you! I'll look at/read "Applied ontology: an introduction" (ed. by Munn and Smith) - the results are rather varied and have their own developed terminology, and this one looks as good a place to start as any.

Edit to add: tentatively, the "automated information systems" angle might not be what I'm looking for:(

Replies from: ChristianKl
comment by ChristianKl · 2017-02-12T23:04:39.726Z · LW(p) · GW(p)

Automated information systems require a fixed vocabulary.

If people who observe rats have a different idea about what a leg happens to be than the people who study humans (the leg is the part between the foot and the knee), there are problems with translating knowledge.

Humans might be smart enough to do the translation, but computers aren't. As a result there's interest in standardization. Bioinformatics needs the standardization, and that's where Barry Smith comes from. Bioinformatics has an interest in standardization because automated information systems don't work without it.


I remember a story, which I think came from People Works (a book about Google's HR department). It made the point that it's not trivial to have a charged definition in a company of what it means to have 10 employees. The people who pay the wages might count 6 full-time employees plus 4 half-time employees as 8 employees. When it comes to paying health insurance, it's 10 employees.

The HR department might count a prospective employee as an employee the moment they sign the offer, while another department waits until their starting date.

The fact that Google has a charged definition of employee allowed them to do much better statistics.

Replies from: MaryCh
comment by MaryCh · 2017-02-13T06:17:44.847Z · LW(p) · GW(p)

Yes, I appreciate the effect of automation on standardization, it is really a great thing. I just have the impression that differences stemming from the deep, very much method-shaped variability of research - like 'radiation' in the evolutionary sense - might be only superficially addressed using only the standardization-as-I-have-read-of-it. (I'm still reading, and expect this to change.)

I'm starting with an image of a spectrum from 'a variable and its measure can be only tenuously linked' (like 'length' and 'meter') to 'pretty much baked together' (like the phytohormone example). This image might itself be just wrong.

Replies from: ChristianKl
comment by ChristianKl · 2017-02-13T09:41:42.016Z · LW(p) · GW(p)

The meter is not our only measure of length. We also have astronomical units to measure length. In school they taught us that they were different units and that we can speak with more precision about the distance between two stars if we talk in astronomical units.

For a long time there were also a bunch of interesting questions, such as whether it makes sense to say that the standard meter in Paris is 1 meter long, and whether it stays exactly 1 meter long if its surface oxidizes a bit.

Metal changes its length at different temperatures. That means you need a definition of temperature to define the length of the meter if you define it via the standard meter.

Newton thought that there was a fixed "temperature of blood". Fahrenheit used "body temperature" as a measuring stick for a specific temperature.

It took a lot of science to settle on the freezing point and the boiling point of water as the way to standardize temperature. If you shape the vessel the right way, it's possible to boil water at 102 degrees C, so they needed to specify the right conditions.

Replies from: MaryCh
comment by MaryCh · 2017-02-13T10:19:24.493Z · LW(p) · GW(p)

I either didn't know, or hadn't thought about in this context, most of what you say here. Thank you.

Yet this (the exact length of a meter) is more or less settled, in the sense that very many people use it without significant loss of what they want to convey. This is exactly the kind of thing I'd like to learn about - how unit-variable relationships evolve and come to some 'resting position'; how people first come to think about the subject matter, then about the ways to describe it, and finally about the number of a common 'piece' used to measure it.

Replies from: ChristianKl
comment by ChristianKl · 2017-02-13T11:18:02.536Z · LW(p) · GW(p)

I think the Applied Ontology book is worth reading, as it touches on a lot of the practical concerns that come with the need for standardization for automated knowledge processing. Even if you aren't interested in automated knowledge processing, it's still useful.

Inventing Temperature: Measurement and Scientific Progress by Hasok Chang is a good case study of how our measure of temperature evolved. Temperature is a good example because conceptualizing it is harder than conceptualizing length. In the Middle Ages people had their measures for length, but they didn't have one for temperature.

The definition of the meter in terms of the wavelength of light, instead of the standard meter, was settled in 1960, but the number of people for whom there were practical concerns was relatively small. Interestingly, there is at the moment a proposed change to the SI system that redefines the kilogram: https://en.wikipedia.org/wiki/Proposed_redefinition_of_SI_base_units

It changes the uncertainty that we have over a few constants. Beforehand we had an exact definition of the kilogram; afterwards we only know it to 8 digits of accuracy. On the other hand, we get more accuracy for a bunch of other measurements. It might be worth reading a bit into the debate if you care about how standards are set.

comment by Pimgd · 2017-02-10T13:54:49.280Z · LW(p) · GW(p)

How does a rational actor resolve the emperor's new clothes situation?

Story link: http://www.andersen.sdu.dk/vaerk/hersholt/TheEmperorsNewClothes_e.html

Specifically, let's insert ourselves into every step of the process.

1) You're the emperor. Two tailors come to you saying they can make you a suit that cannot be seen by those that are stupid and/or unfit for their current position.

The answer to this, I think, is: you don't believe this magical stuff, see it for the scam that it is, and tell them to bugger off.


2) You, as the emperor, somehow agree to this. They take your measurements and start weaving, then start demanding all sorts of resources (cloth and silks).

The answer to this is probably: you give them this, because as the story goes, you really want fancy clothes. ... Besides, if you say no, they'll say they can't make clothes without cloth. Now what? It's not an unreasonable request. Maybe you can complain about the quantities, ask for an itemized invoice, but, eh.


3) You've sent your minister, whom you trust dearly, who has been at your side for years, a real standup guy, and he says they're making the most beautiful clothing. You've sent another official, and he says it's absolutely magnificent. You go and visit in person, and they point at empty looms, whilst saying "see, aren't these the most beautiful clothes ever?" Your guards stay silent.

So here's one of the places where I'm interested in the answer, because this is the point of personal doubt as the emperor. You can either, as in the story, say "Oh yes, such wonderful clothes!" and internally go "oh crap am I unfit", or I think you could go "What clothes are you talking about, those are empty looms!" But the evidence you have is, there's 2 people who you don't know that are saying "the clothes exist and they have a property that if you cannot see them you are dumb and/or unfit", and there's 2 people who you really really trust that are saying "look at these fine fine clothes, they really do exist".

My answer in this case would be to station the guards in the room, leave the room with the ministers, and ask them individually "Okay, now let's be honest. Did you really see those clothes?" If any of them say no, I'd have the "tailors" executed. But if they both say yes and start expressing worry for me and all that then I don't know what to believe.


4) You're a citizen of the country. The emperor is having a parade to showcase his new clothes! They're supposedly magical clothes, which cannot be seen by those who are unfit and/or dumb. It's a bit chilly. Everyone's talking about the fancy clothes, and when the emperor comes around the corner, you can see him: He's naked, but otherwise fine. Behind him are several noblemen, pretending to hold the drape of the clothes. Your friend looks at you and says, "Look, aren't those the most fancy clothes?"

This case, too, is hard for me. I mean, it depends on your standing in society for how much you stand to lose, but in a medieval society, if you're a farmhand? I'd say "but he's naked!". Farmhands aren't particularly clever (I might be misguided), but they haven't got a whole lot to lose. But if you're a craftsman, someone who has a shop? Yeah, that'd be a big reputation hit, if the whole town thought you were unfit to make the things you make.


My question, for each case - what's the rational belief to hold? The main beliefs you can hold that I can see are "The clothes do not exist, but everyone is faking it, and it should stop", "The clothes do not exist, but everyone is faking it, and I should fake it too" and "I am unfit and/or dumb and I better fake seeing those clothes lest I lose my station".

My other question - As the emperor, you could go all science on the clothes. "I can see the clothes just fine, but why do they not cast shadows?" "These clothes are very light", in fact, when weighed, they don't weigh anything, they don't create shadows, they let heat through, they don't hold water (it seeps straight through as if the clothes weren't there)... That'd be one way to quickly gather evidence. I'd also express worry - if someone can't see the clothes, won't they see me naked?

Anyway, my other question - how would you gather extra evidence as a citizen?

Replies from: Viliam, MockTurtle, ChristianKl
comment by Viliam · 2017-02-10T15:19:43.194Z · LW(p) · GW(p)

But the evidence you have is, there's 2 people who you don't know that are saying "the clothes exist and they have a property that if you cannot see them you are dumb and/or unfit", and there's 2 people who you really really trust that are saying "look at these fine fine clothes, they really do exist".

Collect more evidence.

If possible, find a person who never heard about the supposed properties of the clothes, and ask them to describe the clothes to you. If they can't, maybe they are stupid, but then find another one and... uhm, if everyone who didn't hear about the supposed properties of the clothes is stupid, that's suspicious.

Unexpectedly invite a few painters, put them in different corners of the room, and ask them to paint you in the clothes.

Alternatively, test your ministers. First, meet them with the clothes; next, without them. If they see the clothes both times...

Put the clothes in a bag. Add a few empty bags. Ask your ministers which bag contains the clothes. If all of them failed, ask the tailors.

comment by MockTurtle · 2017-02-14T16:43:46.969Z · LW(p) · GW(p)

Interesting questions to think about. Seeing if everyone independently describes the clothes the same way (as suggested by others) might work, unless the information is leaked. Personally, my mind went straight to the physics of the thing, 'going all science on it' as you say - as emperor, I'd claim that the clothes should have some minimum strength, lest I rip them the moment I put them on. If a piece of the fabric, stretched by the two tailors, can at least support the weight of my hand (or some other light object if you're not too paranoid about the tailors' abilities as illusionists), then it should be suitable.

Then, when your hand (or whatever) goes straight through, either they'll admit that the clothes aren't real, or they'll come up with some excuse about the cloth being so fine that it ripped or things go straight through, at which point you can say that these clothes are useless to you if they'll rip at the slightest movement or somehow phase through flesh, etc.

Incidentally, that's one of my approaches to other things invisible to me that others believe in. Does it have practical uses or create a physical effect in the world? If not, then even if it's really there, there's not much point in acknowledging it...

comment by ChristianKl · 2017-02-10T14:35:59.205Z · LW(p) · GW(p)

The irony of the situation is that some fancy clothes worn today in Milan leave a large part of the person naked.

Anyway, my other question - how would you gather extra evidence as a citizen?

Ask different people for the color of the clothes and for more details. If the people really can see the clothes, they should be able to describe them in the same way.

If there is already common knowledge about the color of the clothes or other details, then you would need to see the clothes in a new context. What do the clothes look like when they get wet? If two people agree on how the clothes look in a completely new context, it's more likely that they aren't just telling you an answer they learned by heart.

comment by James_Miller · 2017-02-09T23:49:44.427Z · LW(p) · GW(p)

If I email someone non-famous to be on my podcast and they don't respond, should I take that as a "no" or as a "didn't get the message, try again"?

Replies from: gjm
comment by gjm · 2017-02-10T00:49:04.801Z · LW(p) · GW(p)

I make no claim to know what you should do, but what I would do in that situation is: wait at least a month; then email them again saying: no obligation to reply or anything, just wanted to make sure emails aren't going astray. If still no reply, assume that either they aren't interested or they're spam-binning your emails and you'll never reach them by email anyway.

comment by hodgestar · 2017-02-09T19:04:55.670Z · LW(p) · GW(p)

Less Wrong has a number of participants who endorse the idea of assigning probability values to beliefs. Less Wrong also seems to have a number of participants who broadly fall into the "New Atheist" group, many of the members of which insist that there is an important semantic distinction to be made between "lack of belief in God" and "belief that God does not exist."

I'm not sure how to translate this distinction into probabilistic terms, assuming it is possible to do so-- it is a basic theorem in standard probability theory (e.g. starting from the Kolmogorov axioms) that P(X) + P(not(X)) = 1 for any event X. In particular, if you take "lack of belief in God" to mean that you assign a value very close to 0 for P("God exists"), then you must assign a value very close to 1 for P(not("God exists")). I would have thought (perhaps naively) that not("God exists") and "God does not exist" are equivalent, and that what it means to say that you believe in some proposition X is that you assign it a probability that is close to 1 (though not exactly 1, if you're following the advice to never assign probabilities of exactly 0 or 1 to anything). This would imply that a lack of belief in God implies a belief that God does not exist.

Am I misunderstanding something about translating these statements into probabilistic language? Or am I just wrong to think that there are people who simultaneously endorse both assigning probabilities to beliefs and the distinction between "lack of belief that God exists" and "belief that God does not exist?"

Replies from: Erfeyah, Vaniver, ChristianKl, MrMind, gjm
comment by Erfeyah · 2017-02-09T20:46:46.695Z · LW(p) · GW(p)

Shouldn't a lack of belief in god imply:

P(not("God exists")) = 0.5

P("God exists") = 0.5

(I am completely ignoring the very important part of defining God in the sentence as I take the question to be asking of a way to express 'not knowing' in probabilistic terms. This can be applied to any subject really.)

Replies from: hairyfigment, Lumifer
comment by hairyfigment · 2017-02-09T23:06:10.616Z · LW(p) · GW(p)

I am completely ignoring the very important part of defining God

That is indeed the chief problem here. I'm assuming you're talking about the prior probability which we have before looking at the evidence.

comment by Lumifer · 2017-02-09T20:58:27.346Z · LW(p) · GW(p)

Shouldn't a lack of belief in god imply

No. Why would it?

"I don't know" is a perfectly valid answer. Sometimes it's called Knightian uncertainty or Rumsfeldian "unknown unknowns".

Replies from: Erfeyah
comment by Erfeyah · 2017-02-09T21:08:20.023Z · LW(p) · GW(p)

I agree that "I don't know" is a better answer, as there is no reason to talk in rationalist jargon in a case like this. Especially when we haven't even defined our terms. But could you explain to me why assigning a probability of 0.5 to each of the two opposite answers (assuming the question is clear and binary) does not make sense for expressing ignorance?

Replies from: Lumifer
comment by Lumifer · 2017-02-09T21:19:46.705Z · LW(p) · GW(p)

why assigning a probability of 0.5 to each of the two opposite answers (assuming the question is clear and binary) does not make sense for expressing ignorance?

Because you lose the capability to distinguish between "I know the probabilities involved and they are 50% for X and 50% for Y" and "I don't know".

Look at the distributions of your probability estimates. For the "I don't know" case it's a uniform distribution on the 0 to 1 range. For the "I know it's 50%" case it's a narrow spike at 0.5. These are very different things.
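To picture the difference, one could model the two states of knowledge as Beta distributions over the unknown chance (this parametrization is just one convenient choice for the sketch, not the only way to do it):

```python
# A rough sketch: both agents would quote "50%" as a point estimate,
# but their distributions over the underlying chance look nothing alike.
from scipy import stats

dont_know = stats.beta(1, 1)        # flat on (0, 1): "I don't know"
know_fifty = stats.beta(500, 500)   # sharp spike at 0.5: "I know it's 50%"

for p in (0.1, 0.5, 0.9):
    print(p, round(dont_know.pdf(p), 2), round(know_fifty.pdf(p), 2))

# Same point estimate for the next outcome in both cases...
print(dont_know.mean(), know_fifty.mean())   # 0.5 and 0.5
# ...but the flat distribution will move a lot once evidence arrives,
# while the spike will barely budge.
```

(Beta(500, 500) here just stands in for "a lot of evidence already"; the exact numbers don't matter.)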

Replies from: Erfeyah
comment by Erfeyah · 2017-02-10T00:03:09.444Z · LW(p) · GW(p)

Ahhh, this is confusing me. I intuitively feel a 50-50 chance implies a uniform distribution. But what you are saying about the distribution being a spike at 0.5 makes total sense. Well, I guess I have a bit of studying to do...

Replies from: gjm, Lumifer
comment by gjm · 2017-02-10T00:57:57.484Z · LW(p) · GW(p)

Being a full-on Bayesian means not only having probability assignments for every proposition, but also having the conditional probabilities that will allow you to make appropriate updates to your probability assignments when new information comes in.

The difference between "The probability of X is definitely 0.5" and "The probability of X is somewhere between 0 and 1, and I have no idea at all where" lies in how you will adjust your estimates for Pr(X) as new information comes in. If your estimate is based on a lot of strong evidence, then your conditional probabilities for X given modest quantities of new evidence will still be close to 0.5. If your estimate is a mere seat-of-the-pants guess, then your conditional probabilities for X given modest quantities of new evidence will be all over the place.

Sometimes this is described in terms of your probability estimates for your probability estimates. That's appropriate when, e.g., what you know about X is that it is governed by some sort of random process that makes X happen with a particular probability (a coin toss, say) but you are uncertain about the details of that random process (e.g., does something about the coin or how it's tossed mean that Pr(heads) is far from 0.5?). But similar issues arise in different cases where there's nothing going on that could reasonably be called a random process but your degree of knowledge is greater or less, and I'm not sure the "probabilities of probabilities" perspective is particularly helpful there.
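A toy illustration of that last point, using a coin-flip model and made-up numbers (so the specific figures carry no authority): two people can both say "Pr(X) = 0.5" today, yet respond to the same modest evidence very differently.

```python
# Beta-Binomial update: the prior "pseudo-counts" encode how much
# evidence the 0.5 estimate already rests on.
def posterior_mean(prior_heads, prior_tails, new_heads, new_tails):
    a = prior_heads + new_heads
    b = prior_tails + new_tails
    return a / (a + b)

# Both start at 0.5; both then observe 8 heads out of 10 flips.
seat_of_pants = posterior_mean(1, 1, 8, 2)       # ~0.75  (swings a lot)
well_grounded = posterior_mean(500, 500, 8, 2)   # ~0.503 (barely moves)

print(seat_of_pants, well_grounded)
```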

Replies from: Erfeyah
comment by Erfeyah · 2017-02-10T10:06:37.129Z · LW(p) · GW(p)

Thanks for the detailed explanation. It helps!

comment by Lumifer · 2017-02-10T03:34:16.155Z · LW(p) · GW(p)

I intuitively feel a 50-50 chance implies a uniform distribution

Well, imagine a bet on a fair coin flip. That's a 50-50 chance, right? And yet there is no uniform distribution in sight.

Replies from: niceguyanon, Erfeyah
comment by niceguyanon · 2017-02-10T19:20:22.926Z · LW(p) · GW(p)

So if we can distinguish between

"I know the probabilities involved and they are 50% for X and 50% for Y" and "I don't know".

Could we further distinguish between

a uniform distribution on the 0 to 1 range and "I don't know"?

Let's say a biased coin with unknown probability p of landing heads is tossed, where p is uniform on (0,1), and "I don't know" means you can't predict better than random guessing. So saying p is 50% doesn't matter, because it doesn't beat random.

But what if we tossed the coin twice, and I had you guess twice, before the tosses? If you get at least one guess correct then you get to keep your life. Assuming you want to play to keep your life, how would you play? The coin still has p uniform on (0,1), but it seems like "I don't know" doesn't mean the same thing anymore, because you can play in a way that can better predict the outcome of keeping your life.

You would guess (H,T) or (T,H), but avoid guessing randomly, because random guessing can produce things like (H,H). That is really bad because, with p uniform on (0,1), a 90% chance of heads is just as likely as a 10% chance of heads, and 10% heads is really bad for (H,H) - so bad that even 90% heads doesn't really make up for it.

If p is 90% or 10%, guessing (H,T) or (T,H) would result in the same small probability of dying, 9%. But (H,H) would result in either a 1% or an 81% chance of dying. Saying "I don't know" in this scenario doesn't feel the same as "I don't know" in the first scenario. I am probably confused.
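For what it's worth, averaging the chance of losing over p uniform on (0,1) in the same-coin game (a quick sketch; "mixed" and "same" are just my labels for the two guessing strategies):

```python
# Expected probability of losing ("dying") when the same unknown-bias
# coin is tossed twice and you lose only if both guesses are wrong.
def expected_loss(loss_given_p, n=100_000):
    """Average loss probability over p ~ Uniform(0, 1) via a midpoint grid."""
    return sum(loss_given_p((i + 0.5) / n) for i in range(n)) / n

# Guess (H, T): you lose only on (T, H), which has probability (1-p)*p.
mixed = expected_loss(lambda p: (1 - p) * p)
# Guess (H, H): you lose only on (T, T), which has probability (1-p)**2.
same = expected_loss(lambda p: (1 - p) ** 2)

print(f"(H, T): {mixed:.3f}")   # ~0.167
print(f"(H, H): {same:.3f}")    # ~0.333
```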

Replies from: Lumifer
comment by Lumifer · 2017-02-10T21:48:27.998Z · LW(p) · GW(p)

but it seems like "I don't know" doesn't mean the same thing anymore, because you can play in a way that can better predict the outcome of keeping your life.

But you've changed things :-) In your situation you know a very important thing: that the probability p is the same for both throws. That is useful information which allows you to do some probability math (specifically compare 1 - p(1-p) and 1 - p^2).

But let's say you don't toss the same coin twice, but you toss two different coins. Does guessing (H,T) help now?
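A small Monte Carlo sketch of both variants (standard-library Python only, so the printed numbers will wobble a bit around the exact values; the setup mirrors the scenario above as I understand it):

```python
import random

def loss_rate(guess, same_coin, trials=200_000):
    """Fraction of games lost: you lose only if both guesses are wrong."""
    losses = 0
    for _ in range(trials):
        p1 = random.random()                       # bias of the first coin
        p2 = p1 if same_coin else random.random()  # same coin or a fresh one
        toss1 = 'H' if random.random() < p1 else 'T'
        toss2 = 'H' if random.random() < p2 else 'T'
        if toss1 != guess[0] and toss2 != guess[1]:
            losses += 1
    return losses / trials

for same_coin in (True, False):
    label = "same coin:" if same_coin else "two coins:"
    print(label,
          "HT", round(loss_rate(('H', 'T'), same_coin), 3),  # ~0.167 vs ~0.25
          "HH", round(loss_rate(('H', 'H'), same_coin), 3))  # ~0.333 vs ~0.25
```

With two independent coins every guess loses about a quarter of the time, so the (H,T) trick stops helping.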

Replies from: niceguyanon
comment by niceguyanon · 2017-02-21T17:07:03.473Z · LW(p) · GW(p)

I understand now. Thanks!

comment by Erfeyah · 2017-02-10T10:03:28.470Z · LW(p) · GW(p)

You are obviously right! This is helpful :)

Now, just to make sure I got it, does this make sense: the question of God's existence (assuming the term was perfectly defined) is a yes/no question, but you are conceptualising the probability that a yes or a no is true. That is why you are using a uniform distribution for a question with a binary answer. It is not representing the answer but your current confidence. Right?

Replies from: Lumifer
comment by Lumifer · 2017-02-10T15:53:19.838Z · LW(p) · GW(p)

Skipping a complicated discussion about many meanings of "probability", yes.

Think about it this way. Someone gives you a box and says that if you press a button, the box will show you either a dragon head or a dragon tail. That's all the information you have.

What's the probability of the box showing you a head if you press the button? You don't know. This means you need an estimate. If you're forced to produce a single-number estimate (a "point estimate"), it will be 50%. However, if you can produce this estimate as a distribution, it will be uniform from 0 to 1. Basically, you are very unsure about your estimate.

Now, let's say you've had this box for a while and pressed the button a couple of thousand times. Your tally is 1017 heads and 983 tails. What is your point estimate now? More or less the same, rounding to 50%. But the distribution is very different now. You are much more confident about your estimate.

Your probability estimate is basically a forecast of what you think will happen when you press the button. Like with any forecast, there is a confidence interval around it. It can be wide or it can be narrow.
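To put rough numbers on that, here is a sketch assuming a flat Beta(1, 1) prior over the box's chance of heads (my modelling choice for the illustration, not the only possible one):

```python
from scipy import stats

def summarize(heads, tails):
    post = stats.beta(1 + heads, 1 + tails)   # posterior after the tally
    lo, hi = post.interval(0.95)
    print(f"point estimate {post.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")

summarize(0, 0)        # no presses yet: 0.500, interval roughly (0.03, 0.98)
summarize(1017, 983)   # after ~2000 presses: ~0.508, interval ~ (0.49, 0.53)
```

The point estimate barely changes, but the interval around it shrinks dramatically.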

comment by Vaniver · 2017-02-09T20:00:58.201Z · LW(p) · GW(p)

You're right about the probabilistic statements, with a potentially tangential elaboration. There are nonsense sentences--not contradictions ("A and not A") but things that fail to parse ("A and")--and it doesn't make sense to assign probabilities to those. One might claim that "God exists" is a nonsense sentence in that way, but I think most New Atheists don't take that approach.

The distinction that people are drawing is basically which framing should have the benefit of the doubt, since not believing a new statement is the default. This is much more important for social rationality / human psychology than it is for Bayesianism, where you just assign a prior and then start calculating likelihood ratios.

comment by ChristianKl · 2017-02-10T15:13:32.736Z · LW(p) · GW(p)

From my perspective, a belief needs to be about empirical facts to have a probability attached to it. I need to be able to clearly describe how the belief could be tested in principle.

In addition to beliefs about empirical facts, there are also beliefs like "Nobody loves me" that aren't about specific empirical facts but that still matter a great deal.

comment by MrMind · 2017-02-10T09:32:50.265Z · LW(p) · GW(p)

I cannot speak for other atheists, but as far as I'm concerned I agree with you.
Since we have a hard time defining even a human being, I accept that "God" cannot be clearly defined in any model, but I accept that there are narrations that point to some being of divine nature, and I accept those as a valid 'reference' to God(s). To those, I give a very low probability, with very little Knightian uncertainty (meaning that I also give very little probability to future evidence that would raise this probability, and high probability to evidence that will lower this value). For that account of the divine, I consider myself a full-fledged atheist.
There are other narrations, though, and I've heard of definitions that basically reduce to "the laws of physics", to which I obviously give a very high probability with very high meta-certainty.
There might be definitions or narrations that are in the middle, though. On that account, I cannot say precisely what my probabilities are, and thus it would be appropriate to say that I lack a belief in this kind of god, rather than having a definite belief in its non-existence.

comment by gjm · 2017-02-09T20:40:06.215Z · LW(p) · GW(p)

It seems to me that someone could quite consistently hold the following position:

"Atheist" means "lacking positive belief in any god or gods". You can be an atheist without thinking the existence of gods is vanishingly improbable, or indeed without giving any thought at all to probabilities. I, as it happens, do prefer to think in probabilities when possible. Exactly what I think about the existence of God depends a great deal on how you define God, and it might be anywhere from "vanishingly unlikely" to "somewhat unlikely", or in many cases "I can't answer that question because it's not clear enough what it means". But, whatever way you pose the question, I don't positively believe in any sort of god, and it's therefore appropriate to call me an atheist.

comment by madhatter · 2017-02-09T01:35:21.215Z · LW(p) · GW(p)

Thanks for this topic! Stupid questions are my specialty, for better or worse.

1) Isn't cryonics extremely selfish? I mean, couldn't the money spent on cryopreserving oneself be better spent on, say, AI safety research?

2) Would the human race be eradicated if there is a worst-possible-scenario nuclear incident? Or merely a lot of people?

3) Is the study linking nut consumption to longevity found in the link below convincing?

http://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2173094

And if so, is it worth a lot of effort promoting nut consumption in moderation?

Replies from: Thomas, Viliam, MrMind
comment by Thomas · 2017-02-09T08:45:48.752Z · LW(p) · GW(p)

couldn't the money spent on cryopreserving oneself be better spent on, say, AI safety research?

Here comes another "stupid question" from this one.

Couldn't the money spent on AI safety research be better spent on, say, AI research?

Replies from: Vaniver, MrMind, Lumifer
comment by Vaniver · 2017-02-09T20:07:17.439Z · LW(p) · GW(p)

Couldn't the money spent on AI safety research be better spent on, say, AI research?

There's something like 100 times as much funding for AI research as there is for AI safety research. In general, it seems like it would be weird to have only 1% of the effort in a project spent on making sure the project is doing the thing that it should be doing.

For this specific question, I like Stuart Russell's approach:

My proposal is that we should stop doing AI in its simple definition of just improving the decision-making capabilities of systems. […] With civil engineering, we don’t call it “building bridges that don’t fall down” — we just call it “building bridges.” Of course we don’t want them to fall down. And we should think the same way about AI: of course AI systems should be designed so that their actions are well-aligned with what human beings want. But it’s a difficult unsolved problem that hasn’t been part of the research agenda up to now.

comment by MrMind · 2017-02-09T09:26:14.811Z · LW(p) · GW(p)

Well, the whole point of this forum is to convince someone that the answer is most definitely not.

Replies from: gjm, Thomas
comment by gjm · 2017-02-09T11:58:53.684Z · LW(p) · GW(p)

the whole point of this forum

It really isn't. One of the reasons for the founding of this forum, yes. But what this forum is meant to be for is advancing the art of human rationality. If compelling evidence comes along that AI safety research is useless and AI research is vanishingly unlikely to have the sort of terrible consequences feared by the likes of MIRI, then "this forum" should be very much in the business of advocating against AI safety research.

Replies from: VincentYu, MrMind
comment by VincentYu · 2017-02-09T14:03:18.707Z · LW(p) · GW(p)

In support of your point, MIRI itself changed (in the opposite direction) from its former stance on AI research.

You've been around long enough to know this, but for others: The former ambition of MIRI in the early 2000s—back when it was called the SIAI—was to create artificial superintelligence, but that ambition changed to ensuring AI friendliness after considering the "terrible consequences [now] feared by the likes of MIRI".

In the words of Zack_M_Davis 6 years ago:

(Disclaimer: I don't speak for SingInst, nor am I presently affiliated with them.)

But recall that the old name was "Singularity Institute for Artificial Intelligence," chosen before the inherent dangers of AI were understood. The unambiguous "for" is no longer appropriate, and "Singularity Institute about Artificial Intelligence" might seem awkward.

I seem to remember someone saying back in 2008 that the organization should rebrand as the "Singularity Institute For or Against Artificial Intelligence Depending on Which Seems to Be a Better Idea Upon Due Consideration," but obviously that was only a joke.

I've always thought it's a shame they picked the name MIRI over SIFAAIDWSBBIUDC.

Replies from: Lumifer
comment by Lumifer · 2017-02-09T15:39:24.318Z · LW(p) · GW(p)

Or maybe because SIAI realized their ability to actually create an AI is non-existent.

Replies from: MrMind
comment by MrMind · 2017-02-10T09:21:19.209Z · LW(p) · GW(p)

Ha! It's wonderful news that you can take it off!
For me you're the closest human (?) correlate to the man with the hat from XKCD, and I mean that as a compliment.

Replies from: Lumifer
comment by Lumifer · 2017-02-10T15:42:40.651Z · LW(p) · GW(p)

I take it as such :-)

You do mean the black hat guy, right? (there is also a white hat guy who doesn't pop up as frequently).

Replies from: MrMind
comment by MrMind · 2017-02-10T17:16:22.238Z · LW(p) · GW(p)

Yes, the black hatter. I totally forgot about the white hat guy...

comment by MrMind · 2017-02-10T09:16:00.371Z · LW(p) · GW(p)

You're right, but.
The whole story goes like this: Eliezer founded this forum to advance the art of human rationality, so that people would stop making silly objections to the issue of AI safety like "intelligence would surely bring about morality" and things like that.
The focus of LW is human rationality and that of MIRI is AI safety, but as far as I can tell, we still haven't found any valid objections to the orthogonality thesis. On the contrary, the issue of autonomous agent safety is gaining traction and recognition.
I do agree that if we found a strong objection we should change perspective, but we still haven't, and indeed we are seeing more and more worrisome examples.

comment by Thomas · 2017-02-09T09:33:26.892Z · LW(p) · GW(p)

I know that. But the whole point of this thread is to ask stupid questions, isn't it?

And sometimes the apparently stupidest question isn't stupid after all.

comment by Lumifer · 2017-02-09T15:37:49.857Z · LW(p) · GW(p)

Yes.

comment by Viliam · 2017-02-09T10:14:26.428Z · LW(p) · GW(p)

Isn't cryonics extremely selfish?

If we are talking about "extremes", what is the base set here: people's usual spending habits? Because I don't think cryonics is more selfish than e.g. buying an expensive car.

comment by MrMind · 2017-02-09T09:23:52.637Z · LW(p) · GW(p)

I mean, couldn't the money spent on cryopreserving oneself be better spent on, say, AI safety research?

Well, 'better' here does all the work. It depends on your model and ethics: for example, if you think that resuscitation is probably nearer than full AGI, then it's better to be frozen.

2) Would the human race be eradicated if there is a worst-possible-scenario nuclear incident? Or merely a lot of people?

This question I couldn't parse correctly. A nuclear war is unlikely to wipe out humanity in its entirety, while "merely a lot of people" is the exact opposite of extinction, so...?

3) Is the study linking nut consumption to longevity found in the link below convincing?

This is far from a stupid question. The sample sizes are at least large, but the study has the usual problem of relying on p-values, which are notoriously fragile. It would require someone acquainted with statistics to judge the thing properly, if it can be done at all.

comment by Bound_up · 2017-02-13T23:52:49.993Z · LW(p) · GW(p)

I'm looking for a link I saw on Slate Star Codex. It was poetry written by a woman who took drugs every day for a year (something like that). Anyone know where I might find it?

Replies from: michaelkeenan
comment by michaelkeenan · 2017-02-28T17:15:24.097Z · LW(p) · GW(p)

That sounds like Aella, who wrote about taking acid every week for a year. Here's her reddit post about it; it includes some art she made, and one poem.

comment by ChristianKl · 2017-02-12T10:51:48.508Z · LW(p) · GW(p)

How many jobs that were done 50 years ago still exist in roughly the same form?

comment by VincentYu · 2017-02-10T06:25:09.029Z · LW(p) · GW(p)

Why is downvoting disabled, for how long has it been like this, and when will it be back?

Replies from: Viliam, Erfeyah
comment by Viliam · 2017-02-10T10:53:49.836Z · LW(p) · GW(p)

The original purpose of downvoting was to allow community moderation. Here, "moderation" means two things: (1) Giving higher visibility to high-quality content. This functionality we still have: the upvotes. (2) Removing low-quality content. Comments with karma below -5, and their whole subthreads, are collapsed by default. This is especially important when newcomers start spamming LW with a lot of low-quality comments. It happened more frequently in the past, when LW was more popular.

And the "community" aspect means that these decisions about what to show prominently and what to hide are made by the local "hive mind", i.e. everyone, or more precisely anyone above some amount of karma. This is good for several reasons: "wisdom of the crowds", preventing a few people from getting disproportionate power, but most practically because moderators are busy and unable to review everything.

Why was it disabled:

The previous political debates on LW attracted one very persistent and very "mind-killed" person, known as Eugine. This guy made it his personal mission to promote neoreactionary politics on LW, and to harass away everyone who disagrees (because people who disagree with him or with neoreaction are by definition irrational people and don't belong here). To achieve this, he abused the downvoting system.

The first form of abuse was punishing everyone who disagreed with him by going through their comment history and downvoting all their previous comments. That means one day you wrote a comment he didn't like, and the next day you lost hundreds of karma points. And afterwards, any comment you wrote immediately had one downvote.

This was against how the karma system was supposed to be used (you were supposed to vote on specific comments, not users), and it pretty much ruined our important feedback system. Eugine was asked to stop doing this; he didn't give a fuck. So his account was banned, but he created another one, and then another. It became a game of whack-a-mole, where Eugine created hundreds of accounts, and moderators tried to find and remove them. Even worse, with multiple accounts Eugine started multiple voting, which means that if he disliked a comment, he downvoted it from a dozen accounts, immediately moving its karma into negative numbers. He typically downvoted all comments that disagreed with neoreactionary politics, or which mentioned Eugine.

The LessWrong code is a clone of Reddit's; the code is not elegant, and the database is even less so. A few professional web developers tried to implement a few changes; most of them left crying, and the few changes that were successfully implemented took a lot of time. Fighting with Eugine was a huge drain of resources, and one of the main reasons why LW is currently "dead".

What now:

The short-term solution was to disable downvotes, thus removing Eugine's ability to censor comments he doesn't like. Yeah, it has a few negative side-effects.

A long-term solution is to move the whole website to a completely different codebase, which will be easier to maintain. This is a work in progress. Mindful of the planning fallacy, I will not give any estimates, except "it will be done when it will be done". On the new software, downvoting (or some other method of removing low-quality content) will presumably exist.

Replies from: VincentYu
comment by VincentYu · 2017-02-10T15:05:06.031Z · LW(p) · GW(p)

Thanks for writing such a comprehensive explanation!

comment by Erfeyah · 2017-02-10T10:10:14.299Z · LW(p) · GW(p)

I am very new here but my impression from reading around is that people were taking advantage of the system by creating multiple accounts and downvoting comments that opposed them in order to appear to be right. I am not sure though.

Replies from: Viliam
comment by Viliam · 2017-02-10T15:21:14.702Z · LW(p) · GW(p)

Correct, with the addition that it was only one person. Very persistent, though... he has kept doing this for years.