Posts

How do Bayesians tell what does and doesn't count as evidence (which, e.g., hypotheses may render more or less probable if true)? Is it possible for something to fuzzily-be evidence? 2021-11-30T12:52:37.929Z

Comments

Comment by LVSN on Taboo Truth · 2021-12-30T18:56:22.902Z · LW · GW

Lies which coincide with the enforcement of taboos, or lies which misrepresent the character of people thereby destroying ideal speech situations, are never noble lies. This is not a hard case to make.

Comment by LVSN on [deleted post] 2021-12-11T03:14:55.144Z

I get the sense that expeditiousness could be the 14th rationalist virtue (after the 13th, paranoia).

https://www.thesaurus.com/browse/expeditious

Comment by LVSN on How do Bayesians tell what does and doesn't count as evidence (which, e.g., hypotheses may render more or less probable if true)? Is it possible for something to fuzzily-be evidence? · 2021-12-06T16:30:36.966Z · LW · GW

Seems like "evidence" is a terrible word for the concept! "Data" is better, though "sensory data" is even less misleading while a bit clunkier, and "the set of propositions safely taken for granted" is the least misleading and the clunkiest.

Additionally: imagine the evidence appeared very quickly, and was about an emotionally charged subject. People might misremember the evidence as being one thing when it was actually something similar but different, perhaps critically different. Shouldn't correctly interpreting and remembering your experiences be regarded as an extremely important Bayesian skill, since those experiences are what set the correct amount of confidence in explanations?
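For reference, the standard Bayesian answer to the title question can be stated compactly; this is textbook material, not something claimed in the thread:

```latex
% E is evidence for H exactly when observing E raises the probability of H,
% which (for 0 < P(H) < 1) is equivalent to a likelihood ratio above 1:
\[
P(H \mid E) > P(H)
\quad\Longleftrightarrow\quad
\frac{P(E \mid H)}{P(E \mid \lnot H)} > 1
\]
% "Fuzzily being evidence" then has a natural reading: a likelihood ratio
% only slightly different from 1 is weak evidence, not non-evidence.
```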

Comment by LVSN on [deleted post] 2021-12-05T16:35:38.278Z

There's a kind of manipulation which gets little discussion due to its invisibility.

Most people start out with some uncertainty about every form of social interaction. 
Then someone associates a social interaction with emotionally loaded language and undesirability, and most people, never appreciating that they could plausibly think and interpret in the opposite direction on their own, just go with the flow of what others say is undesirable.
Then, if someone tries to fight this gradual narrowing of tolerance with arguments that explicitly appeal to people's conscience by considering all the plausible perspectives on the matter, which is far more respectful of their intellectual autonomy than letting only one view dominate, it gets processed as manipulation, guilt-tripping, and antisociality, because the arguer is speaking from a position the prior indoctrination has made unpopular.

Getting people to be as tolerant of innocent things as they naturally would be is always an uphill battle against puritans and censors.
People become puritans and censors in the first place because they believe they cannot solve their disagreements with logical argument, so they try to paint those with whom they feel tension as bad people in whatever way they can. So if Hitler ate sugar, and sugar is a niche rather than a popular luxury, you start portraying sugar as evil, and the tolerance of respectable society narrows further: no more sugar for the few who enjoy it and don't like hurting people.

Comment by LVSN on Privacy and Manipulation · 2021-12-05T07:35:26.868Z · LW · GW

It's not okay to ostracize people for behavior which actually-innocently violates weak social heuristics ("red flags"), and it's not okay to knowingly share information which makes people vulnerable to unjust ostracization. This post strongly violates my better-than-average heuristics for actually-abusive (not social-reality-abusive) behavior. The world is not a fair place; your post offers nothing in the way of tolerance for the actually-innocent.

Comment by LVSN on Split and Commit · 2021-11-21T14:43:41.143Z · LW · GW

ahaha

Comment by LVSN on Split and Commit · 2021-11-21T11:11:39.457Z · LW · GW

Ten billion times Yes.

Comment by LVSN on Sasha Chapin on bad social norms in rationality/EA · 2021-11-17T12:05:25.274Z · LW · GW

It seems like any cultural prospiracy to increase standards to exceptional levels, which I see as a good thing, would be quickly branded as 'toxic' by this outlook. It is a matter of contextual objection-solving whether or not large parts of you can be worse than a [desirably [normatively basic]] standard. If it is toxic, then it is not obvious to me that toxicity is bad, and if toxicity must be bad, then it is not clear to me that you can, in fair order, sneak in that connotation by characterizing rationalist self-standards as toxic.

Comment by LVSN on Taking a simplified model · 2021-11-16T23:20:59.951Z · LW · GW

But surely there are not *only* differences, right? Some features of sub-Dunbar groups generalize to super-Dunbar groups. I want to know the full Venn diagram; otherwise I would lose a tool which may on average be more useful (e.g. there may be more useful similarities for my particular interests than there are for your interests).

Comment by LVSN on Depositions and Rationality · 2021-11-04T01:15:15.149Z · LW · GW

The mindset being employed here is extremely insensitive to the evidence that, actually, things are complex and people aren't just "being evasive".

Comment by LVSN on [deleted post] 2021-11-02T04:55:10.067Z

My impression is that when rationalists make objections, they tend not to explicitly distinguish between correcting failure and revealing possible improvements. 

If A is abstractly true, and B is 
1. abstractly true 
2. superficially contradictory with A
3. true in a more relevant way most of the time to most people

I expect rationalists who want to prioritize B to speak as if issuing corrections to people who focus on A, instead of being open-minded that there's good reason for A in unrecognized/rare(ly considered) but necessarily existing contexts, and instead of offering their personal impression of what an improvement would look like as merely that: a personal impression.

In spite of this, I still love you guys more than any other culture; love your ambition, clarity of judgment, and charitability. I'm not a post-rat; I struggle with rationality.

Comment by LVSN on [deleted post] 2021-10-31T19:51:06.559Z

Is there a decision theory which works as follows? (Is there much literature on decision theories given iterated dilemmas/situations?)

If my actual utility doesn't match my expected utility, something went wrong.
Whatever my past self could have done better in this kind of situation in order to make actual utility match the expected utility is what I should do right now. If the patch (aka lesson) mysteriously works, why it works isn't an urgent matter to attend to, although further optimization may be possible if the nature of the patch (aka lesson) is better understood.
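A minimal sketch of the kind of rule being asked about, in Python; the class name, payoffs, and tolerance are all invented for illustration, and this is a toy learning loop rather than a worked-out decision theory:

```python
# Toy "patch on surprise" agent for an iterated dilemma (illustrative only).

class PatchingAgent:
    def __init__(self, actions):
        self.actions = actions
        # Expected utility per action, from an arbitrary starting guess.
        self.expected = {a: 0.0 for a in actions}

    def choose(self):
        # Act on current expectations.
        return max(self.actions, key=lambda a: self.expected[a])

    def observe(self, action, actual_utility, tolerance=0.1):
        gap = actual_utility - self.expected[action]
        if abs(gap) > tolerance:
            # Actual utility didn't match expected utility: something went
            # wrong. Adopt the patch that would have closed the gap, without
            # requiring an explanation of why the patch works.
            self.expected[action] += gap

agent = PatchingAgent(["cooperate", "defect"])
for realized in [3.0, 3.0, 0.5]:  # realized utilities over repeated rounds
    action = agent.choose()
    agent.observe(action, realized)
```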

Comment by LVSN on Tell the Truth · 2021-10-31T18:38:18.348Z · LW · GW

That which can be destroyed by abstract truths might also be abstractly true. 

Only when you are dealing with claims which represent fully formalized intuitions does it apply that 'that which can be destroyed by the truth (is false and therefore) should be.'

Abstract imperatives like "don't be a dick" and "be cool to each other" are important to remember even if you have a very good formalization, because you basically never know if you've really formalized the full set of intuitions, or if you've only formalized some parts of the set of intuitions which e.g. "don't be a dick" and "be cool to each other" capture.

On the other hand, I am curious as to what would happen if we formalized intuitions about levels of abstraction in general.

Comment by LVSN on Tell the Truth · 2021-10-30T23:30:53.685Z · LW · GW

You are a hero to me.

Comment by LVSN on Tell the Truth · 2021-10-30T23:28:51.270Z · LW · GW

https://imgs.xkcd.com/comics/sheeple.png

My impression is that we would be in relative heaven by now if this image realistically represented the thoughts of most people as it implicitly intends to. Most people would rather prevent legibility of comparison.

Comment by LVSN on Is nuking women and children cheaper than firebombing them? · 2021-10-29T21:45:04.565Z · LW · GW

Depravity is not a real problem. 

Anyways I'm confused by your initial reaction. I'll pretend you said something other than depravity; I'll pretend you mentioned some kind of actual real problem, like non-[meta-wanted] unwanted suffering. 

Just measure the suffering and do the calculation. 

I understand one's uncertainty about how much (non-[meta-wanted] unwanted) suffering a human life is worth, as well as one's uncertainty about how much money is worth how much suffering.

But the global facts of your conditional preferences don't go away just because the local facts (a subspace of all possible situational facts) you have to deal with aren't the facts of the conditions of those preferences. Not thinking about the questions doesn't make them go away.

This is why I value ideal speech situations. I can't pretend to have solved ethics by myself. Someone will have good, hard questions for whatever my theory is. And I don't trust (at this time)* Committees For Solving Ethics to give due reverence to ideal speech situations, nor to [good, hard] questions.

*(I may be surprised.)

Endorsed counterperspective: suffering-upon-learning-about can turn out to be a good heuristic for structural issues related to suffering. It might also be inherently meaningful, but hopefully there is something deeper than just culture underneath the unhappiness-upon-learning-about, otherwise there are some bullets to bite about so-called social progress, which I find too implausible. 

Nature does not create ideal situations for doing things that do not harm others; that's the problem. Then humans rationalize the cruelty of nature.

Comment by LVSN on Is nuking women and children cheaper than firebombing them? · 2021-10-28T22:23:11.452Z · LW · GW

You know what else is depraved? Kissing. You're literally putting orifices against orifices. Also homosexuality is depraved. But thank god cost-benefit analysis wins out sometimes over "waah waah, depravity".

Comment by LVSN on [deleted post] 2021-10-15T13:36:24.387Z

In this shortform, I want to introduce a concept of government structure I've been thinking of called representative omnilateralism. I do not know if this system works and I do not claim that it does. I'm asking for imagination here.

Representative (subagent) omnilateralism: a system under which one person or a group of people tries to reconcile the interests of all people(/subagents) in a nation (instead of just the majority of the nation) into one plan that satisfies all people(/subagents)*

I think "representative democracy" is an ambiguous term which can be used to mean some mix of representative ochlocracy or representative omnilateralism whenever the situation is convenient. The reason we like democracy is because it approximates direct omnilateralism better than other alternatives, but if we will permit *representative* democracy, why not representative omnilateralism? Much as direct democracy is a purer ochlocracy in theory, representative omnilateralism is a purer elitism and a purer defense of the less fortunate in theory, but direct omnilateralism literally has the best of all worlds. 

My impression (not verdict) is that direct omnilateralism is impossible in practice only because people are not equipped to negotiate optimally. If everyone was better at negotiating, we would have way fewer conflicts and far more business.

*(I mention subagents because people often do not accept parts of themselves which are innocent, which is a personal mistake as well as a commons mistake; direct subagent omnilateralism is an even higher aspiration than direct superagent omnilateralism)

Comment by LVSN on Is nuking women and children cheaper than firebombing them? · 2021-10-14T16:41:57.716Z · LW · GW

The confusion here is in the word "cost". In the context of lsusr's post, costs and cheapness are framed in terms of monetary costs and cheapness, yet I ask: why not consider moral costs as real, decision-critical costs? Then seek to reduce all decision-critical costs, whether moral, instrumental, or otherwise.
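A minimal sketch of what treating moral costs as decision-critical could mean operationally; the option names and numbers are made up for illustration:

```python
# Illustrative only: fold monetary and moral costs into one total
# and minimize that, rather than minimizing dollars alone.
options = {
    "option_a": {"monetary": 100.0, "moral": 900.0},
    "option_b": {"monetary": 250.0, "moral": 50.0},
}

def total_cost(costs):
    # Every decision-critical cost counts, whatever its type.
    return costs["monetary"] + costs["moral"]

best = min(options, key=lambda name: total_cost(options[name]))
print(best)  # "option_b": cheaper overall once moral costs are priced in
```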

Comment by LVSN on [deleted post] 2021-09-28T16:06:32.179Z

I want to DM about rationality. Direct messaging is not subject to so many incentive pressures. Please DM me. Please let me be free. 

Please DM me please DM me please DM me please DM me * 36

I'm looking for someone who I can share my half-baked rambly thoughts with. Shortform makes me feel terrible. 

Please DM me; let me be free; please DM me; let me be free * 105

Comment by LVSN on [deleted post] 2021-09-12T21:19:30.869Z

When a bailey is actually true, there are better ways to show it - in those cases they ARE in the motte.

Endorsed.

Comment by LVSN on [deleted post] 2021-09-12T18:09:23.184Z

Just because there are mottes and baileys, doesn't mean the baileys are wrong; they may just be less defensible in everyday non-ideal speech situations.

Comment by LVSN on [deleted post] 2021-09-07T17:28:18.888Z

To whatever extent culture must pass through biology (e.g. retinas, eardrums, stomach) before having an effect on a person, and to whatever extent culture is invented through biological means, cultural inputs are entirely compatible with biological determinism.

Comment by LVSN on [deleted post] 2021-09-07T16:15:50.519Z

Deadlines: X Smooth, Y Sharp

Recently an acquaintance told me we had to be leaving at "4:00 PM sharp." 

Knowing of the planning fallacy, I asked "Sharp? Uh-oh. As a primate animal, I naturally tend not to be very good with sharp deadlines, though your information is useful. Could you tell me when we're leaving smooth?"

"Smooth? What do you mean?"

"Smooth as opposed to sharp. Like, suppose you told me the time I should be aiming to be ready for in order to compensate for the fact that humans are bad at estimating time costs. Let's say you wanted to create a significant buffer between the time I was ready and the time I had to be ready by; the beginning of that significant buffer is the time we're leaving by smooth."

Since then, we've been saying things in the structure "X smooth, Y sharp", where X and Y are times or amounts of time. It's intuitive, catchy, simple, and very useful.

Comment by LVSN on Kids Learn by Copying · 2021-09-07T16:05:09.271Z · LW · GW

I don't take it for granted that saying something very beautiful but doing something contradictorily ugly and cynicism-inducing is less insane, nor, if it is necessarily sane, do I take it for granted that sanity is the thing we should be striving for in that case.

Comment by LVSN on Kids Learn by Copying · 2021-09-07T15:55:30.937Z · LW · GW

These two possibilities are not mutually exclusive; talking is a thing that people do. The correct answer is that it's the latter case (verbal theory) as an instance of the former category of cases (cases where people copy the behavior of others, such as fashions of thinking and talking).

I'm also not very sure that removing the ability to negotiate theories of objectivity or fairness, which are naturally controversial subjects, would make people more peaceful on average, taking the removal as a limiting condition on the development of culture from the first appearance of any human communication; I expect removing such an ability would make world histories more violent on average.

Comment by LVSN on [deleted post] 2021-09-07T14:38:13.440Z

What is normally called common sense is not common sense. Common sense is the sense that is actually common. Idealized common sense (which, I shall elaborate, is the union of [the set of thoughts you would have to be carefully trying to be common-sensical in order to make salient in your mind] and [the set of naturally common common-sense thoughts]) should be called something other than common sense, because making a wide-sweeping mental search over possible ways of being common-sensical is not common, even if a general deference and post-hoc accountability to the concept of common sense may be common.

Comment by LVSN on [deleted post] 2021-09-07T13:54:21.844Z

My response to it is: what makes you think it is naive idiocy? It seems like naive intelligence, if anything. Even if the literal belief is false, that doesn't make it a stupid thing to act on as if true. If everyone acted as if it were true, it would certainly be a stag-hunt scenario! And the benefits are still well worthwhile even if the other does not perfectly cooperate.

Stupid uncritical intolerant people will think you look childish and impertinent, but intelligent people will notice you're being bullied and you're still tolerating your interlocutor, and they will think you're super-right. You divide the world into intelligent+pro-you and stupid+against-you.

Also I might note that your attempted counter-example has an implied tone which accuses naive idiocy, rather than sounding curious with salient plausibility. The saliently plausible thing, in your attempted counter-example, is an implicit gesture that there is not a difference.

Comment by LVSN on [deleted post] 2021-09-07T13:34:10.673Z

Lately I've been thinking about what God would want from me, because I think the idea was a good influence on my life. Here's a list in progress of some things I think would characterize God's wants and judgments:

  • 1. God would want you to know the truth
  • 2. If you find yourself flinching at knowledge of serious risk factors (e.g. of your character or moral plans), God would urgently want to speak with you about it
  • 3. Resist the pull of natural wrongness
  • 3.1. Consider all of the options which are such that you would have to be looking for the obvious/common sense options in order to find them
  • 3.2. Consider many non-obvious options; consider that the right thing to do is a different concretization of an abstracted version of the wrong thing to do, is adjacent to the wrong or seemingly-right thing to do, queers the seemingly-right or wrong thing to do, or is a thing in a category which cuts sideways through categories of abstractly right or wrong things to do
  • 3.3. Every night, go over a list in progress of cognitive biases and search your memories and feelings honestly as to whether you gave into any of them
  • 4. By one third of the set of good definitions of 'making progress' that you can come up with, or by no more than six good definitions out of eighteen, make it 80% true about you that you are making progress; don't be going nowhere
  • 4.1. On an average rate of twice every five days, do a good day's work
  • 4.2. On an average rate of once every three weeks, spend a day working really hard
  • 4.3. For every extra amount of work beyond the rates specified above, God will be extra proud of you, which can become a source of great esteem and comfort.
  • 5. Reward yourself temperantly for making progress and resisting the pull of natural wrongness; your morality should be as an enlightened, wiser-than-you friend who you eagerly wish you were strong enough to follow; not a slaveholder making you regret your acquaintanceship.
  • 6. In your life, always be faithful and reliable to at least one great moral principle; have one moral job or nature that God will consider you remarkable for
  • 7. Recognize the vulnerability of others as unsettlingly reminiscent of the vulnerability in yourself

Feel free to leave suggestions for more entries; aim for excellence, and if you feel honestly that your suggestion is excellent in spite of acknowledged strong possibilities that it may be subjective and biased, don't hesitate to share. Or, hesitate the right amount before sharing; either is good.

Comment by LVSN on [deleted post] 2021-09-07T06:16:17.559Z

I am convinced that moral principles are contributory rather than absolute. I don't like the term 'particularist'; it sounds like a matter of arbitration when you put it that way; I am very reasonable about what considerations I allow to contribute to my moral judgments. I would prefer to call my morality contributist. I wonder if it makes sense to say that utilitarians are a subset of contributists.

Comment by LVSN on [deleted post] 2021-09-07T06:03:37.531Z

I found the Defeasible Reasoning SEP page because I found this thing talking about defeasible reasoning, which I found because I googled 'contextualist Bayesian'.

Comment by LVSN on [deleted post] 2021-09-07T04:35:59.263Z

Googling 'McCarthy Logic of Circumscription' brought me here; very neat.

Comment by LVSN on [deleted post] 2021-09-07T04:26:13.485Z

Interesting stuff from the Stanford Encyclopedia of Philosophy:

2.8 Occam’s Razor and the Assumption of a “Closed World”

Prediction always involves an element of defeasibility. If one predicts what will, or what would, under some hypothesis, happen, one must presume that there are no unknown factors that might interfere with those factors and conditions that are known. Any prediction can be upset by such unanticipated interventions. Prediction thus proceeds from the assumption that the situation as modeled constitutes a closed world: that nothing outside that situation could intrude in time to upset one’s predictions. In addition, we seem to presume that any factor that is not known to be causally relevant is in fact causally irrelevant, since we are constantly encountering new factors and novel combinations of factors, and it is impossible to verify their causal irrelevance in advance. This closed-world assumption is one of the principal motivations for McCarthy’s logic of circumscription (McCarthy 1982; McCarthy 1986).

3. Varieties of Approaches

We can treat the study of defeasible reasoning either (i) as a branch of epistemology (the theory of knowledge), or (ii) as a branch of logic. In the epistemological approach, defeasible reasoning can be studied as a form of inference, that is, as a process by which we add to our stock of knowledge. Alternatively, we could treat defeat as a relation between arguments in a disputational discourse. In either version, the epistemological approach is concerned with the obtaining, maintaining, and transmission of warrant, with the question of when an inference, starting with justified or warranted beliefs, produces a new belief that is also warranted, given potential defeaters. This approach focuses explicitly on the norms of belief persistence and change.

In contrast, a logical approach to defeasible reasoning fastens on a relationship between propositions or possible bodies of information. Just as deductive logic consists of the study of a certain consequence relation between propositions or sets of propositions (the relation of valid implication), so defeasible (or nonmonotonic) logic consists of the study of a different kind of consequence relation. Deductive consequence is monotonic: if a set of premises logically entails a conclusion, then any superset (any set of premises that includes all of the first set) will also entail that conclusion. In contrast, defeasible consequence is nonmonotonic. A conclusion follows defeasibly or nonmonotonically from a set of premises just in case it is true in nearly all of the models that verify the premises, or in the most normal models that do.

The two approaches are related. In particular, a logical theory of defeasible consequence will have epistemological consequences. It is presumably true that an ideally rational thinker will have a set of beliefs that are closed under defeasible, as well as deductive, consequence. However, a logical theory of defeasible consequence would have a wider scope of application than a merely epistemological theory of inference. Defeasible logic would provide a mechanism for engaging in hypothetical reasoning, not just reasoning from actual beliefs.
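The monotonicity contrast in the quoted passage can be written out explicitly; the notation below is standard in the nonmonotonic-logic literature, not part of the SEP text:

```latex
% Deductive consequence is monotonic: adding premises never retracts a conclusion.
\[
\Gamma \vDash \varphi \;\Longrightarrow\; \Gamma \cup \Delta \vDash \varphi
\]
% Defeasible consequence (often written with a snake turnstile) is nonmonotonic;
% the analogous implication can fail:
\[
\Gamma \mathrel{|\!\sim} \varphi \;\not\Longrightarrow\; \Gamma \cup \Delta \mathrel{|\!\sim} \varphi
\]
% Classic example: {bird(tweety)} defeasibly entails flies(tweety), but
% {bird(tweety), penguin(tweety)} does not.
```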

Comment by LVSN on [deleted post] 2021-09-07T02:19:21.185Z

In defense of strawmanning: there's nothing wrong with wanting to check whether someone else is making a mistake. If you forget to frame it as a question, e.g. "Just wanna make sure: what's the difference between what you're thinking and the thinking of my more obviously bad, made-up person who speaks similarly to you?", then the natural way it comes out will sound accusatory, as in our typical conception of strawmanning.

I think most people strawman because it's shorthand for this kind of attempt to check, but then they're also unaware that they're just trying to check, and they wind up defending their (actually accidental) apparent hostility, and then a polarization happens.

Strawmanning happens when we take others' judgments as plausible evidence of more general models and habits that those judgments play a part in. By asking for clarity of what models inform a judgment, we can get better over time at inferring models from judgments. It can become a limited form of mind reading.

Comment by LVSN on Three Principles to Writing Original Nonfiction · 2021-09-07T01:44:44.445Z · LW · GW

I was never a fan of this advice to remove all reference to the self when making a statement. If you think everything is broken or complicated and you don't think you have strong reasons to think you're doing any better than average, why pretend that everything is fine and we can just be authorities on the way that things are rather than how they impressed us as being?

My English teacher took off marks every time I explained things from a perspective as humble and precarious as honest, good epistemics require me to report from, using terms like "I think" and "it may be the case".

Now, I could understand if the idea was that no one knew anything and we were all just roleplaying and that school was there to teach me to roleplay. But her defense against my skepticism towards non-subjective reporting was, and I quote, "there's a system for how these things work; it'll be explained in later grades."

It was at that time I was getting truly fed up with my educators. I will not lie about my confidence in my authority.

Comment by LVSN on Kids Learn by Copying · 2021-09-06T07:28:10.389Z · LW · GW

Most people talk a lot about how they hate hypocrites. Hypocrites say you're supposed to do one thing, and then they do another, and people don't like that. I can understand admitting that it is hard to live in accordance with your stated standards, but people shouldn't lie that they believe there is anything plausibly contextually good about a standard when they don't actually believe there is anything plausibly contextually good about the standard. Otherwise you can't hold people accountable to the standards that both of you say you think have something maybe-good about them.

Of course, there is an important distinction between lying and simulacrum level 3, wherein everyone understands the situation. People who aren't in on the simulacrum level 3 shouldn't be punished for wanting to understand the inconsistency. Once the inconsistency is explained, there is no problem. The explanation should be open for everyone to see, so as not to discriminate against those who still don't know. I don't think it's autistic to be unaware of the reasons for every weird inconsistency between word and action, and it's definitely not autistic to ask about them. 

No one is simulacrum-3-omniscient, and everyone is born with very little knowledge of simulacrum 3 situations. It would be poorly calibrated to expect a consistent flow of uninterrupted simulacrum 3 stability given how little most people know.

Comment by LVSN on Kids Learn by Copying · 2021-09-06T05:12:47.448Z · LW · GW

Would it be better or worse if someone's takeaway from this post was that no one should reason about what makes a course of action or policy better or worse? That they should just copy other people?

What if copying other people meant burning suspected witches alive? What if some people who burn witches aren't really sure about the correctness of what they're doing, they care about that kind of thing, and yet they profess great certainty that their acts are in accordance with correct values? Should I not try to play to the part of them which is uncertain in order to prevent a cruel outcome, against their initial value statement?

Would you want me to try to give you values which aren't upstream from genocide, if you were born in a place which gave you values that were upstream from genocide?

The topics which are being invoked are way more fraught than is being implied. It's not obvious that you're not sneaking in a takeaway for a general topic by using this one question in a case where the takeaway doesn't generalize across the space of questions in that topic. On good faith, I don't assume you're trying to do that, but it's good to check; lampshade the possibility.

Comment by LVSN on Rafael Harth's Shortform · 2021-09-05T08:38:06.823Z · LW · GW

If you think it's very important to think about all the possible adjacent interpretations of a proposition as stated before making up your mind, it can be useful to record your initial agreement with each of those interpretations as only a small minimum divergence from total uncertainty (the uncertainty representing your uncertainty about whether you'll come up with better interpretations of the thing you think you're confident about), before you consider more ambitious numbers like 90%.

If you always do this and you wind up being wrong about some belief, then it is at least possible to think that the error you made was failing to list a sufficient number of sufficiently specific adjacent possibilities before asking yourself more seriously about what their true probabilities were. Making distinctions is a really important part of knowing the truth; don't pin all the hopes of every A-adjacent possibility on just one proposition in the set of A-adjacent possibilities. Two A-adjacent propositions can have great or critically moderate differences in likelihood; thinking only about A can mislead you about A-synonymous things.
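One way to picture the bookkeeping proposed above, as a toy Python sketch; the interpretation labels and the 0.40 reserve are invented for illustration:

```python
# Spread initial credence across adjacent interpretations of a claim,
# reserving mass for interpretations not yet listed.
interpretations = ["A as stated", "A, weaker reading", "A, stronger reading"]
reserve_for_unlisted = 0.40  # uncertainty about the option set itself

# Small, near-uniform divergence from total uncertainty on each listed reading:
per_item = (1.0 - reserve_for_unlisted) / len(interpretations)
credence = {reading: per_item for reading in interpretations}  # 0.20 each

# Only once the list feels sufficiently specific would one consider moving
# toward a more ambitious number like 0.90 on a single reading.
```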

Comment by LVSN on [deleted post] 2021-09-05T00:38:43.021Z

One thing to say about negation is that often the model uncertainty is concentrated in the negation. Any probability estimate, say of A (vs. not-A) always has a third option: MU="(Model Uncertainty) I'm confused, maybe the question doesn't make sense, maybe A isn't a coherent claim, maybe the concepts I used aren't the right concepts to use, maybe I didn't think of a possibility, etc. etc.". 

I tend to write my propositions in notepad like this:
A: 75%
B: 34%
C: 60%

And so on. Are you telling me that "~A: 75%" means not only that ~A has a 75% likelihood of being true, but also that A vs ~A has a 25% chance of being the wrong question? If that was true, I would expect 'A: 75%' to mean not only that A was true with a 75% likelihood, but also that A vs ~A is the right question with 75% likelihood (high model certainty). But can't a proposition be more or less confused/flawed on multiple different metrics, to someone who understands what this whole A/~A business is all about?
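A toy version of the bookkeeping in question, with invented numbers: if model uncertainty is tracked as its own outcome, "A: 75%" and "~A: 75%" stop being mirror images, because the leftover mass is split between the negation and "the question is confused":

```python
# Track each claim with explicit model-uncertainty mass, so that
# P(true) + P(false) + P(confused question) = 1 for every entry.
estimates = {
    "A": {"true": 0.75, "false": 0.15, "model_uncertainty": 0.10},
    "B": {"true": 0.34, "false": 0.51, "model_uncertainty": 0.15},
}

for name, masses in estimates.items():
    assert abs(sum(masses.values()) - 1.0) < 1e-9, name
```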

Comment by LVSN on [deleted post] 2021-09-04T09:46:46.401Z

My shortform post yesterday about proposition negations: could I get some discussion on that? Please DM me if you like! I need to know if and where there's been good discussion about how Bayesian estimate tracking relates to negation! I need to know if I'm looking at it the wrong way!

Comment by LVSN on [deleted post] 2021-09-03T20:40:32.991Z

Does thinking that A is 45% likely mean that you think the negation of A is 5% likely, or 55% likely? Don't answer that; the negation is 55% likely.

But we can imagine making a judgment about someone's personality. One human person accepts MBTI's framework that thinking and feeling are mutually exclusive personalities, so when they write that someone has a 55% chance of being a thinker type, they make an implicit, untracked judgment that the person has an almost 45% chance of being a feeler AND not a thinker; but a rational Bayesian is not so silly, of course; whether someone is a feeler and whether they are a thinker are two independent questions, buddy.

The models in a person's mind are predictable from the estimate on his paper, and while his estimate may be true, the models the predictions stem from may be deeply flawed.

By the logic of personality taxonomy and worldly relations, "the negation of A" has many connotations.

Maybe the trouble comes from using the words 'negation', 'opposite', and 'falsehood' where the word 'absence' would do. Presence of falsehood evidence is not the same as absence of truth evidence, even if absence of truth evidence is itself one weak kind of falsehood evidence.
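The MBTI example can be made concrete; the probabilities below are invented:

```python
# Framing 1: "thinker" and "feeler" as one mutually exclusive binary variable.
p_thinker = 0.55
p_feeler = 1.0 - p_thinker  # forced to 0.45 by the framing

# Framing 2: two separate propositions, judged independently.
p_is_thinker = 0.55
p_is_feeler = 0.60  # allowed: both can be high at once

# Nothing in framing 2 requires p_is_thinker + p_is_feeler == 1, so what
# "the negation of A" amounts to depends on which framing the estimate
# was written under.
```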

Comment by LVSN on Rafael Harth's Shortform · 2021-09-03T19:40:16.265Z · LW · GW

Someone (Tyler Cowen?) said that most people ought to assign much lower confidences to their beliefs, like 52% instead of 99% or whatever.

oops I have just gained the foundational insight for allowing myself to be converted to (explicit probability-tracking-style) Bayesianism; thank you for that

I always thought "belief is when you think something is significantly more likely than not; like 90%, or 75%, or 66%." No; even just having 2% more confidence is a huge difference given how weak existing evidence is.

If one really rational debate-enjoyer thinks A is 2% more likely than not (52%, with the negation of A at 48%), that's better than a hundred million people shouting that the negation of A is 100% likely.

Comment by LVSN on Can you control the past? · 2021-08-31T04:42:43.707Z · LW · GW

I just love this quote. (And, I need it in isolation so I can hyperlink to it.)

"When I step back in Newcomb’s case, I don’t feel especially attached to the idea that it the way, the only “rational” choice (though I admit I feel this non-attachment less in perfect twin prisoner’s dilemmas, where defecting just seems to me pretty crazy). Rather, it feels like my conviction about one-boxing start to bypass debates about what’s “rational” or “irrational.” Faced with the boxes, I don’t feel like I’m asking myself “what’s the rational choice?” I feel like I’m, well, deciding what to do. In one sense of “rational” – e.g., the counterfactual sense – two-boxing is rational. In another sense – the conditional sense — one-boxing is. What’s the “true sense,” the “real rationality”? Mu. Who cares? What’s that question even about? Perhaps, for the normative realists, there is some “true rationality,” etched into the platonic realm; a single privileged way that the normative Gods demand that you arrange your mind, on pain of being… what? “Faulty”? Silly? Subject to a certain sort of criticism? But for the anti-realists, there is just the world, different ways of doing things, different ways of using words, different amounts of money that actually end up in your pocket. Let’s not get too hung up on what gets called what."
— Joe Carlsmith

Comment by LVSN on The Death of Behavioral Economics · 2021-08-24T15:47:11.855Z · LW · GW

(surprised) No way!! I bought that book three months ago, at the recommendation of no one. I haven't read it yet, but it's good to see that I have made a good investment on my own judgment.

Comment by LVSN on [deleted post] 2021-08-20T07:06:08.790Z

Some subset of the people who agree that 'when two people disagree, only one of them can be right', together with the people who agree that A, where A := 'when two people disagree, they can both be right' and where A is meant in the sense of A' := 'when two people "disagree," they might not disagree, and they can both be right', do not have a disagreement that cashes out as differences in anticipated experiences, and therefore may only superficially disagree.

Note 1: in order for this to be unambiguously true, 'anticipated experiences' necessarily includes anticipated experiences given counterfactual conditions.

Note 1.1: Counterfactuals are not contrary to facts; they have attributes which facts can also share, and, under varying circumstances, the ratio of [the set of relevant shared attributes] to [the set of relevant unshared attributes] between a counterfactual situation and known situation may be sufficiently large that it becomes misleading to characterize the situations as [opposite] or [mostly disagreeing, as opposed to mostly agreeing]. A more fitting word would be 'laterofactual'.

Note 1.1.1: When people say that B : B := 'C and D disagree', the set of non-excluded non-[stupidly interpretable] implicatures of the statement B includes that E : E := 'C and D mostly disagree', and not only F : F := 'C and D have any amount of disagreement'.