Posts

What are some good language models to experiment with? 2023-09-10T18:31:50.272Z
Aumann-agreement is common 2023-08-26T20:22:03.738Z
A content analysis of the SQ-R questionnaire and a proposal for testing EQ-SQ theory 2023-08-09T13:51:02.036Z
If I showed the EQ-SQ theory's findings to be due to measurement bias, would anyone change their minds about it? 2023-07-29T19:38:13.285Z
Autogynephilia discourse is so absurdly bad on all sides 2023-07-23T13:12:07.982Z
Boundary Placement Rebellion 2023-07-20T17:40:00.190Z
Prospera-dump 2023-07-18T21:36:13.822Z
Are there any good, easy-to-understand examples of cases where statistical causal network discovery worked well in practice? 2023-07-12T22:08:59.916Z
I think Michael Bailey's dismissal of my autogynephilia questions for Scott Alexander and Aella makes very little sense 2023-07-10T17:39:26.325Z
What in your opinion is the biggest open problem in AI alignment? 2023-07-03T16:34:09.698Z
Which personality traits are real? Stress-testing the lexical hypothesis 2023-06-21T19:46:03.164Z
Book Review: Autoheterosexuality 2023-06-12T20:11:38.215Z
How accurate is data about past earth temperatures? 2023-06-09T21:29:11.852Z
[Market] Will AI xrisk seem to be handled seriously by the end of 2026? 2023-05-25T18:51:49.184Z
Horizontal vs vertical generality 2023-04-29T19:14:35.632Z
Core of AI projections from first principles: Attempt 1 2023-04-11T17:24:27.686Z
Is this true? @tyler_m_john: [If we had started using CFCs earlier, we would have ended most life on the planet] 2023-04-10T14:22:07.230Z
Is this true? paulg: [One special thing about AI risk is that people who understand AI well are more worried than people who understand it poorly] 2023-04-01T11:59:45.038Z
What does the economy do? 2023-03-24T10:49:33.251Z
Are robotics bottlenecked on hardware or software? 2023-03-21T07:26:52.896Z
What problems do African-Americans face? An initial investigation using Standpoint Epistemology and Surveys 2023-03-12T11:42:32.614Z
What do you think is wrong with rationalist culture? 2023-03-10T13:17:28.279Z
What are MIRI's big achievements in AI alignment? 2023-03-07T21:30:58.935Z
🤔 Coordination explosion before intelligence explosion...? 2023-03-05T20:48:55.995Z
Prediction market: Will John Wentworth's Gears of Aging series hold up in 2033? 2023-02-25T20:15:11.535Z
Somewhat against "just update all the way" 2023-02-19T10:49:20.604Z
Latent variables for prediction markets: motivation, technical guide, and design considerations 2023-02-12T17:54:33.045Z
How many of these jobs will have a 15% or more drop in employment plausibly attributable to AI by 2031? 2023-02-12T15:40:02.999Z
Do IQ tests measure intelligence? - A prediction market on my future beliefs about the topic 2023-02-04T11:19:29.163Z
What is a disagreement you have around AI safety? 2023-01-12T16:58:10.479Z
Latent variable prediction markets mockup + designer request 2023-01-08T22:18:36.050Z
Where do you get your capabilities from? 2022-12-29T11:39:05.449Z
Price's equation for neural networks 2022-12-21T13:09:16.527Z
Will Manifold Markets/Metaculus have built-in support for reflective latent variables by 2025? 2022-12-10T13:55:18.604Z
How difficult is it for countries to change their school curriculum? 2022-12-03T21:44:56.830Z
Is school good or bad? 2022-12-03T13:14:22.737Z
Is there some reason LLMs haven't seen broader use? 2022-11-16T20:04:48.473Z
Will nanotech/biotech be what leads to AI doom? 2022-11-15T17:38:18.699Z
Musings on the appropriate targets for standards 2022-11-12T20:19:38.939Z
Internalizing the damage of bad-acting partners creates incentives for due diligence 2022-11-11T20:57:41.504Z
Instrumental convergence is what makes general intelligence possible 2022-11-11T16:38:14.390Z
Have you noticed any ways that rationalists differ? [Brainstorming session] 2022-10-23T11:32:13.368Z
The highest-probability outcome can be out of distribution 2022-10-22T20:00:16.233Z
AI Research Program Prediction Markets 2022-10-20T13:42:55.113Z
Is the meaning of words chosen/interpreted to maximize correlations with other relevant queries? 2022-10-20T10:03:19.931Z
Towards a comprehensive study of potential psychological causes of the ordinary range of variation of affective gender identity in males 2022-10-12T21:10:46.440Z
What sorts of preparations ought I do in case of further escalation in Ukraine? 2022-10-01T16:44:58.046Z
Resources to find/register the rationalists that specialize in a given topic? 2022-09-29T17:20:19.752Z
Renormalization: Why Bigger is Simpler 2022-09-14T17:52:50.088Z
Are smart people's personal experiences biased against general intelligence? 2022-04-21T19:25:26.603Z

Comments

Comment by tailcalled on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-30T19:40:26.896Z · LW · GW

I tend to think of ideology as a continuum, rather than a strict binary. Like people tend to have varying degrees of belief and trust in the sides of a conflict, and various unique factors influencing their views, and this leads to a lot of shades of nuance that can't really be captured with a binary carnist/not-carnist definition.

But I think there are still some correlated beliefs where you could e.g. take their first principal component as an operationalization of carnism. Some beliefs that might go into this, many of which I have encountered from carnists:

  • "People should be allowed to freely choose whether they want to eat factory-farmed meat or not."
  • "Animals cannot suffer in any way that matters."
  • "One should take an evolutionary perspective and realize that factory farming is actually good for animals. After all, if not for humans putting a lot of effort into farming them, they wouldn't even exist at their current population levels."
  • "People who do enough good things out of their own charity deserve to eat animals without concerning themselves with the moral implications."
  • "People who design packaging for animal products ought to make it look aesthetically pleasing and comfortable."
  • "It is offensive and unreasonable for people to claim that meat-eating is a horribly harmful habit."
  • "Animals are made to be used by humans."
  • "Consuming animal products like meat or milk is healthier than being strictly vegan."

One could make a defense of some of the statements. For instance Elizabeth has made a to-me convincing defense of the last statement. I don't think this is a bug in the definition of carnism, it just shows that some carnist beliefs can be good and true. One ought to be able to admit that ideology is real and matters while also being able to recognize that it's not a black-and-white issue.
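
To make the principal-component idea concrete, here is a minimal Python sketch (entirely illustrative; the number of items, the rating scale, and the generated data are all made-up assumptions) of extracting a single "carnism" score as the first principal component of agreement ratings on statements like the ones above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 8          # eight belief statements, each rated 1-5 (illustrative)
latent = rng.normal(size=n_respondents)  # hypothetical underlying attitude
loadings = rng.uniform(0.4, 0.9, size=n_items)
responses = 3 + latent[:, None] * loadings + rng.normal(scale=0.7, size=(n_respondents, n_items))

# First principal component of the centered ratings = top eigenvector of the covariance matrix.
centered = responses - responses.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
pc1 = eigvecs[:, -1]                     # eigenvector with the largest eigenvalue
carnism_score = centered @ pc1           # one score per respondent

# Up to sign, the score recovers the latent attitude that generated the ratings.
print(abs(np.corrcoef(carnism_score, latent)[0, 1]))
```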

Comment by tailcalled on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-30T07:27:56.949Z · LW · GW

I agree in principle, though someone has to actually create a community of people who track the truth in order for this to be effective and not be outcompeted by other communities. When working individually, people don't have the resources to untangle the deception in society due to its scale.

Comment by tailcalled on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T19:56:23.633Z · LW · GW

But the fact that the wider world is so confused there's no point in pushing for truth is the point. EA needs to stay better than that, and part of that is deescalating the arms race when you're inside its boundaries. 

Agree with this. I mean I'm definitely not pushing back against your claims, I'm just pointing out the problem seems bigger than commonly understood.

Comment by tailcalled on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T19:54:57.124Z · LW · GW

Could you expand on why you think that it makes a significant difference?

  • E.g. if the goal is to model what epistemic distortions you might face, or to suggest directions of change for fewer distortions, then coherence is only of limited concern (a coherent group might be easier to change, but on the other hand it might also more easily coordinate to oppose change).
  • I'm not sure why you say they are not an ideology, at least under my model of ideology that I have developed for other purposes, they fit the definition (i.e. I believe carnism involves a set of correlated beliefs about life and society that fit together).
  • Also not sure what you mean by carnists not having an agenda, in my experience most carnists have an agenda of wanting to eat lots of cheap delicious animal flesh.

Comment by tailcalled on Aumann-agreement is common · 2023-09-29T15:35:26.215Z · LW · GW

It is true that the original theorem relies on common knowledge. In my original post, I phrased it as "a family of theorems" because one can prove various theorems with different assumptions yet similar outcomes. This is a general feature in math, where one shouldn't get distracted by the boilerplate because the core principle is often more general than the proof. So e.g. the principle you mention, of "If someone tells me their assigned probability for something, that turns my probability very close to theirs, if I think they've seen nearly strictly better evidence about it than I have.", is something I'd suggest is in the same family as Aumann's agreement theorem.
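
As a toy numeric illustration of that principle (my own example, not from the post): two people share a uniform prior over a coin's bias, one has seen a strict superset of the other's flips, and once the better-informed one announces their posterior, the other's best move is simply to adopt it.

```python
import numpy as np

grid = np.linspace(0.01, 0.99, 99)            # candidate coin biases
prior = np.full_like(grid, 1 / len(grid))     # shared uniform prior

def posterior(heads: int, flips: int) -> np.ndarray:
    likelihood = grid**heads * (1 - grid)**(flips - heads)
    unnormalized = prior * likelihood
    return unnormalized / unnormalized.sum()

# A saw 2 flips (1 head); B saw those same 2 flips plus 98 more (60 heads in total).
post_A = posterior(1, 2)
post_B = posterior(60, 100)

print("A's mean estimate before hearing B:", (grid * post_A).sum())
print("B's announced mean estimate:       ", (grid * post_B).sum())
# Since B's evidence is a superset of A's, conditioning on both datasets equals
# conditioning on B's alone, so after the announcement A's posterior is just post_B.
```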

The reason for my post is that a lot of people find Aumann's agreement theorem counterintuitive and feel like its conclusion doesn't apply to typical real-life disagreements, and therefore assume that there must be some hidden condition that makes it inapplicable in reality. What I think I showed is that Aumann's agreement theorem defines "disagreement" extremely broadly and once you think about it with such a broad conception it does indeed appear to generally apply in real life, even under far weaker conditions than the original proof requires.

I think this is useful partly because it suggests a better frame for reasoning about disagreement. For instance, I provide lots of examples of disagreements that rapidly dissipate, so if you wish to know why disagreements persist, it can be helpful to think about how persistent disagreements differ from the examples I list. For example, many persistent disagreements are about politics, where there are strong incentives for bias, so maybe some people who make political claims are dishonest. This suggests that conflict theory (the idea that political disagreement is due to differences in interests) is more accurate than mistake theory (the idea that political disagreement is due to reasoning mistakes), since mistake theory does not seem to predict that disagreement would be specific to politics, though people might find it plausible if they haven't thought about general tendencies toward agreement.

More generally I have a whole framework of disagreement and beliefs that I intend to write about.

Comment by tailcalled on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T07:59:44.281Z · LW · GW

In the followup, I admit you don't have to choose as long as you don't give up on untangling the question. So like I'm implying that there are multiple options, such as:

  • Try to figure it out (NinetyThree rejects this, "not really open to persuasion")
  • Adopt the carnist side (I think NinetyThree probably broadly does this though likely with exceptions)
  • Adopt the vegan side (NinetyThree rejects this)

Though I suppose you are right that there are also lots of other nuanced options that I haven't acknowledged, such as "decide you are uncertain between the sides, and e.g. use utility weights to manage risk while exploiting opportunities", which isn't really the same as "try to figure it out". Not sure if that's what you mean; another option would be that e.g. I have a broader view of what "try to figure it out" means than you do, or similar (though what really matters for the literal truth of my comment is what NinetyThree's view is). Or maybe you mean that there are additional sides that could be adopted? (I meant to hint at that possibility with phrasings like "the most common side", but I suppose that could also be interpreted to just be acknowledging the vegan side.) Or maybe it's just "all of the above"?

I do genuinely think that there is value in thinking of it as a 2D space of tradeoffs for cheap epistemics <-> strong epistemics and pro animal <-> pro human (realistically one could also put in the environment too, and realistically on the cheap epistemics side it's probably anti human <-> anti animal). I agree that my original comment lacked nuance wrt the ways one could exist within that tradeoff, though I am unsure to what extent your objection is about the tradeoff framing vs the nuance in the ways one can exist in it.

Comment by tailcalled on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T07:35:52.474Z · LW · GW

Yes, I tried participating in this twice and am probably somewhat inspired by it.

Comment by tailcalled on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T07:28:06.512Z · LW · GW

Does it also not seem true in the context of my followup clarification?

Comment by tailcalled on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T07:11:58.170Z · LW · GW

Ok I'm getting downvoted to oblivion because of this, so let me clarify:

So I guess the question is whether you prefer being in an epistemic environment that has declared war on humans or an epistemic environment that has declared war on farm animals.

If, like NinetyThree, you decide to give up on untangling the question for yourself because of all the lying ("I would describe my epistemic status as "not really open to persuasion""), then you still have to make decisions, which in practice means following some side in the conflict, and the most common side is the carnist side which has the problems I mention.

I don't want to be in a situation where I have to give up on untangling the question (see my top-level comment proposing a research community), but if I'm being honest I can't exactly say that it's invalid for NinetyThree to do so.

Comment by tailcalled on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T07:05:26.410Z · LW · GW

I would really like to have a community of people who take truth-seeking seriously. While I can do some research, the world is too big for me to research most things. Furthermore, the value of the research that I do could be much bigger if others could benefit from it, but this would require a community that upholds proper epistemic standards towards me and communicates value of information well. I assume other people face the same problems, of not having the resources to research everything, and finding that it is inefficient for them to research the things they do research.

I think this can be fixed by getting a couple of honest people representing different interests together for each topic, and having them perform research that answers the most commonly relevant questions on the topic and write up the answers in a convenient format.

(At least up to a point? People are, probably rightfully, skeptical that this approach can be used to research who is an abuser or not. But for "scientific" questions like veganism, which concern subjects that are present in many places across the world, like human nutritional needs or means of food production, and on which it is therefore feasible to collect direct information without too much interference, it seems like it should work.)

The rationalist community seems too loosely organized to handle this automatically. The EA community seems too biased and maybe also too loose to handle it. So I would like to create a community within rationalism to address it. For now, here is a Discord link for it: https://discord.gg/sTqMq8ey

Note that I don't mean this to bash vegans. While the vegan community is often dishonest, I have the impression that the carnist community is also often dishonest. I think that people on all sides are too focused on creating counternarratives to places where they are being attacked, instead of creating actionable answers to important questions, and I would like a community that just focuses on 1) figuring out what questions people have, and 2) answering them as accurately as possible, in easy-to-understand formats, and communicating the ranges of uncertainty and the raw evidence used to answer them.

Comment by tailcalled on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T06:37:59.804Z · LW · GW

My impression is that while vegans are not truth-seeking, carnists are also not truth-seeking. This includes making ag-gag laws, putting pictures of free animals on packages containing factory-farmed animal flesh, denying that animals have feelings and can experience pain using nonsense arguments, hiding information about factory farming from children, etc.

So I guess the question is whether you prefer being in an epistemic environment that has declared war on humans or an epistemic environment that has declared war on farm animals. And I suppose as a human it's easier to be in the latter, as long as you don't mind hiring people to torture animals for your pleasure.

Edit/clarification: I don't mean that you can't choose to figure it out in more detail, only that if you do give up on figuring it out in more detail, you're more constrained.

Comment by tailcalled on Aumann-agreement is common · 2023-09-28T15:28:55.653Z · LW · GW

I don't think epistemic rationality doesn't matter, but obviously since human brains are much smaller than the universe, our minds cannot make every distinction that exists, and therefore we need some abstraction. This abstraction is best done in the context of some purpose one cares about, as then one can backchain into what distinctions do vs do not matter.

Comment by tailcalled on Aumann-agreement is common · 2023-09-28T14:39:43.816Z · LW · GW

Believing in God leads to tons of real-world errors I think.

One foundation for rationalism is pragmatism, due to e.g. Dutch book arguments, value of information, etc..

More generally, when deciding what to abstract as basically similar vs fundamentally different, it is usually critical to ask "for what purpose?", since one can of course draw hopelessly blurry or annoyingly fine-grained distinctions if one doesn't have a goal to constrain one's categorization method.

Comment by tailcalled on Aumann-agreement is common · 2023-09-28T13:34:32.752Z · LW · GW

Yes, but I still don't see what practical real-world errors people could make by seeing the things mentioned in my post as examples of Aumann's agreement theorem.

Comment by tailcalled on Aumann-agreement is common · 2023-09-28T07:40:43.035Z · LW · GW

Knowing each other's probability for a statement requires exchanging information about which statement the probability is assigned to. In basically all of my examples, this was the information exchanged.

Comment by tailcalled on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T07:33:33.444Z · LW · GW

In the first poll I voted "Accurately reporting your epistemic state" because I feel like it is systemically important and sort of a foundation on which other things like not destroying the world can be built (e.g. avoiding actions that noticeably may lead to the destruction of the world is more effective if you are better at sharing information relevant to which actions do or do not destroy the world). However if I had known that it was a poll about the next Petrov day celebration, I would have voted "Avoiding actions that noticeably increase the chance that civilization is destroyed" because I feel like that's the point of Petrov day.

Comment by tailcalled on “X distracts from Y” as a thinly-disguised fight over group status / politics · 2023-09-25T16:10:28.816Z · LW · GW

But we should not advocate for work on mitigating AI x-risk instead of working on immediate AI problems. That’s just a stupid, misleading, and self-destructive way to frame what we’re hoping for.

This works well for center-left progressives who genuinely believe that more immediate AI problems are an issue. However, a complication is that there are techy right-wing and radical left-wing rationalists who are concerned about AI x-risk, and they are also, for basically unrelated reasons, concerned about things like getting censored by big tech companies. For them, "working on more immediate AI problems" might mean supporting the tech company overreach, which is something they are genuinely opposed to and which feeds into the status competition point you raised.

Anecdotally, it seems to me that the people who adopt this framing tend to be the ones with these beliefs, and in my own life my sympathy towards this framing has correlated with these beliefs.

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-24T07:02:31.756Z · LW · GW

I had two separate arguments and it only breaks the second argument, not the first one.

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-22T17:47:39.945Z · LW · GW

What makes you say I'm scolding you?

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-22T07:45:42.488Z · LW · GW

You write in an extremely fuzzy way that I find hard to understand. This is plausibly related to the motivation for your post; I think you are trying to justify why you don't need to make your thinking crisper? But if so I think you need to focus on it from the psychology/applications/communication angle rather than from the logic/math angle, as that is more likely to be a crux.

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-21T19:50:33.881Z · LW · GW

First, a question, am I correct in understanding that when you write ~(A and ~A), the first ~ is a typo and you meant to write A and ~A (without the first ~)? Because ~(A and ~A) is a tautology and thus maps to true rather than to false.

Secondly, it seems to me that you'd have to severely mutilate your logic to make this nontrivial. For instance, rather than going by your relatively elaborate route, it seems like a far simpler route would be Earth flat and earth ~flat => Earth flat => Earth flat or Earth spherical or ....

Of course this sort of proof doesn't capture the paradoxicalness that you are aiming to capture. But in order for the proof to be invalid, you'd have to invalidate one of "A and B => A" and "A => A or B", both of which seem really fundamental to logic. I mean, what do the operators "and" and "or" even mean, if they don't validate this?
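
For concreteness, the simpler route can be written as a Lean sketch (my addition, with illustrative proposition names); it uses only conjunction elimination and disjunction introduction, and never touches excluded middle:

```lean
-- Conjunction elimination followed by disjunction introduction; no excluded middle involved.
example (Flat Spherical : Prop) (h : Flat ∧ ¬Flat) : Flat ∨ Spherical :=
  Or.inl h.1
```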

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-21T17:18:31.099Z · LW · GW

You should use a real-world example, as that would make the appropriate logical tools clearer.

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-21T08:40:02.946Z · LW · GW

Also if you are getting into proof assistants then you should probably be aware that they use the term "truth-values" in a different way than the rest of math. In the rest of math, truth-values are an external thing based on the relationship between a statement and the domain the statement is talking about. However, in proof assistants, truth-values are often used to refer to the internal notion of subsets of a one-element set; P({1}).

So while it is not equivalent to excluded middle to say that either a statement is true or a statement is false, it is equivalent to excluded middle to say that a subset of {1} is either Ø or is {1}. The logic being that if you have some proposition P, you could form the subset S = {x in {1} | P}, and if P then by definition S is {1}, and if not P then by definition S = Ø, so if P or not P then S = {1} or S = Ø.

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-21T07:50:00.121Z · LW · GW

A simpler way to see that no computable model of V exists is by a cardinality argument; V is so big that it would be paradoxical for it to have a cardinality, whereas {0,1}∗ is countable.

Wait derp this is wrong. The fancy schmancy relational approach I used specifically has the strength that it allows models of uncountable sets because only the constructible elements actually need to have a representation.

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-20T22:26:36.559Z · LW · GW

It appears to be commonly said (see the last paragraph of "Mathematical Constructivism"), that proof assistants like Agda or Coq rely on not assuming LoEM. I think this is because proof assistants rely on the principle of "you can't prove something false, only true." Theorems are the [return] types of proofs, and the "False" theorem has no inhabitants (proofs). 

It's not clear what you mean by saying that proof assistants "rely on the principle of 'you can't prove something false, only true'". There's a sense in which all math relies on this principle, and therefore proof assistants also rely on it. But proof assistants don't rely on it more than other math does. If you make inconsistent assumptions within Agda or Coq, you can prove False, just as in any other math. And they follow the principle of explosion.

But yes, proof assistants often reject the law of excluded middle. They generally do so in order to obtain two properties known as the disjunction property and the existence property. The disjunction property says that if A ∨ B is provable, then either A is provable or B is provable. The existence property says that if ∃x. P(x) is provable, then there is an expression e such that P(e) is provable.

These properties reflect the fact that proofs in Agda and Coq carry a computational meaning, so one can "run" the proofs to obtain additional information about what was proven.

One cannot have both the disjunction property and the law of excluded middle, because together they imply that the logic is complete, and consistent logics capable of expressing arithmetic cannot be complete by Gödel's incompleteness theorems.
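
A tiny Lean sketch (my own illustration) of the computational reading: a constructive proof of a disjunction is literally a term tagged with which disjunct it establishes.

```lean
-- Constructively, a proof of a disjunction is built with an explicit `Or.inl` or `Or.inr`
-- tag saying which disjunct holds; the disjunction property reflects the fact that any
-- closed proof of `A ∨ B` normalizes to such a tagged term.
example : 2 + 2 = 4 ∨ 2 + 2 = 5 := Or.inl rfl
```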

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-20T22:16:25.558Z · LW · GW

I don't know for sure but I think it is too complex for mathematicians to have come up with a way of quantifying the complexity of it. However, I can say a bit about what I mean by "too complex":

The natural numbers ℕ have a computable model. That is, in a nice language L such as the language of bitstrings {0,1}∗, we can define a relation ~ between numbers and bitstrings, where you can think of n ~ s as meaning "the number n can be written with the bitstring s". This relation can be defined in such a way as to have certain nice properties:

  • For each of the usual primitive operations over ℕ such as +, there is a corresponding computable algorithm over {0,1}∗ which preserves ~. For instance, there is an algorithm ⊕ such that for all n ~ s and m ~ t, we have (n + m) ~ (s ⊕ t).
  • For each of the usual primitive relations over ℕ such as =, there is a corresponding computable algorithm over {0,1}∗. For instance, there is an algorithm eq such that for all n ~ s and m ~ t, eq(s, t) holds exactly when n = m.

When we have a relation ~ and algorithms like this, rather than leaving the mathematical objects in question as abstract hypothetical thoughts, they become extremely concrete because you can stuff them into a computer and have the computer automatically answer questions about them.
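
Here is a toy Python sketch of such a computable model (my own illustration; the little-endian encoding and the function names are arbitrary choices): numbers are related to bitstrings, and a bitstring-addition algorithm preserves the relation.

```python
# Toy computable model of the natural numbers over bitstrings (little-endian).
# rel(n, s) holds when bitstring s denotes the number n.
def rel(n: int, s: str) -> bool:
    return n == sum(int(bit) << i for i, bit in enumerate(s))

def add(s: str, t: str) -> str:
    """Bitstring addition: the algorithm corresponding to + on numbers."""
    out, carry = [], 0
    for i in range(max(len(s), len(t))):
        a = int(s[i]) if i < len(s) else 0
        b = int(t[i]) if i < len(t) else 0
        total = a + b + carry
        out.append(str(total % 2))
        carry = total // 2
    if carry:
        out.append("1")
    return "".join(out)

# The algorithm preserves the relation: if rel(n, s) and rel(m, t), then rel(n + m, add(s, t)).
s, t = "101", "11"          # 5 and 3 in little-endian
assert rel(5, s) and rel(3, t)
assert rel(8, add(s, t))
```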

This then raises the question of whether the sets V have a computable model, which depends on what one considers the primitives. When I think of primitive operations for sets, I think of operations such as subsets {x ∈ A | φ(x)}, equality, the empty set Ø, and the set ℕ of all the natural numbers. However, these are extremely powerful operations because they are inherently infinitary, and therefore allow you to solve very difficult problems.

For instance if we let H(x, y) mean "the Turing machine indexed by x halts within y steps" (which in itself is a fairly tame proposition since you can answer it just by running Turing machine number x for y steps and seeing whether it halts), then one can solve the halting problem using set-theoretic operations just by going {y ∈ ℕ | H(x, y)} ≠ Ø. Since the halting problem is uncomputable, we must infer that any computable model of sets must be restricted to a language which lacks either subsets, equality, the empty set, or the set of all the natural numbers.

A simpler way to see that no computable model of V exists is by a cardinality argument; V is so big that it would be paradoxical for it to have a cardinality, whereas {0,1}∗ is countable. However, the problem with cardinality arguments is that one can get into weird stuff due to Skolem's paradox. Meanwhile, the observation that set theory would allow you to compute wild stuff and therefore cannot plausibly be computable holds even if you work in a countable setting via Skolem.

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-20T18:34:55.719Z · LW · GW

The encoding involves looping over the domain of discourse, which works fine for arithmetic, where the domain of discourse is the natural numbers, which are an enumerable set. However, when the domain of discourse is the sets, you can't just loop over them, because there are too many of them, with relations too complex to be represented finitely.

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-20T13:03:08.338Z · LW · GW

For a mathematical statement to be proven true or false, at a very high level, we can always basically reformulate the mathematical statement and turn it into a program that halts or doesn't halt, or stops at a finite time or continues running forever/infinite time, where the truth value of the statement depends on whether it starts with an existential quantifier or universal quantifier.

This is false. It only works if you don't use too complex a sequence of quantifiers. It is also limited to arithmetic and doesn't work with e.g. set theory.

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-20T08:00:26.031Z · LW · GW

Actually, here's something that may be helpful in understanding why the principle of bivalence is distinct from the law of excluded middle:

As I understand you, one of the core points you are making is that you want to be able to entertain incompatible models. So let's say that you have two models M1 and M2 that are incompatible with each other.

For simplicity, let's say both models share a language L in which they can express propositions, and assign truth values to statements in that language using functions T1 and T2 mapping statements to truth-values. (For instance, maybe M1 is a flat-earth approximation, and M2 is a spherical-earth approximation, so T1("The earth is flat") = True but T2("The earth is flat") = False.)

Because these are just models, your points don't apply within the models; it might be fine for an approximation to say that everything is true or false, as long as we keep in mind that it's just an approximation and different approximations might lead to different results. As a result, all of the usual principles of logic like bivalence, noncontradiction, excluded middle, etc. apply within the models.

However, outside/between the models, there is a sense that what you are saying applies. For instance we get an apparent contradiction/multiple truth values for T1("The earth is flat") vs T2("The earth is flat"). But these truth values live in separate models, so they don't really interact, and therefore aren't really a contradiction.

But you might want to have a combined model where they do interact. We can do this simply by using the 2ⁿ-valued approach I mentioned in my prior comment (here with n = 2). Define a shared model M by the truth-value function T(φ) = (T1(φ), T2(φ)). So for instance T("The earth is flat") = (True, False).

Here you might interpret (True, False) as meaning something along the lines of "true in practice but not in theory" or "true for small things but not for big things", and you might interpret (False, True) as meaning something along the lines of "technically true but not in practice" or "true for big things but not for small things". While "The earth is flat" would be an example of a natural statement with the former truth value, "The earth is finite" would be an example of a natural statement with the latter truth value. And then we also have truth values like (True, True) meaning "wholly true" and (False, False) meaning "wholly false".

M still satisfies standard logical principles like excluded middle, the law of noncontradiction, etc. This is basically because it is just two logics running side-by-side and not interacting much. This also makes it kind of boring, because you could just as well work with M1 and M2 separately.

We can make them somewhat less boring/more interacting by adding logical operators to the language that make the different logics interact. For instance, one operator we might add is a definitely/necessarily/wholly operator, denoted □, where □φ is interpreted to mean that φ is true in both M1 and M2. So for instance T(□"The earth is flat") = (False, False) because the earth isn't flat in the spherical-earth model. We can also add a sort of/possibly/partly operator, denoted ◇, where ◇φ is interpreted to mean that φ is true in at least one of M1 and M2, so for instance T(◇"The earth is flat") = (True, True).

These operators allow you to express some of the principles that you have been complaining about. For instance, there is a stronger version of excluded middle, □φ ∨ □¬φ, which is sort of an internalized version of the principle of bivalence, and this stronger version of excluded middle is false for some statements such as "The earth is flat". There's also a stronger version of the law of noncontradiction, ¬(◇φ ∧ ◇¬φ), which again is a sort of internalized notion of the principle of bivalence and is also false for the same statements the other one is false for.
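
Here is a small Python sketch (my own construction; the statements and model assignments are illustrative) of the combined truth-value function and the "wholly"/"partly" operators described above:

```python
# Two incompatible models assign classical truth values to the same statements;
# the combined model assigns the pair. Names and assignments are illustrative.
flat_earth  = {"earth is flat": True,  "earth is finite": False}
round_earth = {"earth is flat": False, "earth is finite": True}

def T(stmt):
    """Combined truth value: (value in model 1, value in model 2)."""
    return (flat_earth[stmt], round_earth[stmt])

def NOT(v):    return (not v[0], not v[1])            # pointwise negation
def OR(v, w):  return (v[0] or w[0], v[1] or w[1])    # pointwise disjunction
def AND(v, w): return (v[0] and w[0], v[1] and w[1])  # pointwise conjunction

def box(v):     return (all(v), all(v))   # "wholly": true in both models
def diamond(v): return (any(v), any(v))   # "partly": true in at least one model

v = T("earth is flat")
print(OR(v, NOT(v)))                # (True, True): pointwise excluded middle still holds
print(OR(box(v), box(NOT(v))))      # (False, False): the strengthened excluded middle fails
```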

So far so good; this seems to give you exactly what you want, right? A way to entertain multiple models at once. But this requires all of the models to use a shared language, and in my experience, this requirement is actually the killer for all of these things. For instance, Newtonian physics doesn't share a language with Einsteinian physics, as the ontology is fundamentally different. It is nearly impossible to make different logics share a language sufficiently well for this to work in interesting ways. I'd guess that the route to solving this goes through John Wentworth's Natural Abstractions, but they haven't been developed enough yet to solve it.

Overall, there's a bunch of work into this sort of logic, but it isn't useful because the fundamental problem is in how to fluently switch between ontologies, not in how to entertain multiple things at once.

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-19T22:20:20.579Z · LW · GW

It's really hard to answer these sorts of questions universally because there's a bunch of ways of setting up things that are strictly speaking different but which yield the same results overall. For instance, some take ¬A to be a primitive notion, whereas I am more used to defining ¬A to mean A → ⊥ (where ⊥ denotes falsehood). However, pretty much always the inference rules or axioms for taking ¬A to be a primitive are set up in such a way that it is equivalent to A → ⊥.

If you define it that way, the law of noncontradiction becomes (A ∧ (A → ⊥)) → ⊥, which is pretty trivial, because it is just a special case of (A ∧ (A → B)) → B, and if you don't have the rule (A ∧ (A → B)) → B then it seems like your logic must be extremely limited (since it's like an internalized version of modus ponens, a fundamental rule of reasoning).
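
Written out as a Lean sketch (mine, not from the original discussion), the point is visible directly: with negation defined as implication of falsehood, this form of noncontradiction is just modus ponens applied to the two components of the conjunction.

```lean
-- Noncontradiction, read as (A ∧ (A → ⊥)) → ⊥, proven without excluded middle:
-- apply the implication in the second component to the witness in the first.
example (A : Prop) : (A ∧ (A → False)) → False :=
  fun h => h.2 h.1
```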

I have a bunch of experience dealing with logic that rejects the law of excluded middle, but while there are a bunch of people who also experiment with rejecting the law of noncontradiction, I haven't seen anything useful come of it, I think because it is quite fundamental to reasoning.

Also, does "A" mean the same thing as "A = True"? Does "~A" mean the same thing as "A = False"?

Statements like "A = True" are kind of mixing up the semantic (or "meta") level with the syntactic (or "object") level.

G2g

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-18T22:29:47.233Z · LW · GW

If I were to swap the phrase "law of the excluded middle" in the piece for the phrase "principle of bivalence" how much would the meaning of it change

A lot. At least I associate multi-valued logic with a different sphere of research than intuitionism.

as well as overall correctness?

My impression is that a lot of people have tried to do interesting stuff with multi-valued logic to make it handle the sorts of things you mention, and they haven't made any real progress, so I would be inclined to say that it is a dead-end.

Though arguably objections like "It introduces the concept of “actually true” and “actually false” independent of whether or not we’ve chosen to believe something." also apply to multi-valued logic so idk to what extent this is even the angle you would go on it.

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-18T21:36:40.427Z · LW · GW

If there are propositions or axioms that imply each other fairly easily under common contextual assumptions, then I think it's reasonable to consider it not-quite-a-mistake to use the same name for such propositions.

What equivalences do you have in mind when you say "imply each other"?

One of the things I'm arguing is that I'm not convinced that imprecision is enough to render a work "false."

Are you convinced those mistakes are enough to render this piece false or incoherent?

It's certainly true that there is a scientific/logical conundrum about how to deal with imprecision. I know a lot about what happens when you tinker with the law of excluded middle, though, and I am not convinced this has any impact on your ability to deal with imprecision.

Comment by tailcalled on Why I Don't Believe The Law of the Excluded Middle · 2023-09-18T21:07:23.313Z · LW · GW

The Law of the Excluded Middle says that there are only two “truth-values:” namely True and False. It also says that X and not-X is false, while X or not-X is true. If we accept LoEM already, then LoEM is either true or false. However, there are already two reasons I don’t like doing that:

  1. It introduces the concept of “actually true” and “actually false” independent of whether or not we’ve chosen to believe something.
  2. It’s self-referential. So if I believe it, that means I have to believe it really hard. Weakly believing it is a lot like not believing it at all, which I find to be unfair.

You are mixing up the law of excluded middle with two-valued logic.

In logic, one distinguishes between syntax (what statements can you express, and how does one statement follow from another statement) and semantics (what external meaning does a statement have).

The syntactic side is done by having a formal language with symbols such as ∧ and ¬ that can be combined into propositions, and then specifying the rules for how to derive one proposition from another.

The validity of these rules is determined by the semantic side, where each proposition is given a meaning (truth-value). This meaning cannot be accessed from within the language, but only exists on the meta level.

Two-valuedness says that each proposition is given one of two truth-values, which if we were to go formal would look something like: for every proposition φ, T(φ) = True or T(φ) = False, where T refers to the truth-value function. On the other hand, the law of excluded middle just says φ ∨ ¬φ; this isn't even in the same language as the previous one, because this is an "internal" thing, like a possible input to T.

Two-valuedness is independent of LEM. There are multi-valued logics (i.e. logics where you have more than 2 truth values) which validate the law of excluded middle, e.g. if n is a natural number then you have a 2ⁿ-valued logic consisting of length-n sequences of booleans, with logical operators applied pointwise. This has basic odd truth values like (True, False), but these odd truth values still satisfy LEM because (True, False) ∨ ¬(True, False) = (True, False) ∨ (False, True) = (True, True). On the other hand, there are also logics that don't validate LEM but are two-valued, such as realizability logic (which is kind of complicated but basically statements such as A ∨ B are interpreted to mean "there is a procedure by which you can figure out whether A is true or B is true").

Furthermore, LEM is not self-referential; it just says φ ∨ ¬φ. One could argue that two-valuedness is self-referential, since it makes reference to T, the truth-value function; but T is not accessible within the logic, and instead two-valuedness is an externally imposed principle.

Here’s a question to conclude this piece with: Is Löb's theorem still true even if we don’t accept LoEM? (My guess is yes.)

Yes. But the principle of explosion and the principle of noncontradiction are also true even if you don't accept LoEM. Your proof of the principle of explosion didn't use the law of excluded middle at all.
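
For concreteness, a minimal Lean sketch (my addition) of explosion going through without any appeal to excluded middle:

```lean
-- Principle of explosion without excluded middle: from A and ¬A, derive any B.
example (A B : Prop) (ha : A) (hna : ¬A) : B :=
  absurd ha hna
```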

Comment by tailcalled on Closing Notes on Nonlinear Investigation · 2023-09-18T20:45:43.110Z · LW · GW

I plan on doing that.

Comment by tailcalled on Polarization is Not (Standard) Bayesian · 2023-09-18T12:42:36.737Z · LW · GW

Maybe a clearer way to frame it is that I'm objecting to this assumption:

Naturally, he treats the two independently: becoming convinced that abortion is wrong wouldn’t shift his opinions about guns. As a consequence, if he’s a Bayesian then he’s also 50-50 on the biconditional A<–>G.

Comment by tailcalled on Closing Notes on Nonlinear Investigation · 2023-09-17T00:31:58.520Z · LW · GW

This is definitely something I've thought about and have multiple layers of plans to reduce, though my plans are admittedly of questionable strength, so there are pretty legitimate reasons to doubt my idea. I will probably research this some more before writing it up. That said my idea is very different from just handing out "you can trust this person" cards.

Comment by tailcalled on Instrumental Convergence Bounty · 2023-09-16T22:52:39.124Z · LW · GW

I think the correct solution to the strawberry problem would also involve a ton of instrumental convergence? You'd need to collect resources to do research/engineering, then set up systems to experiment with strawberries/biotech, then collect generally applicable information on strawberry duplication, and then apply that to duplicate the strawberry.

Comment by tailcalled on Closing Notes on Nonlinear Investigation · 2023-09-16T21:26:30.612Z · LW · GW

I probably don't understand the risks! Like I have some similar example institutions in mind, but none that are super similar to what I'm doing or which have consequences as bad as you are implying, so I assume maybe there's a significant history that you are aware of and which I have missed. What examples do you have in mind?

I plan to write my idea in much greater detail later before actually launching it. More information now about the risks could be convenient if it shows how it is a waste of time to pursue, though alternatively if I disagree I might address the disagreements in later writeups.

Comment by tailcalled on Closing Notes on Nonlinear Investigation · 2023-09-16T21:14:27.946Z · LW · GW

Maybe the downvoters want to point out the risk of this turning into some denunciation/witchhunting/revolution eating her own children/cancel culture scenario.

Could be what they are worried about.

My current model of how witch-hunts/cancel-culture occurs is that when there is no legitimate way to get justice, people sometimes manage to arrange vigilante justice based on social alliances, but that vigilante justice is by its very nature going to be more chaotic and less accurate than a proper system.

So one consequence of my idea, if it works properly, is that it would reduce witchhunts by providing victims with effective means of achieving justice.

Comment by tailcalled on Closing Notes on Nonlinear Investigation · 2023-09-16T21:10:50.934Z · LW · GW

Ah.

I briefly mentioned this in the discord:

For a while I have been thinking about how one can best come to useful truth with respect to controversial subjects. I have developed a theoretical framework that I need to write up in detail, but for now here's the short version:

  • There is no executive which can deal out punishments or fines to pay for damages, so neither the accuser nor the accused has sufficient motivation to unravel the truth; instead most of the value will have to come from informing the community, and we suffer a commons problem because each community member is not sufficiently motivated.
  • Different parties have different questions that they are most interested in. One of the biggest jobs of the court is to distinguish all of the relevant questions so that things don't get falsely generalized.
  • Most people do not have time to work through the details, so the court needs to provide an easy-to-digest summary.
  • The court needs to dig up novel evidence, because most evidence is not publicly available, and it needs to come up with novel theory of social interactions to understand the significance of the events, making use of rationalist skills.

I'm of course open to input for how to do things differently than this, though you should expect some pushback because I do currently have some theory and observations backing the above strategy, so the discussion will have to clarify the alternatives.

Comment by tailcalled on Polarization is Not (Standard) Bayesian · 2023-09-16T17:47:05.993Z · LW · GW

I think your proof falls apart if you add "learning what the faction's positions are" to the model? Because then the update on the biconditional could occur when learning the faction's positions, rather than violating the martingale property.
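
Here is a small numeric sketch of that point (my own toy numbers): an agent who is 50-50 on A and on G, and 50-50 on how the factions bundle the two issues, updates the biconditional but not A or G upon learning the bundling.

```python
# Before learning how the factions bundle the issues, the agent is 50-50 on A,
# 50-50 on G, and 50-50 on the biconditional A <-> G. Learning the bundling moves
# the biconditional without moving A or G. Numbers are purely illustrative.
from itertools import product

worlds = []
for bundling, a, g in product(["A-with-G", "A-with-notG"], [True, False], [True, False]):
    # Under each bundling, the "correlated" combinations are much more likely.
    correlated = (a == g) if bundling == "A-with-G" else (a != g)
    p = 0.5 * (0.45 if correlated else 0.05)   # 0.5 prior on each bundling
    worlds.append((bundling, a, g, p))

def prob(pred, ws):
    return sum(p for (b, a, g, p) in ws if pred(b, a, g)) / sum(p for (*_, p) in ws)

print(prob(lambda b, a, g: a, worlds))           # P(A)      = 0.5
print(prob(lambda b, a, g: a == g, worlds))      # P(A<->G)  = 0.5

# Now learn that the factions bundle A with G:
learned = [(b, a, g, p) for (b, a, g, p) in worlds if b == "A-with-G"]
print(prob(lambda b, a, g: a, learned))          # still 0.5
print(prob(lambda b, a, g: a == g, learned))     # jumps to 0.9
```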

Comment by tailcalled on In the Short-Term, Why Couldn't You Just RLHF-out Instrumental Convergence? · 2023-09-16T12:18:20.327Z · LW · GW

LLMs/GPTs get their capabilities not through directly pursuing instrumental convergence, but through mimicking humans who hopefully have pursued instrumental convergence (the whole "stochastic parrot" insight), so it's unclear what "bad instrumental convergence" even looks like in LLMs/GPTs or what it means to erase it.

The closest thing I can see to answer the question is that LLMs sort of function as search engines and you want to prevent bad actors from gaining an advantage with those search engines so you want to censor stuff that is mostly helpful for bad activities.

They seem to have done quite well at that, so it seems basically feasible. Of course LLMs will still ordinarily empower bad actors just as they ordinarily empower everyone, so it's not a full solution.

I don't consider this very significant though as I have a hard time imagining that stochastic parrots will be the full extent of AI forever.

Comment by tailcalled on In the Short-Term, Why Couldn't You Just RLHF-out Instrumental Convergence? · 2023-09-16T11:15:48.085Z · LW · GW

Instrumental convergence is what makes general intelligence possible.

Comment by tailcalled on Closing Notes on Nonlinear Investigation · 2023-09-16T11:03:47.779Z · LW · GW

Why the downvotes?

Comment by tailcalled on Closing Notes on Nonlinear Investigation · 2023-09-16T08:15:17.363Z · LW · GW

I guess I should say, feel encouraged to join both if you want to help making the court or if you have some conflict you want the court to investigate.

Comment by tailcalled on Closing Notes on Nonlinear Investigation · 2023-09-16T08:06:12.007Z · LW · GW

My sense is that there are a good number more injustices and predators in the EA ecosystem, most of which do not look exactly like this case. But it is not my job to uncover them and I am not making it my job. If you want to have an immune system that ferrets out bad behavior, you'll have to take responsibility for building that.

I have been thinking about creating an institution to work on this kind of thing. If anyone reading this is interested in this, please contact me and/or join the following discord server: Rationalist/EA Court

Comment by tailcalled on Core Pathways of Aging · 2023-09-15T10:03:10.811Z · LW · GW

Could prions be another potential root cause? Either contributing to the DNA damage <-> mitochondrial ROS feedback loop, or as a cause of some separate conditionally independent aging factor?

Comment by tailcalled on Consciousness as a conflationary alliance term · 2023-09-14T16:44:21.067Z · LW · GW

  • (n≈2) Consciousness as experiential coherence. I have a subjective sense that my experience at any moment is a coherent whole, where each part is related or connectable to every other part. This integration of experience into a coherent whole is consciousness.

I'm not sure I understand this one.

In my current moment, I am sitting on a couch, writing a comment on LessWrong. Over on the floor, there is a pillow shaped like an otter, and I am thinking of going out to buy some food.

A lot of these experiences don't really seem connectable. The otter doesn't have anything to do with the food I am thinking of buying (though... it has a starfish-shaped pillow attached, because otters eat starfish - does that count as connecting?), and neither would have had much to do with LessWrong if not for this comment (does this comment count as connecting?). Nor do they have much to do with the couch. (... though the couch and the otter are made of a similar material, does that count as connecting? And the couch supports me in writing the LessWrong comment, does that count as connecting?)

Admittedly I have just written 4 different ways of connecting the supposedly-unconnected experiences, but these connections don't really intuitively seem related to consciousness to me, so I wonder if you really mean something else with "experiential coherence".

Comment by tailcalled on Aumann-agreement is common · 2023-09-11T10:21:26.327Z · LW · GW

https://www.lesswrong.com/posts/ybKP6e5K7e2o7dSgP/don-t-get-distracted-by-the-boilerplate

Comment by tailcalled on Aumann-agreement is common · 2023-09-10T08:56:42.266Z · LW · GW

I should say, by the way you talk about these things, it sounds like you have a purpose or application in mind for the concepts in question, one where my definitions don't work. However I don't know what this application is, and so I can't grant you that it holds or give my analysis of it.

If instead of poking holes in my analysis, you described your application and showed how your impression of my analysis gives the wrong results in that application, then I think the conversation could proceed more effectively.