misc raw responses to a tract of Critical Rationalism
post by mako yass (MakoYass) · 2020-08-14T11:53:10.634Z · LW · GW · 52 comments
Written in response to this David Deutsch presentation. Hoping it will be comprehensible enough for the friend it was written for to respond to, and maybe comprehensible to a few other people too.
Deutsch says things like "theories don't have probabilities" ("there's no such thing as the probability of it"). (Content warning: every bayesian who watches the following two minutes will hate it.)
I think it's fairly clear from this that he doesn't have solomonoff induction internalized, he doesn't know how many of his objections to bayesian metaphysics it answers. In this case, I don't think he has practiced a method of holding multiple possible theories and acting with reasonable uncertainty over all of them. That probably would sound like a good thing to do to most popperians, but they often seem to have the wrong attitudes about how (collective) induction happens and might not be prepared to do it.
I am getting the sense that critrats frequently engage in a terrible Strong Opinionatedness where they let themselves wholly believe probably wrong theories in the expectation that this will add up to a productive intellectual ecosystem. I've mentioned this before; I think they attribute too much of the inductive process to blind selection and evolution, and underrecognise the major accelerants of it that we've developed: to extend a metaphor, the extraordinarily sophisticated managed mutation and sexual reproduction, and, to depart from the metaphor, the conscious, judicious, uncertain but principled design that the discursive subjects engage in, which is now primarily driving it.
He generally seems to have missed some sort of developmental window for learning bayesian metaphysics or something; the reason he thinks it doesn't work is that he visibly hasn't tied together a complete sense of the way it's supposed to work. Can he please study the solomonoff inductor and think more about how priors fade away as evidence comes in, about the inherent subjectivity a person's judgements must necessarily have as a consequence of their knowing different subsets of the evidence base, and about how there is no alternative to that. He is reaching towards a kind of objectivity about probabilities that finite beings cannot attain.
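To illustrate the point about priors fading away with a toy example (a coin of unknown bias; this is just the standard bayesian picture, nothing from Deutsch's talk): two agents who start from very different priors end up with nearly indistinguishable posteriors once enough evidence comes in.

```python
# Toy illustration: two agents with very different priors over a coin's bias
# converge to nearly the same posterior once enough flips are observed.
import numpy as np

biases = np.linspace(0.01, 0.99, 99)        # hypotheses: possible values of the coin's bias
prior_a = np.ones_like(biases)              # agent A: uniform prior
prior_a /= prior_a.sum()
prior_b = np.exp(-20 * (biases - 0.2)**2)   # agent B: strongly expects a bias near 0.2
prior_b /= prior_b.sum()

rng = np.random.default_rng(0)
flips = rng.random(1000) < 0.7              # the coin is actually biased 0.7 toward heads

def posterior(prior, flips):
    post = prior.copy()
    for heads in flips:
        post = post * (biases if heads else 1 - biases)  # Bayes' rule, one flip at a time
        post /= post.sum()
    return post

post_a, post_b = posterior(prior_a, flips), posterior(prior_b, flips)
print(biases[post_a.argmax()], biases[post_b.argmax()])  # both peak near 0.7
print(np.abs(post_a - post_b).sum())                     # the remaining disagreement is tiny
```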
His discussion of the alignment problem defies essential decision theory: he thinks that values are like tools, that they can weaken their holders if they are in some sense 'incorrect'; that Right Makes Might. It is essentially Landian worship of Omohundro's Monster from a more optimistic angle: the belief that the monster who rises at the end of a long descent into value drift will resemble a liberal society that we would want to build.
Despite this, his conclusion that a correct alignment process must have a value learning stage agrees with what the people who have internalised decision theory are generally trying to do (Stuart Russell's moral uncertainty and active value learning, MIRI's CEV process). I'm not sure who this is all for! Maybe it's just a point for his own students? Or for governments and their defense technology programmes, who may not be thinking about this enough, but who, when they do think, would tend to prefer to think in terms of national character and liberal progress? So, might that be why we need Deutsch? To speak of cosmopolitan, self-correcting approaches to AGI alignment in those fairly ill-suited terms, for the benefit of powers who will not see it in the terms of an engineering problem?
I would like to ask him if he maintains a distinction between values and preferences, morality and (well-formed) desire. I prefer schools that don't. But I've never asked those who do whether they have a precise account of what moral values are, as a distinct entity from desires. Maybe they have a good and useful account of values, where they somehow reliably serve the aggregate of our desires, that they just never explain because they think everyone knows it intuitively, or something. I don't. They seem too messy to prove correctness of.
Error: Prediction that humans may have time to integrate AGI-inspired mental augmentation horse exoskeletons in the short span of time between the creation of AGI and its accidental release and ascension. Neuralink will be useful, but not for that. We are stones milling about at the base of what we should infer to be a great mountain of increasing capability, and as soon as we learn to make an agent that can climb the mountain at all it will strengthen beyond our ken long before we can begin to figure out where to even plug our prototype cognitive orthotics in.
I think quite a lot of this might be a reaction to illiberal readings of Bostrom's Black Ball paper (he references it pretty clearly)... I don't know if anyone has outwardly posed such readings, and Bostrom doesn't really seem eager to go there and wrestle with the governance implications himself (one such implication: a transparent society of mass surveillance [LW · GW]. Another: the period of the long reflection, a calm period of relative stasis), but it's understandable that Deutsch would want to engage it anyway even if nobody's vocalizing it; it's definitely a response that is lurking there.
The point about how a complete cessation of the emergence of new extinction risks would be much less beautiful than an infinite but finitely convergently decreasing series of risks is interesting. I'm not convinced that those societies are going to turn out to look all that different in practice, but I'll try to carry it with me.
52 comments
comment by curi · 2020-08-22T17:18:14.796Z · LW(p) · GW(p)
Hi, Deutsch was my mentor. I run the discussion forums where we've been continuously open to debate and questions since before LW existed. I'm also familiar with Solomonoff induction, Bayes, RAZ and HPMOR. Despite several attempts, I've been broadly unable to get (useful, clear) answers from the LW crowd about our questions and criticisms related to induction. But I remain interested in trying to resolve these disagreements and to sort out epistemological issues.
Are you interested in extended discussion about this, with a goal of reaching some conclusions about CR/LW differences, or do you know anyone who is? And if you're interested, have you read FoR and BoI?
I'll begin with one comment now:
I am getting the sense that critrats frequently engage in a terrible Strong Opinionatedness where they let themselves wholly believe probably wrong theories
~All open, public groups have lots of low quality self-proclaimed members. You may be right about some critrats you've talked with or read.
But that is not a CR position. CR says we only ever believe theories tentatively. We always know they may be wrong and that we may need to reconsider. We can't 100% count on ideas. Wholly believing things is not a part of CR.
If by "wholely" you mean with a 100% probability, that is also not a CR position, since CR doesn't assign probabilities of truth to beliefs. If you insist on a probability, a CRist might say "0% or infinitesimal" (Popper made some comments similar to that) for all his beliefs, never 100%, while reiterating that probability applies to physical events so the question is misconceived.
Sometimes we act, judge, decide or (tentatively) conclude. When we do this, we have to choose something and not some other things. E.g. it may have been a close call between getting sushi or pizza, but then I chose only pizza and no sushi, not 51% pizza and 49% sushi. (Sometimes meta/mixed/compromise views are appropriate, which combine elements of rival views. E.g. I could go to a food court and get 2 slices of pizza and 2 maki rolls. But then I'm acting 100% on that plan and not following either original plan. So I'm still picking a single plan to wholely act on.)
Replies from: MakoYass, ESRogs
↑ comment by mako yass (MakoYass) · 2020-08-23T04:47:56.371Z · LW(p) · GW(p)
I'm glad to hear from you.
I had an interesting discussion about induction with my (critrat) friend Ella Hoeppner recently; I think we arrived at some things...
I think it was...
I stumbled on some quotes of DD (from this, which I should read in full at some point) criticizing, competently, the principle of induction (which is, roughly, "what was, will continue"). My stance is that it is indeed underspecified, but that Solomonoff induction pretty much provides the rest of the specification. Ella's response to solomonoff induction was "but it too is underspecified, because the programming language that it uses is arbitrary". I replied with "every language has a constant-sized interpreter specification so in the large they all end up giving values of similar sizes", but I don't really know how to back up there being some sort of reasonable upper bound on interpreter sizes. Then we ran into the fact that there is no ultimate metaphysical foundation for semantics: why are we grounding semantics on a thing like turing machines? I just don't know. The most meta metalanguage always ends up being english, or worse, demonstration; show the learner some examples and they will figure out the rules without using any language at all, and people always seem reliant on receiving demonstrations at some point in their education.
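(If I'm remembering it right, the "constant-sized interpreter" point is the invariance theorem for description length; I'm stating it from memory, so treat the formulation as a sketch rather than something from DD or Ella:)

```latex
% Invariance theorem (sketch): for any two universal machines U and V there is a
% constant c_{UV}, roughly the length of a U-program that interprets V, such that
K_U(x) \le K_V(x) + c_{UV} \quad \text{for every string } x.
% The constant does not depend on x, but nothing here bounds how large it can be,
% which is exactly the part I don't know how to back up.
```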
I think I left it at... it's easy for us to point at the category of languages that are 'computerlike' and easy to implement with simple things like transistors; that is, for some reason, what we use as a bedrock. We just will. Maybe there is nothing below there. I can't see why we should expect there to be. We will just use what works.
Alongside that, somewhat confusing the issue, there is another definition of induction; induction is whatever cognitive process takes a stream of observations of a phenomenon and produces theories that are good for anticipating future observations.
I suppose we could call that "theorizing", if the need were strong.
I've heard from some critrats, "there is no such thing as inductive cognition, it's just evolution" (lent a small bit of support by DD quotes like "why is it still conventional wisdom that we get our theories by induction?" (the answer may be; because "induction" is sometimes defined to be whatever kind of thing theories come out of)). If they mean it the way I understood it: if evolution performs the role of an inductive cognition, then evolution is an inductive cognition (collectively); there is such a thing as evolution, so there is such a thing as inductive cognition.
(I then name induction-techne, the process of coming up with theories that are useful not just for predicting the phenomena, but for *manipulating the phenomena*. It is elicited by puzzle games like The Witness (recommended), and the games Ella and I are working on, after which we might name their genre, "induction games"? (the "techne" is somewhat implied by "game"'s suggestion of interactivity)).
Are you interested in extended discussion about this, with a goal of reaching some conclusions about CR/LW differences, or do you know anyone who is?
I am evidently interested in discussing it, but I am probably not the best person for it. My background in math is not strong enough for me to really contribute to analytic epistemology, so my knowledge of bayesian epistemology is all a bit impressionistic. I had far too much difficulty citing examples of concrete applications of the bayesian approach. I can probably find more, but it takes conscious effort; there must be people for whom it doesn't.
I have not read those books. I'm definitely considering it; they sound pretty good.
I think it might be very fruitful if David got together with Judea Pearl, though, who seems to me to be the foremost developer of bayesian causal reasoning. It looks like they might not have met before, and they seem to have similarly playful approaches to language and epistemology, which makes me wonder if they might get along.
Sometimes we act, judge, decide or (tentatively) conclude
Aye, the tragedy of agency. If only we could delay acting until after we've figured everything out, it would solve so many problems.
Replies from: curi, TAG
↑ comment by curi · 2020-08-23T16:33:19.499Z · LW(p) · GW(p)
A place to start is considering what problems we're trying to solve.
Epistemology has problems like:
What is knowledge? How can new knowledge be created? What is an error? How can errors be corrected? How can disagreements between ideas be resolved? How do we learn? How can we use knowledge when making decisions? What should we do about incomplete information? Can we achieve infallible certainty (how?)? What is intelligence? How can observation be connected to thinking? Are all (good) ideas connected to observation or just some?
Are those the sorts of problems you're trying to solve when you talk about Solomonoff induction? If so, what's the best literature you know of that outlines (gives high level explanations rather than a bunch of details) how Solomonoff induction plus some other stuff (it should specify what stuff) solves those problems? (And says which remain currently unsolved problems?)
(My questions are open to anyone else, too.)
↑ comment by TAG · 2020-08-24T08:23:33.098Z · LW(p) · GW(p)
ultimate metaphysical foundation for semantics
It's worse than that: SI doesn't even try to build a meaningful ontological model.
I’ve heard from some critrats, “there is no such thing as inductive cognition, it’s just evolution”,
Why can't it be both?
Alongside that, somewhat confusing the issue, there is another definition of induction; induction is whatever cognitive process takes a stream of observations of a phenomenon and produces theories that are good for anticipating future observations
So the first definition is what? A mysterious process where the purely passive reception of sense data leads to hypothesis formation.
The critrat world has eloquent arguments against that version of induction, although no one has believed in it for a long time.
the answer may be; because “induction” is sometimes defined to be whatever kind of thing theories come out of)
Well, only sometimes.
CR doesn't have good arguments against the other kind of induction, the kind that just predicts future observations on the basis of past ones, the kind that simple organisms and algorithms can do. And it doesn't have much motivation to distinguish them. Being sweepingly anti-inductive is their thing. They believe that they believe they hold all beliefs tentatively... but that doesn't include the anti-inductive belief.
Replies from: max-kaye, MakoYass
↑ comment by Max Kaye (max-kaye) · 2020-08-25T12:53:23.313Z · LW(p) · GW(p)
CR doesn't have good arguments against the other kind of induction, the kind that just predicts future observations on the basis of past ones, the kind that simple organisms and algorithms can do.
This is the old kind of induction; Bertrand Russell had arguments against that kind of induction...
The refutations of that kind of induction are way beyond the bounds of CR.
Replies from: TAG
↑ comment by TAG · 2020-08-25T16:29:54.466Z · LW(p) · GW(p)
Bertrand Russell had arguments against that kind of induction...
Looks like the simple organisms and algorithms didn't listen to him!
Replies from: max-kaye
↑ comment by Max Kaye (max-kaye) · 2020-08-26T01:00:07.491Z · LW(p) · GW(p)
> Bertrand Russell had arguments against that kind of induction...
Looks like the simple organisms and algorithms didn't listen to him!
I don't think you're taking this seriously.
↑ comment by mako yass (MakoYass) · 2020-08-24T13:40:52.814Z · LW(p) · GW(p)
It's worse than that: SI doesn't even try to build a meaningful ontological model.
Hm, does it need one?
Why can't it be both?
I think that's what I said.
So the first definition is what?
Again, "what was, will continue". DD says something about real years never having started with 20 therefore the year 2000 wont happen, which seems to refute it as a complete specification, but on reflection I just feel like he understood it in an overly crude way because he wasn't thinking in a probabilistic way about managing the coexistence of competing theories that agree with past data but make different predictions about the future, and he still probably doesn't have that.
The reality is, you actually aren't supposed to have certainty that the year 2000 will happen; 0 and 1 are not real probabilities, etc.
Replies from: TAG, TAG
↑ comment by TAG · 2020-08-24T16:10:25.868Z · LW(p) · GW(p)
It’s worse than that: SI doesn’t even try to build a meaningful ontological model.
Hm, does it need one?
Yes, if you are going to claim that it solves the problem of attaching objective probabilities to ontological theories... or theories for short. If what it actually delivers is complexity measures on computer programs, it would be honest to say so.
comment by Max Kaye (max-kaye) · 2020-08-16T08:02:33.032Z · LW(p) · GW(p)
[...] critrats [...] let themselves wholly believe probably wrong theories in the expectation that this will add up to a productive intellectual ecosystem
As someone who thinks you'd think they're a 'critrat', this feels wrong to me. I can't speak for other CR ppl, ofc, and some CR ppl aren't good at it (like any epistemology), but for me I don't think what you describe would add up to "a productive intellectual ecosystem".
comment by Max Kaye (max-kaye) · 2020-08-16T07:50:35.241Z · LW(p) · GW(p)
I think it's fairly clear from this that he doesn't have solomonoff induction internalized, he doesn't know how many of his objections to bayesian metaphysics it answers.
I suspect, for DD, it's not about *how many* but *all*. If I come up with 10 reasons Bayesianism is wrong (so 10 criticisms), and 9 of those get answered adequately, the 1 that's still left is as bad as the 10; *any* unanswered criticism is a reason not to believe an idea. So convincing DD (or any decent Popperian) of an idea can't rely on incomplete rebuttals; the idea needs to be *uncriticised* (answered criticisms don't count here, though those answers could be criticised; that entire chain can be long and all of it needs to be resolved). There are also ideas answering questions like "what happens when you get to an 'I don't know' point?" or "what happens with two competing ideas, both of which are uncriticised?"
Clarifying point: some ideas (like MWI, string theory, etc) are very difficult to criticise by showing a contradiction with evidence, but the fact 2 competing ideas exist means they're either compatible in a way we don't realise or they offer some criticisms of each other, even if we can't easily judge the quality of those criticisms at the time.
Note: I'm not a Bayesian; DD's book *The Beginning of Infinity* convinced me that Popper's foundation for epistemology (including the ideas built on top of it / improved it) was better in a decisive way.
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2020-08-16T08:52:02.595Z · LW(p) · GW(p)
Note: I'm not a Bayesian; DD's book *The Beginning of Infinity* convinced me that Popper's foundation for epistemology (including the ideas built on top of it / improved it) was better in a decisive way
In what way are the epistemologies actually in conflict?
My impression is that it is more just a case of two groups of people who maybe don't understand each other well enough, rather than a case of substantive disagreement between the useful theories that they have, regardless of what DD thinks it is.
Bayes does not disagree with true things, nor does it disagree with useful rules of thumb. Whatever it is you have, I think it will be conceivable from bayesian epistemological primitives, and conceiving it in those primitives will give you a clearer idea of what it really is.
Replies from: max-kaye
↑ comment by Max Kaye (max-kaye) · 2020-08-22T17:57:08.926Z · LW(p) · GW(p)
In what way are the epistemologies actually in conflict?
Well, they disagree on how to judge ideas, and why ideas are okay to treat as 'true' or not.
There are practical consequences to this disagreement; some of the best CR thinkers claim MIRI are making mistakes that are detrimental to the future of humanity+AGI, for **epistemic** reasons no less.
My impression is that it is more just a case of two groups of people who maybe don't understand each other well enough, rather than a case of substantive disagreement between the useful theories that they have, regardless of what DD thinks it is.
I have a sense of something like this, too, both in the way LW and CR "read" each other, and in the more practical sense of agreement in the outcome of many applications.
I do still think there is a substantive disagreement, though. I also think DD is one of the best thinkers wrt CR and broadly endorse ~everything in BoI (there are a few caveats, a typo and improvements to how-to-vary, at least; I'll mention if more come up. The yes/no stuff I mentioned in another post is an example of one of these caveats). I mention endorsing BoI b/c if you wanted to quote something from BoI it's highly likely I wouldn't have an issue with it (so is a good source of things for critical discussion).
Bayes does not disagree with true things, nor does it disagree with useful rules of thumb.
CR agrees here, though there is a good explanation of "rules of thumb" in BoI that covers how, when, and why rules of thumb can be dangerous and/or wrong.
Whatever it is you have, I think it will be conceivable from bayesian epistemological primitives, and conceiving it in those primitives will give you a clearer idea of what it really is.
This might be a good way to try to find disagreements between BE (Bayesian Epistemology) and CR in more detail. It also tests my understanding of CR (and maybe a bit of BE too).
I've given some details on the sorts of principles in CR in my replies^1. If you'd like to try this, do you have any ideas on where to go next? I'm happy to provide more detail with some prompting about the things you take issue with or that you think need more explanation / answering criticisms.
[1]: or, at least my sub-school of thought; some of the things I've said are actually controversial within CR, but I'm not sure they'll be significant.
comment by TAG · 2020-08-15T13:19:33.384Z · LW(p) · GW(p)
I am getting the sense that critrats frequently engage in a terrible Strong Opinionatedness
I get that sense about them and about LWrats, too.
I think it’s fairly clear from this that he doesn’t have solomonoff induction internalized
Solomonoff induction isn't that useful. Apart from the computability issues, its "theories" aren't what we normally refer to as theories.
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2020-08-16T04:53:46.171Z · LW(p) · GW(p)
Mm I feel like there's a distinction, but I'm not sure how to articulate it... maybe... LWrats fear being wrong a bit more, while still going on to be boldly excitingly wrong as often as they can get away with, which tends to look a lot more neurotic.
Solomonoff induction isn't that useful
Solomonoff induction is, as far as I'm aware, the most precise extant way of understanding what occam's razor is, what "simplest theory" actually means. If we are not using it every day I think maybe we are not trying hard enough.
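For concreteness, the thing I have in mind when I say it makes "simplest theory" precise is the Solomonoff prior (I'm writing this from memory, so take the exact formulation as a sketch):

```latex
% Solomonoff prior (sketch): the prior weight of a string x is the total weight of
% all programs p that output x on a universal prefix machine U, each program
% weighted by its length |p| in bits, so shorter (simpler) programs dominate.
M(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}
```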
Replies from: TAG
↑ comment by TAG · 2020-08-16T09:30:53.983Z · LW(p) · GW(p)
Not actually being useful is a fatal flaw. "Here is the perfect way of doing science. First you need an infinite brain...".
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2020-08-17T12:15:07.797Z · LW(p) · GW(p)
Ah, sorry, I was definitely unclear. I meant "using the concept of solomonoff induction (to reiterate: as a way of understanding what occam's razor is) and its derivatives every day to clarify our principles in philosophy of science" rather than "using solomonoff induction, the process itself as specified, to compute exact probabilities for theories". No one should try to do the latter; it will give you a headache.
Replies from: TAG
comment by Max Kaye (max-kaye) · 2020-08-22T18:37:02.134Z · LW(p) · GW(p)
I would like to ask him if he maintains a distinction between values and preferences, morality and (well formed) desire.
I think he'd say 'yes' to a distinction between morality and desire, at least in the way I'm reading this sentence. My comment: Moral statements are part of epistemology and not dependent on humans or local stuff. However, as one learns more about morality and considers their own actions, their preferences progressively change to be increasingly compatible with their morality.
Being a fallibilist I think he'd add something like or roughly agree with: the desire to be moral doesn't mean all our actions become moral, we're fallible and make mistakes, so sometimes we think we're doing something moral that turns out not to be (at which point we have some criticism for our behaviour and ways to improve it).
(I'm hedging my statements here b/c I don't want to put words in DD's mouth; these are my guesses)
I prefer schools that don't.
Wouldn't that just be like hedonism or something like that? I'm not sure what would be better about a school that doesn't.
But I've never asked those who do whether they have a precise account of what moral values are, as a distinct entity from desires. Maybe they have a good and useful account of values, where they somehow reliably serve the aggregate of our desires, that they just never explain because they think everyone knows it intuitively, or something. I don't. They seem too messy to prove correctness of.
Why is the definition of values and the addition of "moral" not enough?
Definitions (from google):
[moral] values: [moral] principles or standards of behaviour; one's judgement of what is important in life.
principle: a fundamental truth or proposition that serves as the foundation for a system of belief or behaviour or for a chain of reasoning.
I'd argue for a slightly softer definition of principle, particularly it should account for: moral values and principles can be conclusions, they don't have to be taken as axiomatic, however, they are *general* and apply universally (or near-universally).
They seem too messy to prove correctness of.
Sure, but we can still learn things about them, and we can still reason about whether they're wrong or right.
Here's a relevant extract from BoI (about 20% through the book, in ch5 - there's a fair amount of presumed reading at this point)
In the case of moral philosophy, the empiricist and justificationist misconceptions are often expressed in the maxim that ‘you can’t derive an ought from an is’ (a paraphrase of a remark by the Enlightenment philosopher David Hume). It means that moral theories cannot be deduced from factual knowledge. This has become conventional wisdom, and has resulted in a kind of dogmatic despair about morality: ‘you can’t derive an ought from an is, therefore morality cannot be justified by reason’. That leaves only two options: either to embrace unreason or to try living without ever making a moral judgement. Both are liable to lead to morally wrong choices, just as embracing unreason or never attempting to explain the physical world leads to factually false theories (and not just ignorance).
Certainly you can’t derive an ought from an is, but you can’t derive a factual theory from an is either. That is not what science does. The growth of knowledge does not consist of finding ways to justify one’s beliefs. It consists of finding good explanations. And, although factual evidence and moral maxims are logically independent, factual and moral explanations are not. Thus factual knowledge can be useful in criticizing moral explanations.
For example, in the nineteenth century, if an American slave had written a bestselling book, that event would not logically have ruled out the proposition ‘Negroes are intended by Providence to be slaves.’ No experience could, because that is a philosophical theory. But it might have ruined the explanation through which many people understood that proposition. And if, as a result, such people had found themselves unable to explain to their own satisfaction why it would be Providential if that author were to be forced back into slavery, then they might have questioned the account that they had formerly accepted of what a black person really is, and what a person in general is – and then a good person, a good society, and so on.
comment by Max Kaye (max-kaye) · 2020-08-16T08:00:15.114Z · LW(p) · GW(p)
In this case, I don't think he has practiced a method of holding multiple possible theories and acting with reasonable uncertainty over all of them. That probably would sound like a good thing to do to most popperians, but they often seem to have the wrong attitudes about how (collective) induction happens and might not be prepared to do it.
I'm not sure what this would look like in practice. If you have two competing theories and don't need to act on them - there's no issue. If they're not mutually exclusive there's no issue. If the *specific* action that might be taken is the same regardless of the theories there's no issue. So it seems like the crux must be around multiple competing, mutually exclusive theories which we need to act on.
In the crux case there are ways to deal with it so that you only *have* to act on one. Which method to use depends on goals and time constraints. Some acceptable ideas: choose the one that is more quickly disproved or the one that does less damage if wrong, choose to investigate the difference between the two (like do research to find a criticism of one of them, ideally in a way that is focused on the intersection as opposed to researching something that could only affect one theory).
I think to convince a Popperian that their ideas are incomplete you need to find an example of where the Bayesian method can deal with problems CR can't.
A note on the use of 'bayesian': sorry if this isn't the right term btw. If it's not I hope you know what I mean when I say it.
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2020-08-16T09:35:22.013Z · LW(p) · GW(p)
Some acceptable ideas: choose the one that is more quickly disproved or the one that does less damage if wrong
Maximizing expected utility does these things in a very simple way to the exact extent that it should.
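A toy sketch of what I mean (the numbers and theories are invented for illustration; the point is just that "prefer the option that does less damage if your favoured theory is wrong" falls out of the expected-value arithmetic rather than needing a separate rule):

```python
# Toy example: choosing an action under two competing theories by expected utility.
# The credences, theories, and utilities are all invented for illustration.
credences = {"theory_A": 0.6, "theory_B": 0.4}

# utility of each action under each theory; acting on the wrong theory is costly
utilities = {
    "act_on_A": {"theory_A": 10, "theory_B": -50},          # great if A is true, disastrous if B is
    "act_on_B": {"theory_A": -5, "theory_B": 8},            # mildly bad if A is true, fine if B is
    "gather_more_evidence": {"theory_A": 4, "theory_B": 4}, # safe under either theory, but slower
}

def expected_utility(action):
    return sum(credences[t] * utilities[action][t] for t in credences)

for action in utilities:
    print(action, expected_utility(action))
print("chosen:", max(utilities, key=expected_utility))  # the cautious option wins, despite A being more probable
```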
Hmm...
My first impulse was to say "bayes is not a method. It is a low-level language for epistemology. Methods emerge higher in the abstraction stack. Its fandom uses just whatever methods work."
But maybe there could be something reasonably describable as a bayesian method. But I don't work enough with non-bayesian philosophers to be able to immediately know how we are different, well enough to narrow in on it.
Is the bayesian method... trying always to understand things on the math/decision theory level? Confidently: Deutsch is not doing that. His understanding of AGI is utterly anthropomorphic, and is not informed by decision theory; it is not informed by the study of reliable, deeply comprehensible essences of things, and so it will not come into play very much in the adjacent discipline of engineering.
I guess... to get closer to understanding what the bayesian methodology might be uniquely good at... I'll have to reexamine some original reasoning that I have done with it [LW · GW]... so... understanding things in terms of decisions lets me identify concepts that are basic and necessary for consistent decisionmaking (paraphrasings of consistent decisionmaking: being free and agentic and not self-defeating or easily tricked). Which let me narrow in on just the aspects of the hard problem of consciousness that must be, in some sense, real. Which led me to conclusions like "fish aren't important moral subjects, because even though they're clearly capable of suffering, experiences have magnitude, and theirs must be negligible; for it to be other than negligible, something astronomically unlikely would have needed to have happened, so it basically must be." Which means I get to be more of a pescatarian than a vegan, which is a very immediately useful realization to have arrived at.
If that argument doesn't make sense to you, well that might mean that we've just identified something that bayesian/decision theoretic reasoning can do, that can't be done without it.
I would be interested to know how Mirror Chamber strikes you though, I haven't tried to get non-bayesians to read it.
Replies from: max-kaye, max-kaye
↑ comment by Max Kaye (max-kaye) · 2020-08-16T20:36:24.275Z · LW(p) · GW(p)
But maybe there could be something reasonably describable as a bayesian method. But I don't work enough with non-bayesian philosophers to be able to immediately know how we are different, well enough to narrow in on it.
I don't know how you'd describe Bayesianism atm but I'll list some things I think are important context or major differences. I might put some things in quotes as a way to be casual but LMK if any part is not specific enough or ambiguous or whatever.
- both CR and Bayesianism answer Qs about knowledge and judging knowledge; they're incompatible b/c they make incompatible claims about the world but overlap.
- CR says that truth is objective
- explanations are the foundation of knowledge, and it's from explanations that we gain predictive power
- no knowledge is derived from the past; that's an illusion b/c we're already using pre-existing explanations as foundations
- new knowledge can be created to explain things about the past we didn't understand, but that's new knowledge in the same way the original explanation was once new knowledge
- e.g. axial tilt theory of seasons; no amount of past experience helped understand what's *really* happening, someone had to make a conjecture in terms of geometry (and maybe Newtonian physics too)
- when we have two explanations for a single phenomenon they're either the same, both wrong, or one is "right"
- "right" is different from "true" - this is where fallibilism comes in (note: I don't think you can talk about CR without talking about fallibilism; broadly they're synonyms)
- taken to logical conclusions it means roughly that all our theories are wrong in an absolute sense and we'll discover more and more better explanations about the universe to explain it
- this includes ~everything: anything we want to understand requires an explanation: quantum physics, knowledge creation, computer sciences, AGI, how minds work (which is actually the same general problem as AGI) - including human minds, economics, why people choose particular ice-cream flavors
- DD suggests in *the beginning of infinity* that we should rename scientific theories scientific "misconceptions" because that's more accurate
- anyone can be mistaken on anything
- there are rational ways to choose *exactly one* explanation (or zero if none hold up)
- if we have a reason that some explanation is false, then there is no amount of "support" which makes it less likely to be false. (this is what is meant by 'criticism'). no objectively true thing has an objectively true reason that it's false.
- so we should believe only those things for which there are no unanswered criticisms
- this is why some CR ppl are insistent on finishing and concluding discussions - if two people disagree then one must have knowledge of why the other is wrong, or they're both wrong (or both don't know enough, etc)
- to refuse to finish a discussion is either denying the counterparty the opportunity to correct an error (which was evidently important enough to start the discussion about) - this is anti-knowledge and irrational, *or* it's to deny that you have an error (or that the error can be corrected) which is also anti-knowledge and irrational.
- there are maybe things to discuss about practicality but even if there are good reasons to drop conversations for practical purposes sometimes, it doesn't explain why it happens so much.
that was less focused on differences/incompatibilities than I had in mind originally but hopefully it gives you some ideas.
Is the bayesian method... trying always to understand things on the math/decision theory level? Confidently: Deutsch is not doing that.
Unless it's maths/decision theory related, that's right. CR/Fallibilism is more about reasoning; like, an internal contradiction means an idea is wrong; there's 0 probability it's correct. Maybe someone alters the idea so it doesn't have a contradiction, which means it needs to be judged again.
His understanding of AGI is utterly anthropomorphic
I don't think that's the case. I think his understanding/theories of AGI don't have anything to do with humans (besides that we'd create one - excluding aliens showing up or whatever). There's a separate explanation for why AGI isn't going to arise randomly e.g. out of a genetic ML algorithm.
If that argument doesn't make sense to you, well that might mean that we've just identified something that bayesian/decision theoretic reasoning can do, that can't be done without it.
Well, we don't agree about fish, but whether it makes sense or not depends on your meaning. If you mean that I understand your reasoning, I think I do. If you mean that I think the reasoning is okay, maybe from your principles but I don't think it's *right*. Like I think there are issues with it such that the explanation and conclusion shouldn't be used.
ps: I realize that's a lot of text to dump all at once, sorry about that. Maybe it's a good idea to focus on one thing?
Replies from: MakoYass, MakoYass, MakoYass
↑ comment by mako yass (MakoYass) · 2020-08-21T06:47:23.258Z · LW(p) · GW(p)
Well, we don't agree about fish [...] I don't think it's *right*
Understanding what you mean by "right", I think I might agree; it's not complete, it's not especially close to certainty.
It's difficult to apply the mirror chamber's reduction of anthropic measure across different species (it was only necessitated for comparing over a pair of very similar experiences), and I'm not sure the biomass difference between fishbrain and humanbrain is such that anthropics can be used either, meaning... well, we can conclude, from the amount of rock in the universe, and the tiny number of humans in the universe, and our being humans instead of rock, that it is astronomically unlikely that anthropic measure binds in significant quantities to rock. If it did, we would almost certainly have woken up in a different sort of place. But for fish, perhaps the numbers are not large enough for us to draw a similar conclusion. (Again, I'm realizing the validity of that sort of argument doesn't clearly follow from the mirror chamber, though I think it is suggested by it)
I think my real reasons for going with pescatarianism are being fed into from other sources, here. It's not just the anthropic measure thing. Also receiving a strong push from my friends in neuroscience who claim that the neurology of fish is just way too simple to be given a lot of experiential weight, in the same way that a thermostat is too simple for us to think anything is suffering when ... [reexamines the assumptions]...
Hmm. I no longer believe their reasoning there (I should talk to them again I guess). I have seen too many bastards say "but that's merely a machine so it couldn't have conscious experience" of systems that probably would have conscious experience, and here they are saying that a biological reinforcement learning system that observably learns from painful experience could not truly suffer. It's not clear that there's a difference between that and suffering. I think fish suffer. The quantity must be small, but this is not enough to conclude that it's negligible.
(... qualia == the class of observations upon which indexical claims can be conditioned?? (I think I'm going to have to write this up properly and do a post))
Replies from: max-kaye
↑ comment by Max Kaye (max-kaye) · 2020-08-22T08:29:21.096Z · LW(p) · GW(p)
On the note of *qualia* (providing in case it helps)
DD says this in BoI when he first uses the word:
Intelligence in the general-purpose sense that Turing meant is one of a constellation of attributes of the human mind that have been puzzling philosophers for millennia; others include consciousness, free will, and meaning. A typical such puzzle is that of qualia (singular quale, which rhymes with ‘baalay’) – meaning the subjective aspect of sensations. So for instance the sensation of seeing the colour blue is a quale. Consider the following thought experiment. You are a biochemist with the misfortune to have been born with a genetic defect that disables the blue receptors in your retinas. Consequently you have a form of colour blindness in which you are able to see only red and green, and mixtures of the two such as yellow, but anything purely blue also looks to you like one of those mixtures. Then you discover a cure that will cause your blue receptors to start working. Before administering the cure to yourself, you can confidently make certain predictions about what will happen if it works. One of them is that, when you hold up a blue card as a test, you will see a colour that you have never seen before. You can predict that you will call it ‘blue’, because you already know what the colour of the card is called (and can already check which colour it is with a spectrophotometer). You can also predict that when you first see a clear daytime sky after being cured you will experience a similar quale to that of seeing the blue card. But there is one thing that neither you nor anyone else could predict about the outcome of this experiment, and that is: what blue will look like. Qualia are currently neither describable nor predictable – a unique property that should make them deeply problematic to anyone with a scientific world view (though, in the event, it seems to be mainly philosophers who worry about it).
and under "terminology" at the end of the chapter:
Quale (plural qualia) The subjective aspect of a sensation.
This is in Ch7 which is about AGI.
↑ comment by mako yass (MakoYass) · 2020-08-21T05:24:43.660Z · LW(p) · GW(p)
CR says that truth is objective
I'd say bayesian epistemology's stance is that there is one completely perfect way of understanding reality, but that it's perhaps provably unattainable to finite things.
You cannot disprove this, because you have not attained it. You do not have an explanation of quantum gravity or globally correlated variation in the decay rate of neutrinos over time or any number of other historical or physical mysteries.
(For some mysteries, like anthropic measure binding [the hard problem of consciousness], it seems provably impossible to ever test our models. We each have exactly one observation about what kinds of matter attract subjectivity, we can't share data (solipsistic doubt), we can never get more data (to "enter" a new locus of consciousness we would have to leave our old one, losing access to the observed experience that we had), but the question still matters a lot for measuring the quantity of experience over different brain architectures, so we need to have theories, even though objective truth can't be attained.)
It's good to believe that there's an objective truth and try to move towards it, but you also need to make peace with the fact that you will almost certainly never arrive at it, and dialethic epistemologies like popper's can't give you the framework you need to operate gracefully without ever getting objective truth.
We sometimes talk about aumann's agreement theorem, the claim that any two bayesians who, roughly speaking, talk for long enough, will eventually come to agree about everything. This might serve the same social purpose as an advocation of objectivity. Though I do not know whether it's really true of humans that they will converge if they talk long enough, I still hold out faith that it will be one day, if we keep trying, if we try to get better at it.
taken to logical conclusions it means roughly that all our theories are wrong in an absolute sense
Which means "wrong" is no longer a meaningful word. Do you think you can operate without having a word like "wrong"? Do you think you can operate without that concept? I think DD sometimes plays inflamatory word games, defining things poorly on purpose.
there are rational ways to choose *exactly one* explanation (or zero if none hold up)
If you point a gun at a bayesian's head and force them to make a bet against a single outcome, they're always supposed to be able to. It's just, there are often reasons not to.
we should believe only those things for which there are no unanswered criticisms
Why believe anything! There's a sense in which a bayesian doesn't have any beliefs, especially beliefs with unanswered criticisms. The great thing is that you don't need to have beliefs to methodically do your best to optimize expected utility. You can operate well amid uncertainty. For instance, I can recommend that you take vitamin D supplements just in case the joe rogan interview that the youtube algorithm served me yesterday (about how vitamin D is crucial for the respiratory immune system, and how covid severity rates differ enormously depending on it) was true. I don't need to confirm that it's true by trying to assess primary evidence; I don't need to, in every sense, "believe", because vitamin D is cheap and you should probably be taking it anyway for other reasons, and I have other stuff that I need to be reading right now.
In conclusion I don't see many substantial epistemological differences. I've heard that DD makes some pretty risible claims about the prerequisites to creative intelligence (roughly, that values must, from an engineering feasibility perspective, be learned, that it would be in some way hard to make AGI that wouldn't need to be "raised" into a value system by a big open society, that a thing with 'wrong values' couldn't participate in an open society [and open societies will be stronger] and so won't pose a major threat), but it's not obvious to me how those claims bear on bayesian epistemology.
Maybe something to do with the ease with which people who like decision theory can conceive of and describe of very fast-growing non-human-aligned agents? While DD would claim that decision theory's superintelligences are unrealistic to the point of inapplicability, and a real process of making one AGI would tend to take a long time and involve a lot of human intervention?
Replies from: ChristianKl, max-kaye
↑ comment by ChristianKl · 2020-08-28T23:36:34.205Z · LW(p) · GW(p)
In conclusion I don't see many substantial epistemological differences.
A Bayesian does have beliefs about the probability of various outcomes even if there are unanswered criticisms involved. Generally, the idea is that people examine criticisms more because they believe that the opportunity cost to answer the criticism is worth it and not just because they are unanswered.
↑ comment by Max Kaye (max-kaye) · 2020-08-22T09:49:24.364Z · LW(p) · GW(p)
> CR says that truth is objective
I'd say bayesian epistemology's stance is that there is one completely perfect way of understanding reality, but that it's perhaps provably unattainable to finite things.
You cannot disprove this, because you have not attained it. You do not have an explanation of quantum gravity or globally correlated variation in the decay rate of neutrinos over time or any number of other historical or physical mysteries.
[...]
It's good to believe that there's an objective truth and try to move towards it, but you also need to make peace with the fact that you will almost certainly never arrive at it
Yes, knowledge creation is an unending, iterative process. It could only end if we come to the big objective truth, but that can't happen (the argument for why is in BoI - the beginning of infinity).
We sometimes talk about aumann's agreement theorem, the claim that any two bayesians who, roughly speaking, talk for long enough, will eventually come to agree about everything.
I think this is true of any two *rational* people with sufficient knowledge, and it's rationality, not Bayesianism, that's important. If two partially *irrational* bayesians talk, then there's no reason to think they'd reach agreement on ~everything.
There is a subtle case with regards to creative thought, though: take two people who agree on ~everything. One of them has an idea, they now don't agree on ~everything (but can get back to that state by talking more).
WRT "sufficient knowledge": the two ppl need methods of discussing which are rational, and rational ways to resolve disagreements and impasse chains. they also need attitudes about solving problems. namely that any problem they run into in the discussion is able to be solved and that one or both of them can come up with ways to deal with *any* problem when it arises.
> taken to logical conclusions it means roughly that all our theories are wrong in an absolute sense
Which means "wrong" is no longer a meaningful word. Do you think you can operate without having a word like "wrong"? Do you think you can operate without that concept?
If it were meaningless I wouldn't have had to add "in an absolute sense". Just because an explanation is wrong in an *absolute* sense (i.e. it doesn't perfectly match reality) does not mean it's not *useful*. Fallibilism generally says it's okay to believe things that are false (which all explanations are, in some sense); however, there are conditions on those times, like there being no known unanswered criticisms and no alternatives.
Since BoI there has been more work on this problem and the reasoning around when to call something "true" (practically speaking) has improved - I think. Particularly:
- Knowledge exists relative to *problems*
- Whether knowledge applies or is correct or not can be evaluated rationally because we have *goals* (sometimes these goals are not specific enough, and there are generic ways of making your goals arbitrarily specific)
- Roughly: true things are explanations/ideas which solve your problem, have no known unanswered criticism (i.e. are not refuted), and no alternatives which have no known unanswered criticisms
- something is wrong if the conjecture that it solves the problem is refuted (and that refutation is unanswered)
- note: a criticism of an idea is itself an idea, so can be criticised (i.e. the first criticism is refuted by a second criticism) - this can be recursive and potentially go on forever (tho we know ways to make sure they don't).
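A tiny sketch of how I picture that recursion (my own rendering in code, not anything formal from CR or yes/no philosophy; the seasons example is the axial-tilt one from earlier):

```python
# Sketch: an idea counts as refuted iff it has at least one criticism that is itself
# non-refuted. Criticisms are just ideas, so the check recurses down the chain.
# (Assumes the chain is finite, i.e. the "ways to make sure they don't go on forever".)
from dataclasses import dataclass, field

@dataclass
class Idea:
    claim: str
    criticisms: list["Idea"] = field(default_factory=list)

def refuted(idea: Idea) -> bool:
    # an idea stands (is non-refuted) only if every criticism of it is itself refuted
    return any(not refuted(c) for c in idea.criticisms)

theory = Idea("seasons are caused by the Earth's distance from the sun")
criticism = Idea("the hemispheres have opposite seasons at the same distance")
theory.criticisms.append(criticism)
print(refuted(theory))   # True: the criticism is unanswered

answer = Idea("some answer to the criticism")
criticism.criticisms.append(answer)
print(refuted(theory))   # False: the criticism is now itself refuted (unless the answer gets criticised)
```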
I think DD sometimes plays inflammatory word games, defining things poorly on purpose.
I think he's in a tough spot to try and explain complex, subtle relationships in epistemology using a language where the words and grammar have been developed, in part, to be compatible with previous, incorrect epistemologies.
I don't think he defines things poorly (at least typically); and would acknowledge an incomplete/fuzzy definition if he provided one. (Note: one counterexample is enough to refute this claim I'm making)
> there are rational ways to choose *exactly one* explanation (or zero if none hold up)
If you point a gun at a bayesian's head and force them to make a bet against a single outcome, they're always supposed to be able to. It's just, there are often reasons not to.
I think you misunderstand me.
let's say you wanted a pet, we need to make a conjecture about what to buy you that will make you happy (hopefully without developing regret later). the possible set of pets to start with is all the things that anyone has ever called a pet.
with something like this there will be lots of other goals, background goals, which we need to satisfy but don't normally list. An example is that the pet doesn't kill you, so we remove snakes, elephants, and other things that might hurt you. there are other background goals like life of the pet or ongoing cost; adopting you a cat with operable cancer isn't a good solution.
there are maybe other practical goals too, like it should be an animal (no pet rocks), should be fluffy (so no fish, etc), shouldn't cost more than $100, and yearly cost is under $1000 (excluding medical but you get health insurance for that).
maybe we do this sort of refinement a bit more and get a list like: cat, dog, rabbit, mouse
you might be *happy* with any of them, but can you be *more happy* with one than any other; is there a *best* pet? **note: this is not an optimisation problem** b/c we're not turning every solution into a single unit (e.g. your 'happiness index'); we're providing *decisive reasons* for why an option should or shouldn't be included. We've also been using this term "happy" but it's more than just that, it's got other important things in there -- the important thing, though, is that it's your *preference* and it matches that (i.e. each of the goals we introduce are in fact goals of yours; put another way: the conditions we introduce correspond directly and accurately to a goal)
this is the sort of case where there's no gun to anyone's head, but we can continue to refine down to a list of exactly **one** option (or zero). let's say you wanted an animal you could easily play with -> then rabbit,mouse are excluded, so we have options: cat,dog. If you'd prefer an animal that wasn't a predator - both cat,dog excluded and we get to zero (so we need to come up with new options or remove a goal). If instead you wanted a pet that you could easily train to use a litter tray, well we can exclude a dog so you're down to one. Let's say the litter tray is the condition you imposed.
What happens if I remember ferrets can be pets and I suggest that? well now we need a *new* goal to find which of the cat or ferret you'd prefer.
Note: for most things we don't go to this level of detail b/c we don't need to; like if you have multiple apps to choose from that satisfy all your goals you can just choose one. If you find out a reason it's not good, then you've added a new goal (if you weren't originally mistaken, that is) and can go back to the list of other options.
Note 2: The method and framework I've just used wrt the pet problem is something called yes/no philosophy and has been developed by Elliot Temple over the past ~10+ years. Here are some links:
Argument · Yes or No Philosophy, Curiosity – Rejecting Gradations of Certainty, Curiosity – Critical Rationalism Epistemology Explanations, Curiosity – Critical Preferences and Strong Arguments, Curiosity – Rationally Resolving Conflicts of Ideas, Curiosity – Explaining Popper on Fallible Scientific Knowledge, Curiosity – Yes or No Philosophy Discussion with Andrew Crawshaw
Note 3: During the link-finding exercise I found this: "All ideas are either true or false and should be judged as refuted or non-refuted and not given any other status – see yes no philosophy." (credit: Alan Forrester) I think this is a good way to look at it; *technically and epistemically speaking:* true/false is not a judgement we can make, but refuted/non-refuted *is*. we use refuted/non-refuted as a proxy for false/true when making decisions, because (as fallible beings) we cannot do any better than that.
I'm curious about how a bayesian would tackle that problem. Do you just stop somewhere and say "the cat has a higher probability so we'll go with that?" Do you introduce goals like I did to eliminate options? Is the elimination of those options equivalent to something like: reducing the probability of those options being true to near-zero? (or absolute zero?) Can a bayesian use this method to eliminate options without doing probability stuff? If a bayesian *can*, what if I conjecture that it's possible to *always* do it for *all* problems? If that's the case there would be a way to decisively reach a single answer - so no need for probability. (There's always the edge case there was a mistake somewhere, but I don't think there's a meaningful answer to problems like "P(a mistake in a particular chain of reasoning)" or "P(the impact of a mistake is that the solution we came to changes)" -- note: those P(__) statements are within a well defined context like an exact and particular chain of reasoning/explanation.
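(For concreteness, here's a rough sketch in code of the elimination procedure I walked through above; the pets and goals are only the toy ones from this comment, and the rendering is mine rather than anything formal from yes/no philosophy:)

```python
# Rough sketch of the yes/no elimination above: each goal acts as a decisive criterion,
# options that fail any goal are dropped, and we refine until zero or one remain.
# The pets and goals are just the toy examples from this comment.
options = {"cat", "dog", "rabbit", "mouse", "snake", "elephant", "pet rock", "fish"}

goals = [
    ("won't hurt you",        lambda pet: pet not in {"snake", "elephant"}),
    ("is an animal",          lambda pet: pet != "pet rock"),
    ("is fluffy",             lambda pet: pet != "fish"),
    ("easy to play with",     lambda pet: pet not in {"rabbit", "mouse"}),
    ("litter-tray trainable", lambda pet: pet != "dog"),
]

for description, passes in goals:
    options = {pet for pet in options if passes(pet)}
    print(f"after '{description}': {sorted(options)}")

# Ending at one option gives the (tentative, non-refuted) choice.
# Ending at zero means we need new options, or to drop or revise a goal.
# A new option appearing (e.g. a ferret) means we need a new goal to decide between the remaining ones.
```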
Why believe anything!
So we can make decisions.
The great thing is that you don't need to have beliefs to methodically do your best to optimize expected utility
Yes you do - you need a theory of expected utility; how to measure it, predict it, manipulate it, etc. You also need a theory of how to use things (b/c my expected utility of amazing tech I don't know how to use is 0). You need to believe these theories are true, otherwise you have no way to calculate a meaningful value for expected utility!
You can operate well amid uncertainty
Yes, I additionally claim we can operate **decisively**.
In conclusion I don't see many substantial epistemological differences.
It matters more for big things, like SENS and MIRI. Both are working on things other than key problems; there is no good reason to think they'll make significant progress b/c there are other more foundational problems.
I agree practically a lot of decisions come out the same.
I've heard that DD makes some pretty risible claims about the prerequisites to creative intelligence (roughly, that values must, from an engineering feasibility perspective, be learned, that it would be in some way hard to make AGI that wouldn't need to be "raised" into a value system by a big open society, that a thing with 'wrong values' couldn't participate in an open society [and open societies will be stronger] and so won't pose a major threat), but it's not obvious to me how those claims bear on bayesian epistemology.
I don't know why they would be risible -- nobody has a good reason why his ideas are wrong, to my knowledge. They refute a lot of the fear-mongering that happens about AGI. They provide reasons why a paperclip machine isn't going to turn all matter into paperclips. They're important because they refute big parts of theories from thinkers like Bostrom. That's important because time, money, and effort are being spent in the course of taking Bostrom's theories seriously, even though we have good reasons to think they're not true. That could be time, money, and effort spent on more important problems, like figuring out how creativity works. That's the problem whose solution would actually lead to the creation of an AGI.
Calling unanswered criticisms *risible* seems irrational to me. Sure, unexpected answers can be funny the first time you hear them (though this just sounds like ppl being mean, not like it was the punchline to some untold joke), but if someone makes a serious point and you dismiss it because you think it's silly, then you're being irrational unless you have a good, robust reason it's not true.
[...] and a real process of making one AGI would tend to take a long time and involve a lot of human intervention?
He doesn't claim this at all. From memory the full argument is in Ch7 of BoI (though it has dependencies on some/all of the content in the first 6 chapters, and some subtleties are elaborated on later in the book). He expressly deals with the case where an AGI can run, say, 20,000x faster than a human (i.e. effectively arbitrarily fast). He also doesn't presume it needs to be raised like a human child or take the same resources/attention/etc.
Have you read much of BoI?
Replies from: MakoYass, TAG
↑ comment by mako yass (MakoYass) · 2020-08-30T02:50:35.186Z · LW(p) · GW(p)
pets
I think you'd probably consider the way we approach problems of this scale to be popperian in character. We open ourselves to lots of claims and talk about them and criticize them. We try to work them into a complete picture of all of the options and then pick the best one, according to a metric that is not quite utility, because the desire for a pet is unlikely to be an intrinsic one.
The gun to our head in this situation is our mortality or aging that will eventually close the window of opportunity to enjoying having a pet.
I'm not sure how to relate to this as an analytical epistemological method, though. Most of the work, for me, would involve sitting with my desire for a pet and interrogating it, knowing that it isn't at all immediately clear why I want one. I would try to see if it was a malformed expression of hunting or farming instincts. If I found that it was, the desire would dissipate, the impulse would recognize that it wouldn't get me closer to the thing that it wants.
Barring that, I would be inclined to focus on dogs, because I know that no other companion animal has evolved to live alongside humans and enjoy it in the same way that dogs have. I'm not sure where that resolution came from.
What I'm getting at is that most of the interesting parts of this problem are inarticulable. Looking for opportunities to apply analytic methods or articulable epistemic processes doesn't seem interesting to me at all.
A reasonable person's approach to solving most problems right now is to ask a huge machine learning system that nobody understands to recommend some articles from an incomprehensibly huge set of people.
Real world examples of decisionmaking generally aren't solvable, or reducible to optimal methods.
belief
Do I believe in mathematics? I can question the applicability of a mathematical model to a situation.
It's probably worth mentioning that even mathematical claims aren't beyond doubt, as mathematical claims can be arrived at in error (cosmic rays flipping bits) and it's important that we're able to notice and revert our position when that happens.
risible
My impression from observed usage was that "risible" meant "spurious, inspiring anger". Finding that the dictionary definition of a word disagrees with natural impressions of it is a very common experience for me. I could just stop using words I've never explicitly looked up the meaning of, but that doesn't seem ideal. I'm starting to wonder if dictionaries are the problem. Maybe there aren't supposed to be dictionaries. Maybe there's something very unnatural about them and they're preventing linguistic evolution that would have been useful. OTOH, there is also something very unnatural about a single language being spoken by like a billion humans or whatever it is, and English's unnaturally large vocabulary should probably be celebrated.
To clarify, it makes me angry to see someone assuming moral realism with such confidence that they might declare the most important industrial safety project in the history of humanity to be a foolish waste of time. The claim that there could be a single objectively correct human morality is not compatible with anthropology, human history, or the present political reality. It could still be true, but there is not sufficient reason to act as if it is definitely true. There should be more humility here than there is.
My first impression of a person who then goes on to claim that there is an objective, not just human, but universal morality that could bring unaligned machines into harmony with humanity is that they are lying to sell books. This is probably not the case - lying about that would be very stupid - but it's a hypothesis that I have to take seriously. When a person says something like that, it has the air of a preacher telling a nice lie that they think will endear them to people and bring together a pleasant congregation around that unifying myth of the objectively correct universal morality. Maybe it will, but they need to find a different myth, because this one will endanger everything they value.
I haven't read BoI. I've been thinking about it.
↑ comment by TAG · 2020-08-22T10:19:26.806Z · LW(p) · GW(p)
Most fraught ideas are mutually refuted... A can be refuted assuming B, B can be refuted using A.
Replies from: max-kaye
↑ comment by Max Kaye (max-kaye) · 2020-08-22T17:46:41.527Z · LW(p) · GW(p)
It's not uncommon for competing ideas to have that sort of relationship. This is a good thing, though, because you have ways of making progress: e.g. compare the two ideas to come up with an experiment or create a more specific goal. Typically refuting one of those ideas will also answer or refute the criticism attached to it.
If a theory doesn't offer some refutation for competing theories then that fact is (potentially) a criticism of that theory.
We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong. It doesn't make either theory A or B more likely or something when this happens; it just means there are two criticisms not one.
And it's always possible both are wrong, anyway.
Replies from: TAG
↑ comment by TAG · 2020-08-22T21:21:59.417Z · LW(p) · GW(p)
It's a bad thing if ideas can't be criticised at all, but it's also a bad thing if the relationship of mutual criticism is cyclic, if it doesn't have an obvious foundation or crux.
We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
Do you have a concrete example?
And it’s always possible both are wrong, anyway
Kind of, but "everything is wrong" is vulgar scepticism.
Replies from: max-kaye
↑ comment by Max Kaye (max-kaye) · 2020-08-22T21:55:27.319Z · LW(p) · GW(p)
It's a bad thing if ideas can't be criticised at all, but it's also a bad thing if the relationship of mutual criticism is cyclic, if it doesn't have an obvious foundation or crux.
Do you have an example? I can't think of an example of an infinite regress except cases where there are other options which stop the regress. (I have examples of these, but they're contrived)
> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
Do you have a concrete example?
I think some of the criticisms of inductivism Popper offered are like this. Even if Popper was wrong about big chunks of critical rationalism, it wouldn't necessarily invalidate the criticisms. Example: A proof of the impossibility of inductive probability.
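(Roughly, as I understand it, the core of that paper is that any hypothesis $h$ can be factored, relative to evidence $e$ with $0 < p(e) < 1$, as

$$h \equiv (h \lor e) \land (h \lor \lnot e)$$

and the probabilistic support $e$ gives to the second conjunct is

$$p(h \lor \lnot e \mid e) - p(h \lor \lnot e) = \big(p(h \mid e) - 1\big)\big(1 - p(e)\big) \le 0.$$

The first conjunct is deductively entailed by $e$; the second - the part of $h$ that goes beyond $e$ - is never positively supported by $e$, which is the sense in which Popper and Miller argue that any probabilistic support is purely deductive.)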
(Note: I don't think Popper was wrong but I'm also not sure it's necessary to discuss that now if we disagree; just wanted to mention)
> And it’s always possible both are wrong, anyway
Kind of, but "everything is wrong" is vulgar scepticism.
I'm not suggesting anything I said was a reason to think both theories wrong. I listed it because it was a possibility I didn't mention in the other paragraphs, and it's a bit of a trivial case for this stuff (i.e. if we come up with a reason both are wrong, and we can't answer that criticism, then we don't have to worry about either theory anymore).
Replies from: TAG
↑ comment by TAG · 2020-08-24T14:54:43.130Z · LW(p) · GW(p)
I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
The other options need to be acceptable to both parties!
I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
I don't see how that is an example, principally because it seems wrong to me.
Replies from: max-kaye
↑ comment by Max Kaye (max-kaye) · 2020-08-25T00:36:05.097Z · LW(p) · GW(p)
> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
The other options need to be acceptable to both parties!
Sure, or the parties need a rational method of resolving a disagreement on acceptability. I'm not sure why that's particularly relevant, though.
> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
I don't see how that is an example, principally because it seems wrong to me.
You didn't quote an example - I'm unsure if you meant to quote a different part?
In any case, what you've quoted isn't an example, and you don't explain why it seems wrong or what about it is an issue. Do you mean that cases exist where there is an infinite regress and it's not soluble with other methods?
I'm also not sure why this is particularly relevant.
Are we still talking about the below?
> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
Do you have a concrete example?
I did give you an example (one of Popper's arguments against inductivism).
A generalised abstract case is where someone of a particular epistemology criticises another school of epistemology on the grounds of an internal contradiction. A criticism of that person's criticism does not necessarily relate to that person's epistemology, and vice versa.
Replies from: TAG
↑ comment by TAG · 2020-08-25T07:14:13.729Z · LW(p) · GW(p)
Sure, or the parties need a rational method of resolving a disagreement on acceptability. I’m not sure why that’s particularly relevant, though.
The relevance is that CR can't guarantee that any given dispute is resolvable.
Do you have a concrete example? I did give you an example (one of Popper’s arguments against inductivism)
But I don't count it as an example, since I don't regard it as correct, let alone as a valid argument with the further property of floating free of questionable background assumptions.
In particular, it is based on bivalent logic, where 1 and 0 are the only possible values, but the loud and proud inductivists here base their arguments on probabilistic logic, where propositions have a probability between but not including 1 and 0. So "induction must be based on bivalent logic" is an assumption.
A generalised abstract case is where someone of a particular epistemology criticises another school of epistemology on the grounds of an internal contradiction.
The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.
Replies from: max-kaye
↑ comment by Max Kaye (max-kaye) · 2020-08-25T12:47:05.827Z · LW(p) · GW(p)
The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.
Reality does not contradict itself; ever. An epistemology is a theory about how knowledge works. If a theory (epistemic or otherwise) contains an internal contradiction, it cannot accurately portray reality. This is not an assumption, it's an explanatory conclusion.
I'm not convinced we can get anywhere productive continuing this discussion. If you don't think contradictions are bad, it feels like there's going to be a lot of work finding common ground.
But I don't count it as an example, since I don't regard it as correct [...]
This is irrational. Examples of relationships do not depend on whether the example is real or not. All that's required is that the relationship is clear; whether each of us judges the idea itself as true or not doesn't matter in this case. We don't need to argue this point anyway, since you provided an example:
In particular, it is based on bivalent logic, where 1 and 0 are the only possible values, but the loud and proud inductivists here base their arguments on probabilistic logic, where propositions have a probability between but not including 1 and 0. So "induction must be based on bivalent logic" is an assumption.
Cool, so do you see how the argument you made is separate from whether inductivism is right or not?
Your argument proposes a criticism of Popper's argument. The criticism is your conjecture that Popper made a mistake. Your criticism doesn't rely on whether inductivism is right or not, just whether it's consistent or not (and consistent according to some principles you hint at). Similarly, if Popper did make a mistake with that argument, it doesn't mean that CR is wrong, or that Inductivism is wrong; it just means Popper's criticism was wrong.
Curiously, you say:
But I don't count it as an example, since I don't regard it as correct,
Do you count yourself a Bayesian or Inductivist? What probability did you assign to it being correct? And what probability do you generally assign to a false-positive result when you evaluate the correctness of examples like this?
Replies from: TAG, TAG
↑ comment by TAG · 2020-08-25T16:33:57.616Z · LW(p) · GW(p)
Reality does not contradict itself
Firstly, epistemology goes first. You don't know anything about reality without having the means to acquire knowledge. Secondly, I didn't say that the PNC was actually false.
This is irrational. Examples of relationships do not depend on whether the example is real or not
So there is a relationship between the Miller and Popper paper's conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn't depend on assumptions.
Your argument proposes a criticism of Popper’s argument
No, it proposes a criticism of your argument ... the criticism that there is a contradiction between your claim that the paper makes no assumptions, and the fact that it evidently does.
Replies from: max-kaye
↑ comment by Max Kaye (max-kaye) · 2020-08-26T01:20:42.871Z · LW(p) · GW(p)
So there is a relationship between the Miller and Popper paper's conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn't depend on assumptions.
> Your argument proposes a criticism of Popper’s argument
No, it proposes a criticism of your argument ... the criticism that there is a contradiction between your claim that the paper makes no assumptions, and the fact that it evidently does.
I didn't claim that paper made no assumptions. I claimed that refuting that argument^[1] would not refute CR, and vice versa. Please review the thread, I think there's been some significant miscommunications. If something's unclear to you, you can quote it to point it out.
[1]: for clarity, the argument in question: A proof of the impossibility of inductive probability.
> Reality does not contradict itself
Firstly, epistemology goes first. You don't know anything about reality without having the means to acquire knowledge.
Inductivism is not compatible with this - it has no way to bootstrap except by some other, more foundational epistemic factors.
Also, you didn't really respond to my point or the chain of discussion-logic before that. I said an internal contradiction would be a way to refute an idea (as a second example when you asked for examples). You said contradictions being bad is an assumption. I said no, it's a conclusion, and offered an explanation (which you've ignored). In fact, through this discussion you haven't - as far as I can see - actually been interested in figuring out a) what anyone else thinks or b) where and what you might be wrong about.
Secondly, I didn't say that the PNC was actually false.
I don't think there's any point talking about this, then. We haven't had any meaningful discussion about it and I don't see why we would.
Replies from: TAG
↑ comment by TAG · 2020-08-27T16:02:58.704Z · LW(p) · GW(p)
I didn’t claim that paper made no assumptions. I claimed that refuting that argument[1] would not refute CR, and vice versa
Theoretically, if CR consists of a set of claims, then refuting one claim wouldn't refute the rest. In practice, critrats are dogmatically wedded to the non-existence of any form of induction.
Inductivism is not compatible with this—it has no way to bootstrap except by some other, more foundational epistemic factors.
I don't particularly identify as an inductivist , and I don't think that the critrat version of inductivism, is what self identified inductivists believe in.
I said no, it’s a conclusion, and offered an explanation (which you’ve ignored)
Conclusion from what? The conclusion will be based on some deeper assumption.
you haven’t - as far as I can see - actually been interested in figuring out a) what anyone else thinks
What anyone else thinks? I am very familiar with popular CR since I used to hang out in the same forums as Curi. I've also read some of the great man's works.
Anecdote time: after a long discussion about the existence of any form of induction, on a CR forum, someone eventually popped up who had asked KRP the very question, after bumping into him at a conference many years ago, and his reply was that it existed, but wasn't suitable for science.
But of course the true believing critrats weren't convinced by Word of God.
Secondly, I didn’t say that the PNC was actually false.
I don’t think there’s any point talking about this, then.
The point is that every claim in general depends on assumptions. So, in particular, the critrats don't have a disproof of induction that floats free of assumptions.
↑ comment by Periergo · 2020-08-29T20:22:50.486Z · LW(p) · GW(p)
I just discovered he keeps a wall of shame for people who left his forum:
http://curi.us/2215-list-of-fallible-ideas-evaders
Are you in this wall?
I am uncomfortable with this practice. I think I am banned from participating in curi's forum now anyway due to my comments here, so it doesn't affect me personally, but it is a little strange to have this list up with people's personal information.
↑ comment by curi · 2020-08-28T02:46:16.297Z · LW(p) · GW(p)
Anecdote time: after a long discussion about the existence of any form of induction, on a CR forum, someone eventually popped up who had asked KRP the very question, after bumping into him at a conference many years ago, and his reply was that it existed, but wasn't suitable for science.
Source?
↑ comment by mako yass (MakoYass) · 2020-08-17T08:14:26.351Z · LW(p) · GW(p)
ps: I realize that's a lot of text to dump all at once, sorry about that. Maybe it's a good idea to focus on one thing?
It might be a good idea to divide comments up so that they can be voted on separately and so that replies can branch off under them too, but it's not important!
I'll reply to the rest tomorrow I think
Replies from: max-kaye
↑ comment by Max Kaye (max-kaye) · 2020-08-22T17:30:13.160Z · LW(p) · GW(p)
I'm happy to do this. On the one hand I don't like that lots of replies create more pressure to reply to everything, but I think we'll probably be fine focusing on the stuff we find more important if we don't mind dropping some loose ends. If they become relevant we can come back to them.
↑ comment by Max Kaye (max-kaye) · 2020-08-16T20:05:28.699Z · LW(p) · GW(p)
I would be interested to know how Mirror Chamber strikes you though, I haven't tried to get non-bayesians to read it.
Will the Mirror Chamber [LW · GW] explain what "anthropic measure" (or the anthropic measure function) is?
I ended up clicking through to this [LW · GW] and I guess that the mirror chamber post is important, but I'm not sure if I should read something else first.
I started reading, and it's curious enough (and short enough) I'm willing to read the rest, but wanted to ask the above first.
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2020-08-17T07:41:18.737Z · LW(p) · GW(p)
Aye, it's kind of a definition of it, a way of seeing what it would have to mean. I don't know if I could advocate any other definitions than the one outlined here.
comment by Periergo · 2020-09-05T05:19:35.349Z · LW(p) · GW(p)
Where can I learn more about Critical Rationalism?
(Not from curi and his group, as I am not welcome there, and tbh after seeing this wall of shame: http://curi.us/2215-list-of-fallible-ideas-evaders I am glad I never posted any personal information)
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2020-09-05T10:39:03.147Z · LW(p) · GW(p)
A lot of people seemed to like Beginning of Infinity.