Saying that some things are right and others wrong is pretty standard round here. I don't think I'm breaking any rules. And I don't think you avoid making plonking statements yourself.
We do have some laws that are explicit about scale, for instance speed limits and blood alcohol levels. However, not everything is easily quantified. Money changing hands can be a proxy for something reaching too large a scale.
Possibly related:
The other day it was raining heavily. I chose to take an umbrella rather than shaking my fist at the sky. Shaking your fist at the sky seems pretty stupid, but people do analogous things all the time.
Complaining to your ingroup about your outgroup isn't going to change your outgroup. Complaining that you are misunderstood isn't going to make you understood. Changing the way you communicate might. You are not in control of how people interpret you, but you are in control of what you say.
It might be unfortunate that people have a hair-trigger tendency to interpret others as saying something dastardly, but, like the rain, it is too large and diffuse a phenomenon to actually do anything about.
Thinking in terms of virtue (or blame), and thinking in terms of fixing things, are very different. It's very tempting to sit down with your ingroup, and agree with them about the deplorability of the outgroup, who aren't even party to the conversation...as if that was achieving something. You can tell it is an attractor, because rational people are susceptible to it, too.
That observation runs headlong into the problem, rather than solving it.
Well, we don't know if they work magically, because we don't know that they work at all. They are just unavoidable.
It's not that philosophers weirdly and unreasonably prefer intuition to empirical facts and mathematical/logical reasoning, it is that they have reasoned that they can't do without them: that (the whole history of) empiricism and maths as foundations themselves rest on no further foundation except their intuitive appeal. That is the essence of the Inconvenient Ineradicability of Intuition. An unfounded foundation is what philosophers mean by "intuition". Philosophers talk about intuition a lot because that is where arguments and trains of thought ground out...it is a way of cutting to the chase. Most arguers and arguments are able to work out the consequences of basic intuitions correctly, so disagreements are likely to arise from differences in basic intuitions themselves.
Philosophers therefore appeal to intuitions because they can't see how to avoid them...whatever a line of thought grounds out in is definitionally an intuition. It is not a case of using intuitions when there are better alternatives, epistemologically speaking. And the critics of their use of intuitions tend to be people who haven't seen the problem of unfounded foundations because they have never thought deeply enough, not people who have solved the problem of finding sub-foundations for your foundational assumptions.
Scientists are typically taught that the basic principles of maths, logic and empiricism are their foundations, and take that uncritically, without digging deeper. Empiricism is presented as a black box that produces the goods...somehow. Their subculture encourages use of basic principles to move forward, not a turn backwards to critically reflect on the validity of basic principles. That does not mean the foundational principles are not "there". Considering the foundational principles of science is a major part of philosophy of science, and philosophy of science is a philosophy-like enterprise, not a science-like enterprise, in the sense that it consists of problems that have been open for a long time, and which do not have straightforward empirical solutions.
Does the use of empiricism shortcut the need for intuitions, in the sense of unfounded foundations?
For one thing, epistemology in general needs foundational assumptions as much as anything else. Which is to say that epistemology needs epistemology as much as anything else -- to judge the validity of one system of epistemology, you need another one. There is no way of judging an epistemology starting from zero, from a complete blank. Since epistemology is inescapable, and since every epistemology has its basic assumptions, there are basic assumptions involved in empiricism.
Empiricism specifically has the problem of needing an ontological foundation. Philosophy illustrates this point with sceptical scenarios about how you are being systematically deceived by an evil genie. Scientific thinkers have closely parallel scenarios in which humans cannot be sure whether they are in the Matrix or some other virtual reality. Either way, these hypotheses illustrate the point that the empiricists are running on an assumption that if you can see something, it is there.
Many-worlds-flavored QM, on the other hand, is the conjunction of 1 and 2, plus the negation of 5
Plus 6: There is a preferred basis.
First, it's important to keep in mind that if MWI is "untestable" relative to non-MWI, then non-MWI is also "untestable" relative to MWI. To use this as an argument against MWI,
I think it's being used as an argument against beliefs paying rent.
MWI is testable insofar as QM itself is testable.
Since there is more than one interpretation of QM, empirically testing QM does not prove any one interpretation over the others. Whatever extra arguments are used to support a particular interpretation over the others are not going to be, and have not been, empirical.
But, importantly, collapse interpretations generally are empirically distinguishable from non-collapse interpretations.
No, they are not, because of the meaning of the word "interpretation"; but collapse theories, such as GRW, might be.
This is why there's a lot of emphasis on hard-to-test ("philosophical") questions in the Sequences, even though people are notorious for getting those wrong more often than scientific questions -- because sometimes [..] the answer matters a lot for our decision-making,
Which is one of the ways in which beliefs that don't pay rent do pay rent.
I am not familiar with Peterson specifically, but I recognise the underpinning in terms of Jung, monomyth theory, and so on.
a state is good when it engages our moral sensibilities
Individually, or collectively?
We don't encode locks, but we do encode morality.
Individually or collectively?
Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state. I don't need to simulate anything else to see it
The goodness-to-you or the objective goodness?
If you are going to say that morality "is" human value, you are faced with the fact that humans vary in their values...the fact that creates the suspicion of relativism.
This, I suppose, is why some people think that Eliezer's metaethics is just warmed-over relativism, despite his protestations.
It's not clearly relativism and it's not clearly not-relativism. Those of us who are confused by it are confused because we expect a metaethical theory to say something on the subject.
The opposite of Relative is Absolute or Objective. It isn't Intrinsic. You seem to be talking about something orthogonal to the absolute-relative axis.
No, we're in a world where tourists generally don't mind going slowly and enjoying the view. These things would be pretty big on the outside, at least RV size, but they wouldn't be RVs. They wouldn't usually have kitchens and their showers would have to be way nicer than typical RV showers.
And they could relocate overnight. That raises the possibility of self-driving sleeper cars for business travellers who need to be somewhere by morning.
That amounts to "I can make my theory work if I keep on adding epicycles".
I can think of two possibilities:
[1] that morality is based on rational thought as expressed through language
[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition.
[3] Some mixture. Morality doesn't have to be one thing, or achieved in one way. In particular, novel technologies and social situations provoke novel moral quandaries that intuition is not well equipped to handle, and where people debate such things, they tend to use a broadly rationalist style, trying to find common principles, noting undesirable consequences.
Seconded.
That assumes he had nothing to learn from college, and the only function it could have provided is signalling and social credibility.
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I'm confident much can be said, even if I can't explain exactly how that would work algorithmically.
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don't feel pain?
but I don't necessarily understand what it would mean for a different kind of mind.
I've already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.
Consider a scenario where two people are discussing something of dubious detectability.
Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, tachyon detector, etc.
Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist's lab?
I am questioning the implicit premise that some kinds of emergent things are "reductively understandable in terms of the parts and their interactions".
It's not so much some emergent things, for a uniform definition of "emergent", as all things that come under a variant definition of "emergent".
I think humans have a basic problem with getting any grasp at all on the idea of things being made of other things, and therefore you have arguments like those of Parmenides, Zeno, etc., which are basically a mirror of modern arguments about reductionism
Not really, they are about what we would now call mereology. But as I noted, the two tend to get conflated here.
I would illustrate this with Viliam's example of the distance between two oranges. I do not see how the oranges explain the fact that they have a distance between them, at all.
Reductionism is about preserving and operating within a physicalist world view, and physicalism is comfortable with spatial relations and causal interactions as being basic elements of reality. Careful reductionists say "reducible to its parts, their structure, and their interactions".
There are positions between those. Medium-strength emergentism would have it that some systems are conscious, that consciousness is not a property of their parts, and that it is not reductively understandable in terms of the parts and their interactions, but that it is by no means inevitable.
Reduction has its problems too. Many writings on LW confuse the claim that things are understandable in terms of their parts with the claim that they are merely made of parts.
Eg:-
(1) The explanatory power of a model is a function of its ingredients. (2) Reductionism includes all the ingredients that actually exist in the real world. Therefore (3) Emergentists must be treating the “emergent properties” as extra ingredients, thereby confusing the “map” with the “territory”. So Reductionism is defined by EY and others as not treating emergent properties as extra ingredients (in effect).
I asked you before to propose a meaningless statement of your own.
And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like "colourless green", or a category error, like "sleeping idea".
So, what you're saying, is that you don't know if "ghost unicorns" exist? Why would Occam's razor not apply here? How would you evaluate the likelihood that they exist?
Very low and finite, rather than infinitesimal or zero.
I don't see how this is helping. You have a chain of reasoning that starts with your not knowing something, how to detect robot pain, and ends with your knowing something: that robots don't feel pain. I don't see how that can be valid.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
That is a start, but we can't gather data from entities that cannot speak, and we don't know how to arrive at general rules that apply across different classes of conscious entity.
They only need to know about robot pain if "robot pain" is a phrase that describes something.
As I have previously pointed out, you cannot assume meaninglessness as a default.
morality, which has many of the same problems as consciousness, and is even less defensible.
Morality or objective morality? They are different.
Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.
Those sound like fixable problems.
He's naive enough to reinvent LP. And since when was "coherent, therefore true" a precept of his epistemology?
You should do good things and not do bad things
You know that is not universally followed?
Not saying your epistemology can do things it can't do.
Motte: We can prove things about reality.
Bailey: We can predict observations.
Why would one care about correspondence to other maps?
It's worse than that, and they're not widely enough known.
Usually, predictive accuracy is used as a proxy for correspondence to reality, because one cannot check map-territory correspondence by standing outside the map-territory relationship and observing (in)congruence directly.
If "like" refers to similarity of some experiences, a physicalist model is fine for explaining that
We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
If it refers to something else, then I'll need you to paraphrase.
If you want to know what "pain" means, sit on a thumbtack.
You can say "torture is wrong", but that has no implications about the physical world
That is completely irrelevant. Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.
That's just another word for the same thing? What does one do operationally?
I can also use "ftoy ljhbxd drgfjh"
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
If you have no arguments, then don't respond.
The implicit argument is that meaning/communication is not restricted to literal truth.
Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible unicorn detector is invented tomorrow?
What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any possible way, in relation to which I agreed that one could use an armchair argument like Occam's razor against their existence. Now you imply that they possibly could be detected, in which case I withdraw my original claim, because if something could be detected, then armchair arguments are not appropriate.
My take is that the LP is the official doctrine, and the MWI is an unwitting exception.
Everyone builds their own maps and yes, they can be usefully ranked by how well do they match the territory.
How do you detect that?
In what way is "there is an invisible/undetectable unicorn in your room" not "useless for communication"?
Well, you used it.
I can give you a robot pain detector today. It only works on robots though. The detector always says "no". The point is that you have no arguments why this detector is bad.
It's bad because there's nothing inside the box. It's just an apriori argument.
That's harder to do when you have an explicit understanding.
Yes, that's one of the prime examples.
Do you think anyone can understand anything? (And are simplifications lies?)
Yes, I said it's not a fact, and I don't want to talk about morality because it's a huge tangent. Do you feel that morality is relevant to our general discussion?
Yes: it's relevant because "torturing robots is wrong" is a test case of whether your definitions are solving the problem or changing the subject.
and also don't want to talk about consciousness.
What?
You keep saying it's a broken concept.
A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
What facts am I failing to explain?
That anything should feel like anything.
Proper as in proper Scotsman?
Proper as not circular.
Circular as in
"Everything is made of matter. matter is what everything is made of." ?
I also claim that meta-rationalists claim to be at level 3, while they are not.
Can you support that? I rather suspect you are confusing new in the historic sense with new-to-rationalists. Bay area rationalism claims to be new, but is in many respects a rehash of old ideas like logical positivism. Likewise, meta rationalism is old, historically.
I haven't seen any proof that meta-rationalists have offered god as a simplifying hypothesis of some unexplained phenomenon that wasn't trivial.
There's a large literature on that sort of subject. Meta rationality is not something Chapman invented a few years ago.
But the entire raison d'etre of mathematics is that everything is reducible to trivial, it just takes hundreds of pages more.
You still have relative inscrutability, because advanced maths isn't scrutable to everybody.
but claiming that something is inherently mysterious...
Nobody said that.
Obviously, anything can be of ethical concern, if you really want it to be.
Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.
"pain is of ethical concern because you don't like it" is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
You seem to be hinting that the only problem is going against preferences. That theory is contentious.
is "the concept of preference is simpler than the concept of consciousness"
The simplest theory is that nothing exists. A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
"consciousness is generally not necessary to explain morality", which is more of an opinion.
That is not a fact, and you have done nothing to argue it, saying instead that you don't want to talk about morality and also don't want to talk about consciousness.
Of course, now I'll say that I need "sensation" defined.
Of course, I'll need "defined" defined. Do you see how silly this is? You are happy to use 99% of the words in English, and you only complain about the ones that don't fit your apriori ontology. It's a form of question-begging.
. That's because I have never considered "Is X a concept" to be an interesting question.
You used the word; surely you meant something by it.
At that point proper definitions become necessary.
Proper as in proper Scotsman?
What is stopping me from assigning them truth values?
The fact that you can't understand them.
You may prefer "for meaningless statements there are no arguments in favor or against them", but for statements "X exists", Occam's razor is often a good counter-argument.
If you can't understand a statement as asserting the existence of something, it isn't meaningless by my definition. What I have asserted makes sense with my definitions. If you are interpreting in terms of your own definitions....don't.
I want you to decide whether "there is an invisible/undetectable unicorn in your room" is meaningless or false.
I think it is false by Occam's razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam's razor or anything else to it.
This started when you said that "robots don't feel pain" does not follow from "we have no arguments suggesting that maybe 'robot pain' could be something measurable". I'm trying to understand why not
Because it needs premises along the lines of "what is not measurable is meaningless" and "what is meaningless is false", but you have not been able to argue for either (except by gerrymandered definitions).
Does "invisible unicorns do not exist" not follow from "invisible unicorns cannot be detected in any way?"
There's an important difference between stipulating something to be undetectable ... in any way, forever ... and having contingent evidence that we cannot detect something at time T. What happens if a robot pain detector is invented tomorrow? Then you would have done the thing people are always accusing philosophers of doing: you would have an armchair argument, based on wordplay, that is "true" in some way that has nothing to do with reality.
Your example, in my opinion, disproves your point. Einstein did not simply notice them: he constructed a coherent explanation that accounted for both the old model and the discrepancies.
I wasn't making a point about meta-rationality versus rationality, I was making a point about noticing-and-putting-on-a-shelf versus noticing-and-taking-seriously. Every Christian has noticed the problem of evil...in the first sense.
the more people have independent access to the phenomenon, the more confidence I would give to its existence.
You need to distinguish between phenomena (observations, experiences) and explanations. Even something as scientifically respectable as Tegmark's multiverse, or MWI, isn't supposed to be supported by some unique observation; they are supposed to be better explanations, in terms of simplicity, generality, consilience, and so on, of the same data. MWI has to give the same predictions as CI.
If it's only one person and said person cannot communicate it nor behaves any differently... well I would equate its existence to that of the invisible and intangible dragon.
You also need to distinguish between belief and understanding. Any kind of fundamentally different, new or advanced understanding has to be not completely communicable and comprehensible to the N-1 level, otherwise it would not be fundamentally new. It is somewhere between pointless and impossible to believe in advanced understanding on the basis of faith. Sweepingly rejecting the possibility of advanced understanding proves too much, because PhD maths is advanced understanding compared to high school maths, and so on.
You are not being invited to have a faith-like belief in things that are undetectable and incomprehensible to anybody, you are being invited to widen your understanding so that you can see for yourself.
Points 1 and 2 are critiques of the rationalist community that have been around since the inception of LW (as witnessed by the straw Vulcan / hot iron approaching metaphors), so I question that they usefully distinguish meta- from plain rationalists.
Maybe the distinction is in noticing it enough and doing something about it. It is very common to say "yeah, that's a problem, let's put it in a box to be dealt with later" and then forget about it.
Lots of people noticed the Newton/Maxwell disparities in the 1900s, but Einstein noticed them enough.
"The "controversy" was quite old in 1905. Maxwell's equations were around since 1862 and Lorentz transformations had been discussed at least since 1887. You are absolutely correct that Einstein had all the pieces in his hand. What was missing, and what he supplied, was an authoritative verdict over the correct form of classical mechanics. Special relativity is therefore less of a discovery than it is a capping stone explanation put on the facts that were on the table for everyone to see."
Point 3 is more helpful in this regard, but then if anyone made that claim I would ask what differences such a behavior implies... I find it very hard to believe in something that is both inscrutable and unnoticeable.
Inscrutable and unnoticeable to whom?
No, pain is of ethical concern because you don't like it. You don't have to involve consciousness here.
Is that a fact or an opinion?
What is it exactly?
"highly unpleasant physical sensation caused by illness or injury."
Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.
have you got an exact definition of "concept"?
Requiring extreme precision in all things tends to bite you.
Can you define "meaningless" for me, as you understand it?
Useless for communication.
Meaningless statements cannot have truth values assigned to them. (But not all statements without truth values are meaningless.)
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I'm talking about are not just undetectable by light, they're also undetectable by all other methods
Where is this going? You can't stipulate that robot pain is forever immeasurable without begging the question. It is not analogous to your invisible unicorns.
But this is not how it works. For certain definitions, meta-X is still a subset of X;
And for others, it isn't.
If a statement is true for all algorithms, it is also true for the "algorithm that tries several algorithms";
Theoretically, but there is no such algorithm.
Similarly, saying: "I don't have an epistemology; instead I have several epistemologies, and I use different ones in different situations" is a kind of epistemology.
But it's not a single algorithmic epistemology.
Also, some important details are swept under the rug, for example: How do you choose which epistemology is appropriate for which situation?
How do you do anything for which there isn't an algorithm? You use experience, intuition, and other system 1 stuff.
This is such a cheap trick
It isn't in all cases. There is a genuine problem in telling whether a claim of radically superior knowledge is genuine. You can't round them all off to fraud.
I am, at times, talking about alternative definition
Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.
Meaninglessness is not the default.
Well, it should be
That can't possibly work, as entirelyuseless has explained.
Sure, in a similar way that people discussing god or homeopathy bothers me.
God and homeopathy are meaningful, which is why people are able to mount arguments against them,
in your case the definition Z does not exist, so making up a new one is the next best thing.
The ordinary definition for pain clearly does exist, if that is what you mean.
Yes, that's because your language is broken.
Prove it.
but you have brought in a bunch of different issues without explaining how they interrelate
Which issues exactly?
Meaningfulness, existence, etc.
Is this still about how you're uncomfortable saying that invisible unicorns don't exist?
Huh? It's perfectly good as a standalone statement; it's just that it doesn't have much to do with meaning or measurability.
Does "'robot pain' is meaningless" follow from the [we have no arguments suggesting that maybe 'robot pain' could be something measurable, unless we redefine pain to mean something a lot more specific] better?
Not really, because you haven't explained why meaning should depend on measurability.
So I assumed you understood that immeasurability is relevant here
I might be able to follow an argument based on immeasurability alone, but you have brought in a bunch of different issues without explaining how they interrelate.
Expressed in plain terms "robots do not feel pain" does not follow from "we do not know how to measure robot pain".
No, but it follows from "we have no arguments suggesting that maybe 'robot pain' could be something measurable, unless we redefine pain to mean something a lot more specific".
No, still not from that.
You can make any sentence come out true or false by juggling definitions...which is why people distrust argument by definition.
You are taking the inscrutability thing a bit too strongly. Someone can transition from rationality to meta rationality, just as someone can transition from pre-rationality to rationality. Pre-rationality has limitations because yelling tribal slogans at people who aren't in your tribe doesn't work.
What does meta-rationality even imply, for the real world?
What does rationality imply? You can't actually run Solomonoff Induction, so FAPP you are stuck with a messy plurality of approximations. And if you notice that problem...
It's usually the case that the rank and file are a lot worse than the leaders.