Comments

Comment by TheAncientGeek on Does the Higgs-boson exist? · 2019-05-27T15:09:09.321Z · LW · GW

Saying that some things are right and others wrong is pretty standard round here. I don't think I'm breaking any rules. And I don't think you avoid making plonking statements yourself.

Comment by TheAncientGeek on Privacy · 2019-03-17T10:32:49.541Z · LW · GW

We do have some laws that are explicit about scale: speed limits and blood alcohol levels, for instance. However, not everything is easily quantified. Money changing hands can be a proxy for something reaching too large a scale.

Comment by TheAncientGeek on An optimistic explanation of the outrage epidemic · 2018-07-15T18:55:07.084Z · LW · GW

Possibly related:

The other day it was raining heavily. I chose to take an umbrella rather than shaking my fist at the sky. Shaking your fist at the sky seems pretty stupid, but people do analogous things all the time.

Complaining to your ingroup about your outgroup isn't going to change your outgroup. Complaining that you are misunderstood isn't going to make you understood. Changing the way you communicate might. You are not in control of how people interpret you, but you are in control of what you say.

It might be unfortunate that people have a hair-trigger tendency to interpret others as saying something dastardly, but, like the rain, it is too large and diffuse a phenomenon to actually do anything about.

Thinking in terms of virtue (or blame), and thinking in terms of fixing things, are very different. It's very tempting to sit down with your ingroup and agree with them about the deplorability of the outgroup, who aren't even party to the conversation...as if that was achieving something. You can tell it is an attractor, because rational people are susceptible to it, too.

Comment by TheAncientGeek on Reductionism · 2017-11-10T10:31:22.548Z · LW · GW

That observation runs headlong into the problem, rather than solving it.

Comment by TheAncientGeek on Your intuitions are not magic · 2017-10-03T11:37:19.564Z · LW · GW

Well, we don't know if they work magically, because we don't know that they work at all. They are just unavoidable.

It's not that philosophers weirdly and unreasonably prefer intuition to empirical facts and mathematical/logical reasoning; it is that they have reasoned that they can't do without it: that (the whole history of) empiricism and maths as foundations themselves rest on no further foundation except their intuitive appeal. That is the essence of the Inconvenient Ineradicability of Intuition. An unfounded foundation is what philosophers mean by "intuition". Philosophers talk about intuition a lot because that is where arguments and trains of thought ground out...it is a way of cutting to the chase. Most arguers and arguments are able to work out the consequences of basic intuitions correctly, so disagreements are likely to arise from differences in the basic intuitions themselves.

Philosophers therefore appeal to intuitions because they can't see how to avoid them...whatever a line of thought grounds out in is, by definition, an intuition. It is not a case of using intuitions when there are better alternatives, epistemologically speaking. And the critics of their use of intuitions tend to be people who haven't seen the problem of unfounded foundations because they have never thought deeply enough, not people who have solved the problem of finding sub-foundations for their foundational assumptions.

Scientists are typically taught that the basic principles of maths, logic and empiricism are their foundations, and they take that uncritically, without digging deeper. Empiricism is presented as a black box that produces the goods...somehow. Their subculture encourages the use of basic principles to move forward, not a turn backwards to critically reflect on the validity of those basic principles. That does not mean the foundational principles are not "there". Considering the foundational principles of science is a major part of philosophy of science, and philosophy of science is a philosophy-like enterprise, not a science-like enterprise, in the sense that it consists of problems that have been open for a long time and do not have straightforward empirical solutions.

Does the use of empiricism shortcut the need for intuitions, in the sense of unfounded foundations?

For one thing, epistemology in general needs foundational assumptions as much as anything else. Which is to say that epistemology needs epistemology as much as anything else: to judge the validity of one system of epistemology, you need another one. There is no way of judging an epistemology starting from zero, from a complete blank. Since epistemology is inescapable, and since every epistemology has its basic assumptions, there are basic assumptions involved in empiricism.

Empiricism specifically has the problem of needing an ontological foundation. Philosophy illustrates this point with sceptical scenarios in which you are being systematically deceived by an evil genie. Scientific thinkers have closely parallel scenarios in which you cannot be sure that you are not in the Matrix or some other virtual reality. Either way, these hypotheses illustrate the point that empiricists are running on an assumption that if you can see something, it is there.

Comment by TheAncientGeek on Leaving LessWrong for a more rational life · 2017-09-15T14:11:02.023Z · LW · GW

Many-worlds-flavored QM, on the other hand, is the conjunction of 1 and 2, plus the negation of 5

Plus 6: There is a preferred basis.

Comment by TheAncientGeek on Leaving LessWrong for a more rational life · 2017-09-15T14:04:14.124Z · LW · GW

First, it's important to keep in mind that if MWI is "untestable" relative to non-MWI, then non-MWI is also "untestable" relative to MWI. To use this as an argument against MWI,

I think it's being used as an argument against beliefs paying rent.

MWI is testable insofar as QM itself is testable.

Since there is more than one interpretation of QM, empirically testing QM does not prove any one interpretation over the others. Whatever extra arguments are used to support a particular interpretation over the others are not going to be, and have not been, empirical.

But, importantly, collapse interpretations generally are empirically distinguishable from non-collapse interpretations.

No, they are not, because of the meaning of the word "interpretation"; but collapse theories, such as GRW, might be.

Comment by TheAncientGeek on Leaving LessWrong for a more rational life · 2017-09-15T13:48:34.205Z · LW · GW

This is why there's a lot of emphasis on hard-to-test ("philosophical") questions in the Sequences, even though people are notorious for getting those wrong more often than scientific questions -- because sometimes [..] the answer matters a lot for our decision-making,

Which is one of the ways in which beliefs that don't pay rent do pay rent.

Comment by TheAncientGeek on Intrinsic properties and Eliezer's metaethics · 2017-09-15T13:33:26.648Z · LW · GW

I am not familiar with Peterson specifically, but I recognise the underpinning in terms of Jung, monomyth theory, and so on.

Comment by TheAncientGeek on Intrinsic properties and Eliezer's metaethics · 2017-09-15T13:24:57.302Z · LW · GW

a state is good when it engages our moral sensibilities

Individually, or collectively?

We don't encode locks, but we do encode morality.

Individually or collectively?

Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state. I don't need to simulate anything else to see it

The goodness-to-you or the objective goodness?

If you are going to say that morality "is" human value, you are faced with the fact that humans vary in their values...the fact that creates the suspicion of relativism.

This, I suppose, is why some people think that Eliezer's metaethics is just warmed-over relativism, despite his protestations.

It's not clearly relativism and it's not clearly not-relativism. Those of us who are confused by it are confused because we expect a metaethical theory to say something on the subject.

The opposite of Relative is Absolute or Objective. It isn't Intrinsic. You seem to be talking about something orthogonal to the absolute-relative axis.

Comment by TheAncientGeek on New business opportunities due to self-driving cars · 2017-09-13T12:12:26.392Z · LW · GW

No, we're in a world where tourists generally don't mind going slowly and enjoying the view. These things would be pretty big on the outside, at least RV size, but they wouldn't be RVs. They wouldn't usually have kitchens and their showers would have to be way nicer than typical RV showers.

And they could relocate overnight. That raises the possibility of self-driving sleeper cars for business travellers who need to be somewhere by morning.

Comment by TheAncientGeek on Intrinsic properties and Eliezer's metaethics · 2017-09-05T17:39:50.389Z · LW · GW

That amounts to "I can make my theory work if I keep on adding epicycles".

Comment by TheAncientGeek on Intrinsic properties and Eliezer's metaethics · 2017-09-05T13:13:06.303Z · LW · GW

I can think of two possibilities:

[1] that morality is based on rational thought as expressed through language

[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition.

[3] Some mixture. Morality doesn't have to be one thing, or achieved in one way. In particular, novel technologies and social situations provoke novel moral quandaries that intuition is not well equipped to handle, and where people debate such things, they tend to use a broadly rationalist style, trying to find common principles and noting undesirable consequences.

Comment by TheAncientGeek on On the importance of Less Wrong, or another single conversational locus · 2017-08-30T11:54:48.417Z · LW · GW

Seconded.

Comment by TheAncientGeek on The Reality of Emergence · 2017-08-24T12:53:22.906Z · LW · GW

That assumes he had nothing to learn from college, and the only function it could have provided is signalling and social credibility.

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-24T11:56:19.275Z · LW · GW

If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I'm confident much can be said, even if I can't explain the algorithm for how exactly that would work.

It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don't feel pain?

but I don't necessarily understand what it would mean for a different kind of mind.

I've already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.

Consider a scenario where two people are discussing something of dubious detectability.

Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, tachyon detector, etc.

Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist's lab?

Comment by TheAncientGeek on The Reality of Emergence · 2017-08-23T14:19:45.100Z · LW · GW

I am questioning the implicit premise that some kinds of emergent things are "reductively understandable in terms of the parts and their interactions".

It's not so much some emergent things, for a uniform definition of "emergent", as all things that come under a variant definition of "emergent".

I think humans have a basic problem with getting any grasp at all on the idea of things being made of other things, and therefore you have arguments like those of Parmenides, Zeno, etc., which are basically a mirror of modern arguments about reductionism

Not really; they are about what we would now call mereology. But as I noted, the two tend to get conflated here.

I would illustrate this with Viliam's example of the distance between two oranges. I do not see how the oranges explain the fact that they have a distance between them, at all.

Reductionism is about preserving and operating within a physicalist world view, and physicalism is comfortable with spatial relations and causal interactions as basic elements of reality. Careful reductionists say "reducible to its parts, their structure, and their interactions".

Comment by TheAncientGeek on The Reality of Emergence · 2017-08-23T11:28:14.662Z · LW · GW

There are positions between those. Medium-strength emergentism would have it that some systems are conscious, that consciousness is not a property of their parts, and that it is not reductively understandable in terms of the parts and their interactions, but that it is by no means inevitable.

Reduction has its problems too. Many writings on LW confuse the claim that things are understandable in terms of their parts with the claim that they are merely made of parts.

Eg:-

(1) The explanatory power of a model is a function of its ingredients. (2) Reductionism includes all the ingredients that actually exist in the real world. Therefore (3) Emergentists must be treating the “emergent properties” as extra ingredients, thereby confusing the “map” with the “territory”. So Reductionism is defined by EY and others as not treating emergent properties as extra ingredients (in effect).

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-23T11:17:29.700Z · LW · GW

I asked you before to propose a meaningless statement of your own.

And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like "colourless green", or a category error, like "sleeping idea".

So, what you're saying, is that you don't know if "ghost unicorns" exist? Why would Occam's razor not apply here? How would you evaluate the likelihood that they exist?

Very low and finite, rather than infinitesimal or zero.

I don't see how this is helping. You have a chain of reasoning that starts with your not knowing something (how to detect robot pain) and ends with your knowing something: that robots don't feel pain. I don't see how that can be valid.

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-18T14:26:48.456Z · LW · GW

We can derive that model by looking at brain states and asking the brains which states are similar to which.

That is a start, but we can't gather data from entities that cannot speak, and we don't know how to arrive at general rules that apply across different classes of conscious entity.

They only need to know about robot pain if "robot pain" is a phrase that describes something.

As I have previously pointed out, you cannot assume meaninglessness as a default.

morality, which has many of the same problems as consciousness, and is even less defensible.

Morality or objective morality? They are different.

Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.

Comment by TheAncientGeek on No Safe Defense, Not Even Science · 2017-08-18T12:48:40.975Z · LW · GW

Those sound like fixable problems.

Comment by TheAncientGeek on Inscrutable Ideas · 2017-08-17T18:04:05.233Z · LW · GW

He's naive enough to reinvent LP. And since when was "coherent, therefore true" a precept of his epistemology?

Comment by TheAncientGeek on Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth · 2017-08-17T17:59:11.692Z · LW · GW

You should do good things and not do bad things

You know that is not universally followed?

Comment by TheAncientGeek on Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth · 2017-08-17T16:14:50.702Z · LW · GW

Not saying your epistemology can do things it can't do.

Motte: We can prove things about reality.

Bailey: We can predict observations.

Comment by TheAncientGeek on Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth · 2017-08-17T09:59:43.965Z · LW · GW

Why would one care about correspondence to other maps?

Comment by TheAncientGeek on Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth · 2017-08-16T18:46:40.192Z · LW · GW

It's worse than that, and they're not widely enough known.

Comment by TheAncientGeek on Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth · 2017-08-16T16:15:30.142Z · LW · GW

Usually, predictive accuracy is used as a proxy for correspondence to reality, because one cannot check map-territory correspondence by standing outside the map-territory relationship and observing (in)congruence directly.

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-16T15:20:21.428Z · LW · GW

If "like" refers to similarity of some experiences, a physicalist model is fine for explaining that

We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.

If it refers to something else, then I'll need you to paraphrase.

If you want to know what "pain" means, sit on a thumbtack.

You can say "torture is wrong", but that has no implications about the physical world

That is completely irrelevant. Even if it is an irrational personal peccadillo of someone's to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.

Comment by TheAncientGeek on Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth · 2017-08-16T15:08:05.463Z · LW · GW

That's just another word for the same thing? What does one do operationally?

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-16T15:06:47.708Z · LW · GW

I can also use "ftoy ljhbxd drgfjh"

But you could not have used it to make a point about links between meaning, detectability, and falsehood.

If you have no arguments, then don't respond.

The implicit argument is that meaning/communication is not restricted to literal truth.

Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible unicorn detector is invented tomorrow?

What would happen is that you are changing the hypothesis. Originally, you stipulated invisible unicorns as undetectable in any possible way, in relation to which I agreed that one could use an armchair argument like Occam's razor against their existence. Now you imply that they possibly could be detected, in which case I withdraw my original claim, because if something could be detected, then armchair arguments are not appropriate.

Comment by TheAncientGeek on Inscrutable Ideas · 2017-08-16T10:25:54.606Z · LW · GW

My take is that the LP is the official doctrine, and the MWI is an unwitting exception.

Comment by TheAncientGeek on Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth · 2017-08-16T07:01:12.378Z · LW · GW

Everyone builds their own maps and yes, they can be usefully ranked by how well do they match the territory.

How do you detect that?

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-15T14:28:36.337Z · LW · GW

In what way is "there is an invisible/undetectable unicorn in your room" not "useless for communication"?

Well, you used it.

I can give you a robot pain detector today. It only works on robots though. The detector always says "no". The point is that you have no arguments why this detector is bad.

It's bad because there's nothing inside the box. It's just an a priori argument.

Comment by TheAncientGeek on Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth · 2017-08-15T13:47:15.061Z · LW · GW

That's harder to do when you have an explicit understanding.

Comment by TheAncientGeek on Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth · 2017-08-15T13:46:37.921Z · LW · GW

Yes, that's one of the prime examples.

Comment by TheAncientGeek on Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth · 2017-08-15T13:45:50.669Z · LW · GW

Do you think anyone can understand anything? (And are simplifications lies?)

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-15T13:32:29.074Z · LW · GW

Yes, I said it's not a fact, and I don't want to talk about morality because it's a huge tangent. Do you feel that morality is relevant to our general discussion?

Yes: it's relevant because "torturing robots is wrong" is a test case of whether your definitions are solving the problem or changing the subject.

and also don't want to talk about consciousness.

What?

You keep saying it's a broken concept.

A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.

What facts am I failing to explain?

That anything should feel like anything.

Proper as in proper Scotsman?

Proper as in not circular.

Circular as in "Everything is made of matter; matter is what everything is made of"?

Comment by TheAncientGeek on Inscrutable Ideas · 2017-08-12T12:21:53.817Z · LW · GW

I also claim that meta-rationalists claim to be at level 3, while they are not.

Can you support that? I rather suspect you are confusing new in the historical sense with new-to-rationalists. Bay Area rationalism claims to be new, but is in many respects a rehash of old ideas like logical positivism. Likewise, meta rationalism is old, historically.

I haven't seen any proof that meta-rationalists have offered god as a simplifying hypothesis of some unexplained phenomenon that wasn't trivial.

There's a large literature on that sort of subject. Meta rationality is not something Chapman invented a few years ago.

But the entire raison d'etre of mathematics is that everything is reducible to trivial, it just takes hundreds of pages more.

You still have relative inscrutability, because advanced maths isn't scrutable to everybody.

but claiming that something is inherently mysterious...

Nobody said that.

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-10T14:33:02.706Z · LW · GW

Obviously, anything can be of ethical concern, if you really want it to be

Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.

"pain is of ethical concern because you don't like it" is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.

You seem to be hinting that the only problem is going against preferences. That theory is contentious.

is "the concept of preference is simpler than the concept of consciousness"

The simplest theory is that nothing exists. A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.

"consciousness is generally not necessary to explain morality", which is more of an opinion.

That is not a fact, and you have done nothing to argue it, saying instead that you don't want to talk about morality and also don't want to talk about consciousness.

Of course, now I'll say that I need "sensation" defined.

Of course, I'll need "defined" defined. Do you see how silly this is? You are happy to use 99% of the words in English, and you only complain about the ones that don't fit your a priori ontology. It's a form of question-begging.

That's because I have never considered "Is X a concept" to be an interesting question.

You used the word; surely you meant something by it.

At that point proper definitions become necessary.

Proper as in proper Scotsman?

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-09T14:41:23.005Z · LW · GW

What is stopping me from assigning them truth values?

The fact that you can't understand them.

You may prefer "for meaningless statements there are no arguments in favor or against them", but for statements "X exists", Occam's razor is often a good counter-argument.

If you can understand a statement as asserting the existence of something, it isn't meaningless by my definition. What I have asserted makes sense with my definitions. If you are interpreting it in terms of your own definitions...don't.

I want you to decide whether "there is an invisible/undetectable unicorn in your room" is meaningless or false.

I think it is false by Occam's razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam's razor or anything else to it.

This started when you said that "robots don't feel pain" does not follow from "we have no arguments suggesting that maybe 'robot pain' could be something measurable". I'm trying to understand why not

Because it needs premises along the lines of "what is not measurable is meaningless" and "what is meaningless is false", but you have not been able to argue for either (except by gerrymandered definitions).

Does "invisible unicorns do not exist" not follow from "invisible unicorns cannot be detected in any way?"

There's an important difference between stipulating something to be undetectable ... in any way, forever ... and having contingent evidence that we cannot detect something at time T. What happens if a robot pain detector is invented tomorrow? Then you would have done the thing people are always accusing philosophers of doing: you would have an armchair argument, based on wordplay, that is "true" in some way that has nothing to do with reality.

Comment by TheAncientGeek on Inscrutable Ideas · 2017-08-09T08:02:14.727Z · LW · GW

Your example, in my opinion, disproves your point. Einstein did not simply notice them: he constructed a coherent explanation that accounted for both the old model and the discrepancies.

I wasn't making a point about meta-rationality versus rationality; I was making a point about noticing-and-putting-on-a-shelf versus noticing-and-taking-seriously. Every Christian has noticed the problem of evil...in the first sense.

the more people have independent access to the phenomenon, the more confidence I would give to its existence.

You need to distinguish between phenomena (observations, experiences) and explanations. Even something as scientifically respectable as Tegmark's multiverse, or MWI, isn't supposed to be supported by some unique observation; such theories are supposed to be better explanations, in terms of simplicity, generality, consilience, and so on, of the same data. MWI has to give the same predictions as CI.

If it's only one person and said person cannot communicate it nor behaves any differently... well I would equate its existence to that of the invisible and intangible dragon.

You also need to distinguish between belief and understanding. Any kind of fundamentally different, new or advanced understanding cannot be completely communicable and comprehensible at the N-1 level, otherwise it would not be fundamentally new. It is somewhere between pointless and impossible to believe in advanced understanding on the basis of faith. Sweepingly rejecting the possibility of advanced understanding proves too much, because PhD maths is advanced understanding compared to high school maths, and so on.

You are not being invited to have a faith-like belief in things that are undetectable and incomprehensible to anybody; you are being invited to widen your understanding so that you can see for yourself.

Comment by TheAncientGeek on Inscrutable Ideas · 2017-08-08T08:13:33.122Z · LW · GW

Points 1 and 2 are critiques of the rationalist community that have been around since the inception of LW (as witnessed by the straw Vulcan / hot iron approaching metaphors), so I question that they usefully distinguish meta- from plain rationalists

Maybe the distinction is in noticing it enough and doing something about it. It is very common to say "yeah, that's a problem, let's put it in a box to be dealt with later" and then forget about it.

Lots of people noticed the Newton/Maxwell disparities around 1900, but Einstein noticed them enough.

"The "controversy" was quite old in 1905. Maxwell's equations were around since 1862 and Lorentz transformations had been discussed at least since 1887. You are absolutely correct, that Einstein had all the pieces in his hand. What was missing, and what he supplied, was an authoritative verdict over the correct form of classical mechanics. Special relativity is therefor less of a discovery than it is a capping stone explanation put on the facts that were on the table for everyone to see. Einstein, however, saw them more clearly than others. –"

https://physics.stackexchange.com/questions/133366/what-problems-with-electromagnetism-led-einstein-to-the-special-theory-of-relati

Point 3 is more helpful in this regard, but then if anyone made that claim I would ask them to point to what differences such a behavior implies... I find it very hard to believe in something that is both inscrutable and unnoticeable.

Inscrutable and unnoticeable to whom?

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-07T16:04:04.866Z · LW · GW

No, pain is of ethical concern because you don't like it. You don't have to involve consciousness here.

Is that a fact or an opinion?

What is it exactly?

"highly unpleasant physical sensation caused by illness or injury."

Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.

Have you got an exact definition of "concept"?

Requiring extreme precision in all things tends to bite you.

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-07T14:47:44.274Z · LW · GW

Can you define "meaningless" for me, as you understand it?

  1. Useless for communication.

  2. Meaningless statements cannot have truth values assigned to them. (But not all statements without truth values are meaningless.)

So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I'm talking about are not just undetectable by light, they're also undetectable by all other methods

Where is this going? You can't stipulate that robot pain is forever immeasurable without begging the question. It is not analogous to your invisible unicorns.

Comment by TheAncientGeek on Ignorant, irrelevant, and inscrutable (rationalism critiques) · 2017-08-02T14:01:15.292Z · LW · GW

But this is not how it works. For certain definitions, meta-X is still a subset of X;

And for others, it isn't.

If a statement is true for all algorithms, it is also true for the "algorithm that tries several algorithms";

Theoretically, but there is no such algorithm.

Similarly, saying: "I don't have an epistemology; instead I have several epistemologies, and I use different ones in different situations" is a kind of epistemology.

But it's not a single algorithmic epistemology.

Also, some important details are swept under the rug, for example: How do you choose which epistemology is appropriate for which situation?

How do you do anything for which there isn't an algorithm? You use experience, intuition, and other System 1 stuff.

This is such a cheap trick

It isn't in all cases. There is a genuine problem in telling whether a claim of radically superior knowledge is genuine. You can't round them all off to fraud.

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-01T15:18:31.193Z · LW · GW

I am, at times, talking about alternative definition

Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.

Meaninglessness is not the default.

Well, it should be

That can't possibly work, as entirelyuseless has explained.

Sure, in a similar way that people discussing god or homeopathy bothers me.

God and homeopathy are meaningful, which is why people are able to mount arguments against them.

in your case the definition Z does not exist, so making up a new one is the next best thing.

The ordinary definition for pain clearly does exist, if that is what you mean.

Yes, that's because your language is broken.

Prove it.

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-08-01T15:04:36.315Z · LW · GW

but you have brought in a bunch of different issues without explaining how they interrelate. Which issues exactly?

Meaningfulness, existence, etc.

Is this still about how you're uncomfortable saying that invisible unicorns don't exist?

Huh? It's perfectly good as a standalone statement; it's just that it doesn't have much to do with meaning or measurability.

Does "'robot pain' is meaningless" follow from the [we have no arguments suggesting that maybe 'robot pain' could be something measurable, unless we redefine pain to mean something a lot more specific] better?

Not really, because you haven't explained why meaning should depend on measurability.

Comment by TheAncientGeek on Steelmanning the Chinese Room Argument · 2017-07-31T17:35:37.597Z · LW · GW

So I assumed you understood that immeasurability is relevant here

I might be able to follow an argument based on immeasurability alone, but you have brought in a bunch of different issues without explaining how they interrelate.

Expressed in plain terms "robots do not feel pain" does not follow from "we do not know how to measure robot pain".

No, but it follows from "we have no arguments suggesting that maybe 'robot pain' could be something measurable, unless we redefine pain to mean something a lot more specific".

No, still not from that.

You can make any sentence come out true or false by juggling definitions...which is why people distrust argument by definition.

Comment by TheAncientGeek on Ignorant, irrelevant, and inscrutable (rationalism critiques) · 2017-07-31T17:17:49.327Z · LW · GW

You are taking the inscrutability thing a bit too strongly. Someone can transition from rationality to meta rationality, just as someone can transition from pre-rationality to rationality. Pre-rationality has limitations because yelling tribal slogans at people who aren't in your tribe doesn't work.

What does meta-rationality even imply, for the real world?

What does rationality imply? You can't actually run Solomonoff Induction, so FAPP you are stuck with a messy plurality of approximations. And if you notice that problem...

Comment by TheAncientGeek on Ignorant, irrelevant, and inscrutable (rationalism critiques) · 2017-07-28T13:11:57.253Z · LW · GW

It's usually the case that the rank and file are a lot worse than the leaders.