It doesn't mean he doesn't really want to be a doctor
You're right. Instead it means that he doesn't have the willpower required to become a doctor. Presumably, this is something he didn't know before he started school.
There is nothing wrong with wanting to be something you are not. But you should also want to have accurate beliefs about yourself. And being the sort of person who prefers beer over charity doesn't make you a bad person. And I have no idea how you can change your true preferences, even if you want to.
I think the problem isn't that your actions are inconsistent with your beliefs, it's that you have some false beliefs about yourself. You may believe that "death is bad", "charity is good", and even "I want to be a person who would give to charity instead of buying a beer". But it does not follow that you believe "giving to charity is more important to me than buying a beer".
This explanation is more desirable, because if actions don't follow from beliefs, then you have to explain what they follow from instead.
It seems you are no longer ruling out a science of other minds
No, by "mind" I just mean any sort of information processing machine. I would have said "brain", but you used a more general "entity", so I went with "mind". The question of what is and isn't a mind is not very interesting to me.
I've already told you what it would mean
Where exactly?
Is the first half of the conversation meaningful and the second half meaningless?
First of all, the meaningfulness of words depends on the observer. "Robot pain" is perfectly meaningful to people with precise definitions of "pain". So, in the worst case, the "thing" remains meaningless to the people discussing it, and it remains meaningful to the scientist (because you can't make a detector if you don't already know what exactly you're trying to detect). We could then simply say that the people and the scientist are using the same word for different things.
It's also possible that the "thing" was meaningful to everyone to begin with. I don't know what "dubious detectability" is. My bar for meaningfulness isn't as high as you may think, though. "Robot pain" would have to fail very badly not to clear it.
The idea that, in models of physics, it can sometimes be hard to tell which features are detectable and which are just mathematical machinery is in general a good one. The problem is that it requires a good understanding of the model, which neither of us has. And I don't expect this sort of poking to cause problems that I couldn't patch, even in the worst case.
category error, like "sleeping idea"
Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that "bitter purple" (or something) was a category error, and your answer was very underwhelming.
I say that "sleeping idea" is meaningless, because I don't have a procedure for deciding if an idea is sleeping or not. However, we could easily agree on such procedures. For example we could say that only animals can sleep and for every idea, "is this idea sleeping" is answered with "no". It's just that I honestly don't have such a restriction. I use the exact same explanation for the meaninglessness of both "fgdghffgfc" and "robot pain".
a contradiction, like "colourless green"
The question "is green colorless" has a perfectly good answer ("no, green is green"), unless you don't think that colors can have colors (in that case it's a category error too). But I'm nitpicking.
starts with your not knowing something: how to detect robot pain
Here you treat detectability as just some random property of a thing. I'm saying that if you don't know how to detect a thing, even in theory, then you know nothing about that thing. And if you know nothing about a thing, then you can't possibly say that it exists.
My "unicorn ghost" example is flawed in that we know what the shape of a unicorn should be, and we could expect unicorn ghosts to have the same shape (even though I would argue against such expectations). So if you built a detector for some new particle, and it detected a unicorn-shaped obstacle, you could claim that you detected a ghost-unicorn, and then I'd have to make up an argument why this isn't the unicorn I was talking about. "Robot pain" has no such flaws - it is devoid of any traces of meaningfulness.
That is a start, but we can't gather data from entities that cannot speak
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything at all about such a mind is a challenge. Although I'm confident much can be said, even if I can't explain the exact algorithm for how that would work.
On the other hand, if the mind is so primitive that it cannot form the thought "X feels like Y", then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don't necessarily understand what it would mean for a different kind of mind.
and we don't know how to arrive at general rules that apply across different classes of conscious entity.
Are there classes of conscious entity?
Morality or objective morality? They are different.
You cut off the word "objective" from my sentence yourself. Yes, I mean "objective morality". If "morality" means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you're not talking about "objective morality", you can no longer be confident that those rules make any sense. You can't say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.
We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain.
They only need to know about robot pain if "robot pain" is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn't make it a real thing or an interesting philosophical question.
It's interesting that you didn't reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?
???
Now you imply that they possibly could be detected, in which case I withdraw my original claim
Yes, the unicorns don't have to be undetectable by definition. They're just undetectable by all methods that I'm aware of. If "invisible unicorns" have too much undetectability in the title, we can call them "ghost unicorns". But, of course, if you do detect some unicorns, I'll say that they aren't the unicorns I'm talking about and that you're just redefining this profound problem to suit you. Obviously this isn't a perfect analogue for your "robot pain", but I think it's alright.
So what you're saying is that you don't know whether "ghost unicorns" exist? Why would Occam's razor not apply here? How would you evaluate the likelihood that they exist?
I doubt that's a good thing. It hasn't been very productive so far.
Well, you used it.
I can also use"ftoy ljhbxd drgfjh". Is that not meaningless either? Seriously, if you have no arguments, then don't respond.
What happens if a robot pain detector is invented tomorrow?
Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You're supposed to be able to check that somehow.
You keep saying it's a broken concept.
Yes. I consider that "talking about consciousness". What else is there to say about it?
That anything should feel like anything,
If "like" refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I'll need you to paraphrase.
Circular as in
"Everything is made of matter. matter is what everything is made of." ?
Yes, if I had actually said that. By the way, matter exists in your universe too.
Yes: it's relevant because "torturing robots is wrong" is a test case of whether your definitions are solving the problem or changing the subject.
Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say "torture is wrong", but that has no implications about the physical world. What happens if I torture someone?
Sure, and if X really is the best approximation of Y that Bob can understand, then again Alice is not dishonest. Although I'm not sure what "approximation" means exactly.
But there is also a case where Alice tells Bob that "X is true", not because X is somehow close to Y, but because, supposedly, X and Y both imply some Z. This is again a very different case. I think this is just pure and simple lying. That is, the vast majority of lies ever told fall into this category (for example, Z could be "you shouldn't jail me", X could be "I didn't kill anyone" and Y could be "sure, I killed someone, but I promise I won't do it again").
In general, the problem is that you didn't give specific examples, so I don't really know what case you're referring to.
Case 1: Alice tells Bob that "X is true", Bob then interprets this as "Y is true"
Case 2: Alice tells Bob that "X is true", because Bob would be too stupid to understand it if she said "Y is true". Now Bob believes that "X is true".
These two cases are very different. You spend the first half of your post in case 1, and then suddenly jump to case 2 for the other half.
<...> then perhaps telling a lie in a way that you know will communicate a true concept is not a lie.
This is fair.
There are certain truths which literally cannot be spoken to some people.
But this is a completely different case. Lies told to stupid people are still lies, the stupid people don't understand the truth behind them, and you have communicated nothing. You could argue that those lies are somehow justified, but there is no parallel between lying to stupid people and things like "You're the best".
Well, I can imagine a post on SSC with 5 statements about the next week, where other users would reply with probabilities of each becoming true, and arguments for that. Then, after the week, you could count the scores and name the winners in the OP. It would probably get a positive reaction. Why not give it a try?
I'm not sure what the 5 statements should be though. I think it must be "next week" not "next year", because you can't enjoy a game if you've forgotten you're playing it. Also, for it to be a game, it has to be repeatable, but if you start predicting the most important events of the year, you'll run out very fast. On the other hand, weekly events tend to be unimportant random fluctuations. I think that's a big problem with the whole idea.
One possible solution could be to do experiments rather than predict natural events, e.g. "On day X I will try to do Y. Will it work?".
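To make the scoring half concrete, here's a minimal sketch (plain Python, with made-up numbers purely for illustration) of how the end-of-week winners could be counted, using the Brier rule, where lower is better:

    # Brier score: mean squared error between the stated probabilities
    # and the 0/1 outcomes of the 5 statements; lower is better.
    def brier(probabilities, outcomes):
        return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

    alice = [0.9, 0.2, 0.5, 0.7, 0.1]  # Alice's probabilities for the 5 statements
    bob   = [0.6, 0.4, 0.5, 0.5, 0.3]  # Bob's probabilities
    truth = [1, 0, 1, 1, 0]            # what actually happened after the week

    print(brier(alice, truth))  # 0.08  -> Alice wins this week
    print(brier(bob, truth))    # 0.182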
There are way too many "shoulds" in this post. If anyone can have fun predicting important events at all, then it would probably be people in this forum. Can we make something like this happen? Would we actually want to participate? I'm not sure that I do.
That is not a fact, and you have done nothing to argue it, saying instead that you don't want to talk about morality
Yes, I said it's not a fact, and I don't want to talk about morality because it's a huge tangent. Do you feel that morality is relevant to our general discussion?
and also don't want to talk about consciousness.
What?
A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
What facts am I failing to explain? That "pain hurts"? Give concrete examples.
I'll need "defined" defined
In this case, "definition" of a category is text that can be used to tell which objects belong to that category and which don't. No, I don't see how silly this is.
You are happy to use 99% of the words in English, and you only complain about the ones that don't fit your a priori ontology.
I only complain about the words when your definition is obviously different from mine. It's actually perfectly fine not to have a word well defined. It's only a problem if you then assume that the word identifies some natural category.
You used the word, surely you meant something by it.
Not really, in many cases it could be omitted or replaced and I just use it because it sounds appropriate. That's how language works. You first asked about definitions after I used the phrase "other poorly defined concepts". Here "concept" could mean "category".
Proper as in proper Scotsman?
Proper as not circular. I assume that, if you actually offered definitions, you'd define consciousness in terms of having experiences, and then define experiences in terms of being conscious.
It's obvious - we need buzzfeed to create a "which celebrities will get divorced this year" quiz (with prizes?). There is no way people will be interested in predicting next year's GDP.
A common mistake in modeling humans is to think that they are simple. Assuming that "human chose a goal X" implies "human will take actions that optimally reach X" would be silly. Likewise, assuming that humans can accurately observe their own internal state is silly. Humans have a series of flaws and limitations that obscure the simple abstractions of goal and belief. However, saying that goals and beliefs do not exist is a bit much. They are still useful in many cases and for many people.
By the way, it sounds a little like you're referring to some particular set of beliefs. I think naming them explicitly would add clarity.
What I have asserted makes sense with my definitions. If you are interpreting it in terms of your own definitions... don't.
I'm trying to understand your definitions and how they're different from mine.
I think it is false by Occam's razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam's razor or anything else to it.
I see that for you "meaningless" is a very narrow concept. But does that agree with your stated definition? In what way is "there is an invisible/undetectable unicorn in your room" not "useless for communication"?
Also, can you offer a concrete meaningless statement yourself? Preferably one in the form "X exists".
What happens if a robot pain detector is invented tomorrow?
I can give you a robot pain detector today. It only works on robots though. The detector always says "no". The point is that you have no arguments why this detector is bad. This is not normal. I think we need to talk about other currently immeasurable things. None of them work like this.
"Red giant" does not and cannot have precise boundaries
Again, you make a claim and then offer no arguments to support it. "Red giant" is a term defined quite recently by a fairly small group of people. It means what those people wanted it to mean, and its boundaries are as precise as those people wanted them to be.
we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn't.
You started the language discussion, but I have to explain why we're continuing it? I continue, because I suspect that the reasoning errors you're making about chairs are similar to the errors you're making about consciousness, and chairs are easier to talk about. But it's only a suspicion. Also, I continue, because you've made some ridiculous claims and I'm not going to ignore them.
you calling into question whether the reason I say I am conscious, is because I am actually conscious, does not make it actually questionable. It is not.
What the hell does "not questionable" mean?
Is that a fact or an opinion?
Well, you quoted two statements, so the question has multiple interpretations. Obviously, anything can be of ethical concern, if you really want it to be. Also, the opinion/fact separation is somewhat silly. Having said that:
"pain is of ethical concern because you don't like it" is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
"You don't have to involve consciousness here" - has two meanings:
one is "the concept of preference is simpler than the concept of consciousness", which I would like to call a fact, although there are some problems with preference too.
another is "consciousness is generally not necessary to explain morality", which is more of an opinion.
"highly unpleasant physical sensation caused by illness or injury."
Of course, now I'll say that I need "sensation" defined.
have you got an exact definition of "concept"?
Requiring extreme precision in all things tends to bite you.
I'd say it's one of the things brains do, along with feelings, memories, ideas, etc. I may be able to come up with a few suggestions for how to tell them apart, but I don't want to bother. That's because I have never considered "Is X a concept" to be an interesting question. And, frankly, I use the word "concept" arbitrarily.
It's you who thinks that "Can X feel pain" is an interesting question. At that point proper definitions become necessary. I don't think I'm being extreme at all.
- Useless for communication.
A bit too vague. Can I clarify that as "Useless for communication, because it transfers no information"? Even though that's a bit too strict.
- Meaningless statements cannot have truth values assigned to them.
What is stopping me from assigning them truth values? I'm sure you meant, "meaningless statements cannot be proven or disproven". But "proof" is a problematic concept. You may prefer "for meaningless statements there are no arguments in favor or against them", but for statements "X exists", Occam's razor is often a good counter-argument. Anyway, isn't (1.) enough?
Where is this going?
It's still entirely about meaning, measurability and existence. I want you to decide whether "there is an invisible/undetectable unicorn in your room" is meaningless or false.
This started when you said that "robots don't feel pain" does not follow from "we have no arguments suggesting that maybe 'robot pain' could be something measurable". I'm trying to understand why not and what it could follow from. Does "invisible unicorns do not exist" not follow from "invisible unicorns cannot be detected in any way?". Or maybe "invisible unicorns cannot be detected" does not follow from "we have no arguments suggesting that maybe 'invisible unicorns' could be something detectable"?
It only explains the "-less" suffix. It's fine as a dictionary definition, but that's obviously not what I asked for. I need you to explain "meaning" as well.
Google could easily add a module to Google Translate that would convert a statement into its opposite.
No, Google could maybe add "not" before every "conscious", in a grammatically correct way, but it is very far from figuring out what other beliefs need to be altered to make these claims consistent. When it can do that, it will be conscious in my book.
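For what it's worth, the half of the task that is trivial really is trivial; here's a toy sketch (hypothetical, plain Python) of the part that doesn't require a mind:

    # Mechanically insert "not" before every "conscious". This is the
    # easy, string-level half; revising all the other beliefs so the
    # negated claims stay consistent is the half that would take a mind.
    import re

    def negate_conscious(text):
        return re.sub(r"\b(conscious)\b", r"not \1", text)

    print(negate_conscious("I am conscious."))  # -> "I am not conscious."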
You identify yourself with the mute mind, and the process converts that into you saying that you identify with the converted mind.
What is "you" in this sentence? The mute mind identifies with the mute mind, and the translation process identifies with the translation process.
I say I am conscious precisely because I am conscious.
There are possible reasons for saying you are conscious, other than being conscious. A tape recorder can also say it is conscious. Saying something doesn't make it true.
You are correct that "I forgot", in the sense that I don't know exactly what you are referring to
Well, that explains a lot. It's not exactly ancient history, and everything is properly quoted, so you really should know what I'm talking about. Yes, it's about the identical table-chairs question from the IKEA discussion, the one that I linked to just a few posts above.
Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.
Why are there no determinate boundaries though? I'm saying that boundaries are unclear only if you haven't yet decided what they should be. But you seem to be saying that the boundaries inherently cannot be clear?
All categories are vague, because they are generated by a process similar to factor analysis
There is nothing vague about the results of factor analysis.
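A quick illustration (a synthetic example, assuming numpy and scikit-learn are available): the loadings that come out are exact, reproducible numbers; vagueness only enters if an analyst later picks a fuzzy cutoff for interpreting them.

    # Factor analysis on synthetic data: one latent factor behind five
    # observed variables. The fitted loadings are precise numbers.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))  # the hidden factor
    X = latent @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(200, 5))

    fa = FactorAnalysis(n_components=1).fit(X)
    print(fa.components_)  # exact loadings, identical on every run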
It is false that the meanings are arbitrary, for the reasons I have said.
On this topic, last we seemed to have agreed that "arbitrary" classification means "without reasons related to the properties of the objects classified". I don't recall you ever giving any such reasons.
It is also false that there is some "absolute and natural concept of a chair," and I have never suggested that there is.
For example, you have said '"are tables also chairs" has a definite answer'. Note the word "definite". You also keep insisting that there is factor analysis involved, which would also be an objective and natural way to assign objects to categories. By the way, "natural" is the opposite of "arbitrary".
All words are defined either by other words, or by pointing at things, and precise concepts cannot be formed by pointing at things.
Yeah, I recall saying something like that myself. And the rest of your claims don't go well with this one.
you are the one who needs the "language 101" stuff
Well, you decided that I need it, then made some wild and unsupported claims.
You have been confusing the idea "this statement has a meaning" with "this statement is testable."
Yes, the two statements are largely equivalent. Oddly, I don't recall you mentioning testability or measurability anywhere in this thread before (I think there was something in another thread though).
Likewise, you have been confusing "this statement is vague" with "this statement is not testable."
I don't think I've done that. It's unfortunate that after this you spent so much time trying to prove something I don't really disagree with. Why did you think that I'm confusing these things? Please quote.
Consider a line of stars. The one at the left end is a red giant. The one at the right end is a white dwarf. In between, the stars each differ from the previous one by a single atom. Then you have a question of vagueness. When exactly do we stop calling them white dwarfs and start calling them red giants? There cannot possibly be a precise answer. This has nothing to do with testability; we can test whatever we want. The problem is that the terminology is vague, and there is no precise answer because it is vague.
This is only as vague as you want it to be. If you want, you can cut the line, based on whatever reason, and call all the stars on one side "red giants" and stars on the other side "white dwarfs". It would be pointless, but there is nothing stopping you. You say "cannot possibly" and then give no reasons why.
I, however, have no problems with the vagueness here, because the two categories are only shorthands for some very specific properties of the stars (like mass). This is not true for "consciousness".
Nonetheless, this proves that testability is entirely separate from vagueness.
It's not a test if "no" is unobservable.
By acting like you actually want to understand what is being said
I think you already forgot how this particular part of the thread started. First I said that we had established that "X is false", then you disagreed, then I pointed out that I had asked "is X true?" and you had no direct answer. Here I'm only asking you for a black and white answer to this very specific question. I understood your original reply, but I honestly have no idea how it was supposed to answer my specific question. When people refuse to give direct answers to specific questions, I infer that they're conceding.
In other words, while recognizing that words are vague and pretending that this has something to do with consciousness, you are trying to make me give black or white answers to questions about chairs, black or white answers that do not apply precisely because words are vague.
What exactly do you mean by "vague"? The word "chair" refers to the category of chairs. Is the category itself "vague"?
I have been telling you from the beginning that the meanings of words are constructed individually and arbitrarily on a case-by-case basis. But you keep acting like there is some shared, absolute, and natural concept of a chair, apparently one that you have more knowledge of than I do. So I keep asking you specific questions about this concept. And finally, you seem to agree that you don't actually know what the corner cases are or should be, but apparently that's not because people use words as they please, but because this shared, absolute, and natural concept of a chair is "vague", whatever that means.
We can talk more about what this has to do with consciousness when we get past the "language 101" stuff. By the way, this thread started here where you explicitly start talking about words and meanings, so that's what we're talking about.
The reason why I wrote the previous sentence is because I am conscious.
That's just paraphrasing your previous claim.
how do you know you don't just agree with me about this whole discussion, and you are mechanically writing statements you don't agree with?
I have no problems here. First, everything is mechanical. Second, a process that would translate one belief into its opposite, in a consistent way, would be complex enough to be considered a mind of its own. I then identify "myself" with this mind, rather than the one that's mute.
Notably, the reason I gave for thinking my consciousness is causal is not a reason for thinking five fingers is.
You gave no reason for thinking that your consciousness is causal. You just replied with a question.
I means "does not have a meaning."
I'm sure you can see how unhelpful this is.
Robot pain is of ethical concern because pain hurts.
No, pain is of ethical concern because you don't like it. You don't have to involve consciousness here. You involve it, because you want to.
God and homeopathy are meaningful, which is why people are able to mount arguments against them,
Homeopathy is meaningful. God is meaningful only some of the time. But I didn't mean to imply that they are analogues. They're just other bad ideas that get way too much attention.
The ordinary definition for pain clearly does exist, if that is what you mean.
What is it exactly? Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.
Meaningfulness, existence, etc.
It is evident that this is a major source of our disagreement. Can you define "meaningless" for me, as you understand it? In particular, how it applies to grammatically correct statements.
It's perfectly good as a standalone statement
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I'm talking about are not just undetectable by light, they're also undetectable by all other methods.
I perform many human behaviors because I am conscious.
Another bold claim. Why do you think that there is a causal relationship between having consciousness and behavior? Are you sure that consciousness isn't just a passive observer? Also, why do you think that there is no causal relationship between having consciousness and five fingers?
I don't know where you think that was established.
Well, I asked you almost that exact question, you quoted it, and replied with something other than "yes". How was I supposed to interpret that?
So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair
So, if I find one chair-shaped rock, it's not a chair, but if I then take a second rock, sculpt it into the shape of the first rock and sit on it, the second rock is a chair? Would simply sitting on the first rock convert it into a chair?
I can understand why you wouldn't want to call a chair shaped rock a "chair". But you don't have to involve the intention of the maker for this.
but you have brought in a bunch of different issues without explaining how they interrelate
Which issues exactly?
No, still not from that.
Why not? Is this still about how you're uncomfortable saying that invisible unicorns don't exist? Does "'robot pain' is meaningless" follow from the same better?
If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair.
Yes, correlated variables are evidence, and evidence influences certainty about the classification, but that's not the same as influencing the classification.
And those things are true even given the same form
So if I made two identical objects, with the intention to use one as a chair and another as a coffee table, then one would be a chair and another would be a coffee table? I thought we already established that they wouldn't.
But surely, you believe that human-like behavior is stronger evidence than a hand with five fingers. Why is that?
Behavior sufficiently similar to human behavior would be a probable, although not conclusive, reason to think that it is conscious. There could not be a conclusive reason.
Why is this a probable reason? You have one data point - yourself. Sure, you have human-like behavior, but you also have many other properties, like five fingers on each hand. Why does behavior seem like a more significant indicator of consciousness than having hands with five fingers? How did you come to that conclusion?
Ok, do you have any arguments to support that it is causal?
Are you saying that we must have dualism, and that consciousness is something that certainly cannot be reduced to "parts moved by other parts"? It's not just that some arrangements of matter are conscious and others are not?
It also means not any other thing similar to consciousness, even if not exactly consciousness.
I have no idea what that means (a few typos, maybe?). Obviously, there are things that are unconscious but are not machines, so the words aren't identical. But if there is some difference between "mere machine" and "unconscious machine", you have to point it out for me.
My reason is that we have no reason to think that a roomba is conscious.
Hypothetically, what could a reason to think that a robot is conscious look like?
There is no extra step between recognizing the similarity of painful experiences and calling them all painful.
"Pain" is a word and humans aren't born knowing it. What does "no extra step" even mean? There are a few obvious steps. You have this habit of claiming something to be self-evident, when you're clearly just confused.
As I said, this is how people use the words.
What words? The word "causal"? I'm asking for arguments why you think that the relationship between intention and classification is causal. I expect you to understand the difference between causation and correlation. Why is this so difficult for you?
It is causal, but not infallible.
Ok, do you have any arguments to support that claim?
That's your problem. Everyone else will still call it "the sun,"
That may depend on the specific circumstances of the discovery. Also, different people can use the same words in different ways.
You can make arguments for and against robot pain as well.
Arguments like what?
The word "mere" in that statement means "and not something else of the kind we are currently considering." When I made the statement, I meant that the roomba is not conscious
Oh, so "mere machine" just a pure synonym of "not conscious"? Then I guess you were right about what my problem is. Taboo or not, your only argument why roomba is not conscious, is to proclaim that it is not conscious. I don't know how to explain to you that this is bad.
The roomba just has each part of it moved by other parts
Are you implying that humans do not have parts that move other parts?
The mind does the work of recognizing similarity for us.
No, you misunderstood my question. I get that the mind recognizes similarity. I'm asking, how do you attach labels of "pain" and "pleasure" to the groups of similar experiences?
You're wrong.
Maybe one of us is really a sentient roomba, pretending to be human? Who knows!
it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
I don't really know why you derive from this that all statements are meaningless. Maybe we disagree about what "meaningless" means? Wikipedia nicely explains that "A meaningless statement posits nothing of substance with which one could agree or disagree". It's easy for me to see that "undetectable purple unicorns exist" is a meaningless statement, and yet I have no problems with "it's raining outside".
How do you argue why "undetectable purple unicorns exist" is a meaningless statement? Maybe you think that it isn't, and that we should debate whether they really exist?
That's cute.
Seriously though, you have a bad habit of taking my rejection of one extreme (that all grammatically correct statements should be assumed meaningful) and interpreting that as the opposite extreme.
but you would probably object that this is just saying it is not conscious
I would also object by saying that a human is also a "mere machine".
the roomba's actions do not constitute a coherent whole
I have no idea what "coherent whole" means. Is roomba incoherent is some way?
you know quite well what I am talking about here
At times I honestly don't.
By recognizing that it is similar to the other feelings that I have called pain.
Ok, but that just pushes the problem one step back. There are various feelings similar to stubbing a toe, and there are various feelings similar to eating candy. How do you know which group is pain and which is pleasure?
Sweating is not an intense case of anything, so there wouldn't be much similarity.
I think you misunderstood me. Sweating is what people do when they're hot. I'm saying that pain isn't really that similar to heat, and then offered a couple of explanations why you might imagine that it is.
This is an obvious fact about how these words are used and does not need additional support.
Wow, you have no idea how many bold claims you make. To clarify once again, when I ask if intention matters, I'm asking whether the relationship between intention and classification is causal, or just a correlation. You are supposed to know the difference between those two things, and you're supposed to know, in theory, how to figure out which one is relevant in a specific case. This whole "does not need additional support" thing inspires no confidence.
Then you should wake up and stop being comfortable; the second is a better definition, exactly for that reason.
No, if tomorrow I found out that the "bright spot in the sky" is not a giant ball of gas undergoing fusion, but a powerful flashlight orbiting earth, I'm going to stop calling it "sun".
The stars outside the event horizon
I hate bringing up modern physics; it has limited relevance here. Maybe they'll figure out faster-than-light travel tomorrow, and your point will become moot. But if we must...
If we insist that something beyond the event horizon exists (I'd love to see how you define that word), we may still claim that the objects beyond it are similar to the objects here, if we have some arguments to support that. A heavy dose of Occam's razor helps too. Note though, that the certainty of beliefs derived this way should be pretty low. And in the case of robots, hardly any of this applies.
That's not the problem.
Wow, so you agree with me here? Is it not a problem to you at all, or just not "the" problem?
Yes. "Meaningless" , "immeasurable", "unnecessary" and "non existent" all mean different things.
Invisible unicorns are immeasurable. They do not exist. The assumption that they do exist is unnecessary. The statement "invisible unicorns are purple" is meaningless. The words aren't all exactly the same, but that doesn't mean they aren't all appropriate.
Why did it take you so long to express it that way?
A long, long time ago you wrote: "You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned it into a problem with the word 'pain'." So I assumed you understood that immeasurability is relevant here. Did you then forget?
Expressed in plain terms "robots do not feel pain" does not follow from "we do not know how to measure robot pain".
No, but it follows from "we have no arguments suggesting that maybe 'robot pain' could be something measurable, unless we redefine pain to mean something a lot more specific".
You keep saying various words are meaningless.
It's not that words are meaningless, it's that you sometimes apply them in stupid ways. "Bitter" is a fine word, until you start discussing the "bitterness of purple".
Consciousness is in the dictionary, chairness isn't.
Are dictionary writers the ultimate arbiters of what is real? "Unicorn" is also in the dictionary, by the way.
Consciousness is a concept used by science, chairness isn't.
A physicalist, medical definition of consciousness is used by science. You accuse me of changing definitions when it suits me, and then proceed to do exactly that. I guess that's what projection looks like.
Consciousness is supported by empirical evidence, chairness isn't.
What evidence exactly? I have to assume my last paragraph applies here too.
If you can't even come up with arguments why a silly concept I made up is flawed, maybe you shouldn't be so certain in the meaningfulness of other concepts.