Comments

Comment by matt1 on Rationality Quotes: April 2011 · 2011-04-06T01:48:43.392Z

"at this stage, you've just assumed the conclusion. you've just assumed what you want to prove.

No - what I'm pointing out is that the question "what are the ethical implications for turing machines" is the same question as "what are the ethical implications for human beings" in that case."

Yeah, look, I'm not stupid. If someone assumes A, writes out the modus ponens A -> B (where A -> B is an obvious statement), and concludes B, then pointing out "hey look, I didn't assume B, I proved it!" doesn't mean they proved anything deep. They still assumed the conclusion they wanted, since they assumed a statement that trivially implies it. But I'll bow out now too. I only followed a link from a different forum, and my fears were confirmed that this is a group of people who don't have anything meaningful or rational to say about certain concepts. You don't even realize that certain things are in principle open to physical test! And you drew an analogy to creationism vs. evolution without realizing that evolution had, and has, many positive pieces of observable, physical evidence in its favor, while your position has at present very minimal tangible evidence in its favor (certain recent experiments in neuroscience can be charitably interpreted in favor of your argument, but on their own they are certainly not enough).

Comment by matt1 on Rationality Quotes: April 2011 · 2011-04-06T01:04:16.232Z

Thanks. My point exactly.

Comment by matt1 on Rationality Quotes: April 2011 · 2011-04-06T01:02:17.826Z

You should be aware that in many cases the sensible way to proceed is to recognize the limits of your knowledge. Since this website preaches rationality, it's worth not assigning probabilities of 0% or 100% to things you don't actually know to be true or false. (By the way, I didn't say 1) is the right answer; I think it's reasonable, but my own answer is 3).)

And sometimes you do have to wait for an answer. For a lesson from math, consider that Fermat had flat-out no hope of proving his "last theorem"; it took roughly 350 years of apparently unrelated developments to get there. One could easily give a few hundred examples of that sort of thing in any hard science with a long enough history.

Comment by matt1 on Rationality Quotes: April 2011 · 2011-04-06T00:59:04.613Z

You wrote: "This is the part where you're going astray, actually. We have no reason to think that human beings are NOT Turing-computable. In other words, human beings almost certainly are Turing machines."

at this stage, you've just assumed the conclusion. you've just assumed what you want to prove.

"Therefore, consciousness -- whatever we mean when we say that -- is indeed possible for Turing machines."

Having assumed that A is true, you can easily prove that A is true. You haven't given an argument.

"To refute this proposition, you'd need to present evidence of a human being performing an operation that can't be done by a Turing machine."

It's not my job to refute the proposition. Currently, as far as I can tell, the question is open. If I did refute it, then my conjecture (and several other people's) would be proven. But if I don't refute it, that doesn't mean your proposition is true; it just means that it hasn't yet been proven false. Those are quite different things, you know.

Comment by matt1 on Rationality Quotes: April 2011 · 2011-04-06T00:55:21.789Z

Sorry, I'm not familiar with that. Can it be summarized?

Comment by matt1 on Rationality Quotes: April 2011 · 2011-04-05T22:19:38.540Z

btw, I'm fully aware that I'm not asking original questions or having any truly new thoughts about this problem. I just hoped maybe someone would try to answer these old questions given that they had such confidence in their beliefs.

Comment by matt1 on Rationality Quotes: April 2011 · 2011-04-05T22:06:51.173Z

No, I was not trying to think along those lines. I must say, I worried in advance that discussing philosophy with people here would be fruitless, but I was lured over by a link, and it seems worse than I feared. In case it isn't clear, I'm perfectly aware of what a Turing machine is; incidentally, while I'm not a computer scientist, I am a professional mathematical physicist with a strong interest in computation, so I'm not sitting around saying "OH NOES" while being ignorant of the terms I'm using. I'm trying to highlight one aspect of an issue that appears in many cases: if consciousness (meaning whatever we mean when we say that humans have consciousness) is possible for Turing machines, what are the implications if we do any of the obvious things (replaying, turning off, etc.)? I haven't yet seen any reasonable answer other than 1) this is too hard for us to work out, but someday perhaps we will understand it (the original answer, and I think a good one in its acknowledgment of ignorance, which is always valid and a good sign that someone has thought about things), and 2) some pointless and wrong mocking (your answer, and I think a bad one). Edit to add: I forgot, of course, to include my current guess as to the most likely answer: 3) that consciousness isn't possible for Turing machines.

Comment by matt1 on Rationality Quotes: April 2011 · 2011-04-05T20:35:49.029Z

This is a fair answer. I disagree with it, but it is fair in the sense that it admits ignorance. The two distinct points of view are (mine) that there is something about human consciousness that cannot be explained within the language of Turing machines, and (yours) that there is something about human consciousness that we are not currently able to explain in terms of Turing machines. Both of us at least admit that consciousness currently has no explanation, and absent future discoveries I don't think there is a sure way to tell which of us is right.

I find it hard to fully develop a theory of morality consistent with your point of view. For example, given a computer simulation of a human mind, would it be wrong to run that simulation through a given painful experience over and over again? Let us assume that the painful experience has already happened once; I am only asking whether it would be wrong to rerun it. After all, it is just repeating the same deterministic actions on the computer, so nothing seems to be wrong about this. Or, if I make a backup copy of such a program and allow that backup to run for a short period of time under slightly different stimuli, at what point does that copy acquire an existence of its own, such that it would be wrong to delete it in favor of the original? I could pose many other similar questions. My point is not that your point of view denies a morality, but rather that I find it hard to develop a full theory of morality that is internally consistent and matches your assumptions (not that developing one under my assumptions is that much easier).
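
To make these two thought experiments concrete, here is a minimal, purely illustrative Python sketch. The `simulate_step` function, the toy state dictionary, and the stimulus names are hypothetical stand-ins of my own invention, not anyone's model of a mind; the code only shows what "rerunning a deterministic simulation" and "forking a diverging backup" mean mechanically, not how to resolve the moral question.

```python
import copy

def simulate_step(state, stimulus):
    # Deterministic update: same state + same stimulus -> same next state.
    return {
        "memory": state["memory"] + [stimulus],
        "pain": state["pain"] + (1 if stimulus == "painful" else 0),
    }

initial = {"memory": [], "pain": 0}

# Rerun question: replaying identical stimuli reproduces an identical
# sequence of states, bit for bit.
run1 = copy.deepcopy(initial)
run2 = copy.deepcopy(initial)
for s in ["painful", "painful"]:
    run1 = simulate_step(run1, s)
    run2 = simulate_step(run2, s)
assert run1 == run2  # the "experience" is numerically identical both times

# Backup question: fork a copy, give it slightly different stimuli,
# and the two states diverge from that moment on.
original = copy.deepcopy(initial)
backup = copy.deepcopy(original)
original = simulate_step(original, "painful")
backup = simulate_step(backup, "pleasant")
print(original != backup)  # True: the copies now have different histories
```

The point of the sketch is only that nothing in the deterministic machinery itself marks the moment at which the rerun, or the diverging copy, starts to matter morally.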

Among professional scientists and mathematicians, I have encountered both viewpoints: those who hold it obvious to anyone with even the simplest knowledge that Turing machines cannot be conscious, and those who hold that the opposite is true. Mathematicians seem to lean a little more toward the first viewpoint than other disciplines, but it is a mistake to think that professional, world-class research-level knowledge of physics, neuroscience, mathematics, or computer science necessarily inclines one toward the soulless viewpoint.

Comment by matt1 on Rationality Quotes: April 2011 · 2011-04-05T18:31:38.045Z

Of course, my original comment had nothing to do with God. It had to do with "souls", for lack of a better term, since that was the term used in the original discussion (I suggest reading the original post if you want to know more; basically, as I understand the intent, it simply referred to some hypothetical quality associated with consciousness that lies outside the realm of what is simulable on a Turing machine). If you think that humans are nothing but Turing machines, why is it morally wrong to kill a person but not morally wrong to turn off a computer? Please give a real answer: either provide an answer that admits that humans cannot be simulated by Turing machines, or else give your answer using only concepts relevant to Turing machines (don't talk about consciousness, qualia, hopes, or whatever, unless you can precisely quantify those concepts in the language of Turing machines). In the second case, your answer should allow me to determine where the moral balance between humans and computers lies: would it be morally bad, for example, to turn off a primitive AI with intelligence at the level of a mouse?