Comments

Comment by Juno_Watt on Epistemic Viciousness · 2013-09-28T13:09:16.885Z · LW · GW

My Favorite Liar. Tell people that you're going to make X deliberately incorrect statements every training session and they've got to catch them.

I can think of only one example of someone who actually did this, and that was someone generally classed as a mystic.

Comment by Juno_Watt on The genie knows, but doesn't care · 2013-09-13T08:57:36.196Z · LW · GW

Not so! An AGI need not think like a human, need not know much of anything about humans, and need not, for that matter, be as intelligent as a human.

Is that a fact? No, it's a matter of definition. It's scarcely credible that you are unaware that a lot of people think the TT is critical to AGI.

The problem I'm pointing to here is that a lot of people treat 'what I mean' as a magical category.

I can't see any evidence of anyone involved in these discussions doing that. It looks like a straw man to me.

OK. NL is hard. Everyone knows that. But it's got to be solved anyway.

Nope!

An AI you can't talk to has pretty limited usefulness, and it has pretty limited safety too, since you don't even have the option of telling it to stop, or explaining to it why you don't like what it is doing. Oh, and isn't EY assuming that an AGI will have NLP? After all, it is supposed to be able to talk its way out of the box.

It's one the SI can solve for itself.

It can figure out semantics for itself. Values are a subset of semantics...

No human being has ever created anything -- no system of laws, no government or organization, no human, no artifact -- that, if it were more powerful, would qualify as Friendly.

Where do you get this stuff from? Modern societies, with their complex legal and security systems, are much less violent than ancient societies, to take just one example.

All or nearly all humans, if they were more powerful, would qualify as Unfriendly.

Gee. Then I guess they don't have an architecture with a basic drive to be friendly.

'Smiles' and 'statements of approval' are not adequate roadmarks, because those are stimuli the SI can seize control of in unhumanistic ways to pump its reward buttons.

Why don't humans do that?

No, it isn't.

Uh-huh. MIRI has settled that centuries-old question once and for all, has it?

And this is a non sequitur.

It can't be a non sequitur, since it is not an argument but a statement of fact.

Nothing else in your post calls orthogonality into question.

So? It wasn't relevant anywhere else.

Comment by Juno_Watt on General purpose intelligence: arguing the Orthogonality thesis · 2013-09-12T16:36:31.361Z · LW · GW

Let G1="Figure out the right goal to have"

Comment by Juno_Watt on General purpose intelligence: arguing the Orthogonality thesis · 2013-09-12T16:30:16.462Z · LW · GW

If an agent has goal G1 and sufficient introspective access to know its own goal, how would avoiding arbitrariness in its goals help it achieve goal G1 better than keeping goal G1 as its goal?

Avoiding arbitrariness is useful to epistemic rationality and therefore to instrumental rationality. If an AI has rationality as a goal it will avoid arbitrariness, whether or not that assists with G1.

Comment by Juno_Watt on The genie knows, but doesn't care · 2013-09-12T16:07:56.042Z · LW · GW

And you are confusing self-improving AIs with conventional programs.

Comment by Juno_Watt on The genie knows, but doesn't care · 2013-09-12T16:06:29.899Z · LW · GW

Those are only 'mistakes' if you value human intentions. A grammatical error is only an error because we value the specific rules of grammar we do; it's not the same sort of thing as a false belief (though it may stem from, or result in, false beliefs).

You will see a grammatical error as a mistake if you value grammar in general, or if you value being right in general.

A self-improving AI needs a goal. A goal of self-improvement alone would work. A goal of getting things right in general would work too, and be much safer, as it would include getting our intentions right as a sub-goal.
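
A minimal sketch of the kind of goal hierarchy described above, assuming a made-up Goal class; nothing here is any real system's API, it just illustrates "getting our intentions right" sitting as a sub-goal under "getting things right in general":

```python
# Hypothetical illustration only: the Goal class and goal names are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    description: str
    subgoals: List["Goal"] = field(default_factory=list)

# "Getting things right in general" with intention-tracking as one sub-goal.
get_things_right = Goal(
    "get things right in general",
    subgoals=[
        Goal("get factual beliefs right"),               # epistemic accuracy
        Goal("get the programmers' intentions right"),   # covers 'what we mean'
        Goal("improve own capabilities"),                # self-improvement as a means
    ],
)

def print_goals(goal: Goal, depth: int = 0) -> None:
    """Print the hierarchy, showing sub-goals indented under their parent."""
    print("  " * depth + goal.description)
    for sub in goal.subgoals:
        print_goals(sub, depth + 1)

print_goals(get_things_right)
```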

Comment by Juno_Watt on The genie knows, but doesn't care · 2013-09-12T15:17:33.764Z · LW · GW

GAI is a program. It always does what it's programmed to do. That's the problem—a program that was written incorrectly will generally never do what it was intended to do.

So self-correcting software is impossible. Is self improving software possible?

Comment by Juno_Watt on The genie knows, but doesn't care · 2013-09-12T15:13:26.499Z · LW · GW

You've still not given any reason for the future software to care about "what you mean" over all those other calculations either.

Software that cares what you mean will be selected for by market forces.

Comment by Juno_Watt on The genie knows, but doesn't care · 2013-09-12T15:11:47.561Z · LW · GW

Present-day software may not have got far with regard to the evaluative side of doing what you want, but XiXiDu's point seems to be that it is getting better at the semantic side. Who was it who said the value problem is part of the semantic problem?

Comment by Juno_Watt on The genie knows, but doesn't care · 2013-09-12T14:14:48.313Z · LW · GW

A. Solve the Problem of Meaning-in-General in advance, and program it to follow our instructions' real meaning. Then just instruct it 'Satisfy my preferences', and wait for it to become smart enough to figure out my preferences.

That problem has got to be solved somehow at some stage, because something that couldn't pass a Turing Test is no AGI.

But there are a host of problems with treating the mere revelation that A is an option as a solution to the Friendliness problem.

  1. You have to actually code the seed AI to understand what we mean.

Why is that a problem? Is anyone suggesting AGI can be had for free?

  2. The Problem of Meaning-in-General may really be ten thousand heterogeneous problems, especially if 'semantic value' isn't a natural kind. There may not be a single simple algorithm that inputs any old brain-state and outputs what, if anything, it 'means'; it may instead be that different types of content are encoded very differently.

OK. NL is hard. Everyone knows that. But it's got to be solved anyway.

3... On the face of it, programming an AI to fully understand 'Be Friendly!' seems at least as difficult as just programming Friendliness into it, but with an added layer of indirection.

Yeah, but it's got to be done anyway.

[more of the same snipped]

It's clear that building stable preferences out of B or C would create a Friendly AI.

Yeah. But it wouldn't be an AGI or an SI if it couldn't pass a TT.

The genie — if it bothers to even consider the question — should be able to understand what you mean by 'I wish for my values to be fulfilled.' Indeed, it should understand your meaning better than you do. But superintelligence only implies that the genie's map can compass your true values. Superintelligence doesn't imply that the genie's utility function has terminal values pinned to your True Values, or to the True Meaning of your commands.

The issue of whether the SI's UF contains a set of human values is irrelevant. In a Loosemore architecture, an AI needs to understand and follow the directive "be friendly to humans", and those are all the goals it needs: to understand, and to follow.

When you write the seed's utility function, you, the programmer, don't understand everything about the nature of human value or meaning. That imperfect understanding remains the causal basis of the fully-grown superintelligence's actions, long after it's become smart enough to fully understand our values.

The UF only needs to contain "understand English, and obey this directive". You don't have to code semantics into the UF. You do, of course, have to code it in somewhere.
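
A toy contrast of the two approaches being discussed, as I read them; the class names and scoring rules below are made up for illustration and are not anyone's actual proposal:

```python
# Invented illustration: contrasts a UF with a preloaded expansion of "friendly"
# against a UF that just defers to a learned understanding of a directive.

class HandWrittenSpec:
    """MIRI-style: a detailed expansion of 'friendly' written in advance."""
    def score(self, outcome: str) -> float:
        return 1.0 if "smiling humans" in outcome else 0.0  # brittle, fixed proxy

class LearnedSemantics:
    """Loosemore-style: the expansion of 'friendly' lives in a learned,
    revisable model of English, not in the utility function itself."""
    def interpret(self, directive: str) -> "LearnedSemantics":
        return self  # stands in for parsing the directive with learned semantics
    def score(self, outcome: str) -> float:
        return 1.0 if "humans report being well treated" in outcome else 0.0

def utility_preloaded(outcome: str, spec: HandWrittenSpec) -> float:
    # The expansion of "friendly" is pinned down inside the UF before the system runs.
    return spec.score(outcome)

def utility_directive(outcome: str, semantics: LearnedSemantics,
                      directive: str = "be friendly to humans") -> float:
    # The UF only says "obey the directive as you understand it".
    return semantics.interpret(directive).score(outcome)

print(utility_preloaded("smiling humans everywhere", HandWrittenSpec()))
print(utility_directive("humans report being well treated", LearnedSemantics()))
```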

Instead, we have to give it criteria we think are good indicators of Friendliness, so it'll know what to self-modify toward

A problem which has been solved over and over by humans. Humans don't need to be loaded a priori with what makes other humans happy; they only need to know general indicators, like smiles and statements of approval.

Yes, the UFAI will be able to solve Friendliness Theory. But if we haven't already solved it on our own power, we can't pinpoint Friendliness in advance, out of the space of utility functions. And if we can't pinpoint it with enough detail to draw a road map to it and it alone, we can't program the AI to care about conforming itself with that particular idiosyncratic algorithm.

Why would that be necessary? In the Loosemore architecture, the AGI has the goals of understanding English and obeying the Be Friendly directive. It eventually gets a detailed, extensional understanding of Friendliness from pursuing those goals. Why would it need to be preloaded with a detailed, extensional unpacking of friendliness? It could fail in understanding English, of course. But there is no special reason to think it is likely to fail at understanding "friendliness" specifically, and its competence can be tested as you go along.

And if we can't pinpoint it with enough detail to draw a road map to it and it alone, we can't program the AI to care about conforming itself with that particular idiosyncratic algorithm.

I don't see the problem. In the Loosemore architecture, the AGI will care about obeying "be friendly", and it will arrive at the detailed expansion, the idiosyncrasies, of "friendly" as part of its other goal to understand English. It cares about being friendly, and it knows the detailed expansion of friendliness, so where's the problem?

Yes, the UFAI will be able to self-modify to become Friendly, if it so wishes. But if there is no seed of Friendliness already at the heart of the AI's decision criteria, no argument or discovery will spontaneously change its heart.

Says who? It has the high-level directive, and another directive to understand the directive. It's been Friendly in principle all along; it just needs to fill in the details.

Unless we ourselves figure out how to program the AI to terminally value its programmers' True Intentions,

Then we do need to figure out how to program the AI to terminally value its programmers' True Intentions. That is hardly a fatal objection. Did you think the Loosemore architecture was one that bootstraps itself without any basic goals?

And if we do discover the specific lines of code that will get an AI to perfectly care about its programmer's True Intentions, such that it reliably self-modifies to better fit them — well, then that will just mean that we've solved Friendliness Theory.

No. The goal to understand English is not the same as a goal to be friendly in every way; it is more constrained.

Solving Friendliness, in the MIRI sense, means preloading a detailed expansion of "friendly". That is not what is happening in the Loosemore architecture. So it is not equivalent to solving the same problem.

The clever hack that makes further Friendliness research unnecessary is Friendliness.

Nope.

Intelligence on its own does not imply Friendliness.

That is an open question.

It's true that a sufficiently advanced superintelligence should be able to acquire both abilities. But we don't have them both, and a pre-FOOM self-improving AGI ('seed') need not have both. Being able to program good programmers is all that's required for an intelligence explosion; but being a good programmer doesn't imply that one is a superlative moral psychologist or moral philosopher.

Then hurrah for the Loosemore architecture, which doesn't require humans to "solve" friendliness in the MIRI sense.

Comment by Juno_Watt on The genie knows, but doesn't care · 2013-09-12T06:53:17.698Z · LW · GW

Some folks on this site have accidentally bought unintentional snake oil in The Big Hoo Hah That Shall Not Be Mentioned. Only an intelligent person could have bought that particular puppy.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-09-08T10:20:08.117Z · LW · GW

1) What seems (un)likely to an individual depends on their assumptions. If you regard consc. as a form of information processing, then there is very little inferential gap to a conclusion of functionalism or computationalism. But there is a Hard Problem of consc. precisely because some aspects -- subjective experience, qualia -- don't have any theoretical or practical basis in functionalism or computer technology: we can build memory chips and write storage routines, but we can't even get a start on building emotion chips or writing seeRed().

2) It's not practical at the moment, and wouldn't answer the theoretical questions.

Comment by Juno_Watt on Reality is weirdly normal · 2013-08-31T08:45:40.184Z · LW · GW

my intuition that [Mary] would not understand qualia disappears.

For any value of abnormal? She is only quantitatively superior: she does not have brain-rewiring abilities.

Comment by Juno_Watt on Rationality Quotes September 2011 · 2013-08-31T08:07:57.264Z · LW · GW

Isn't that disproved by paid-for networks, like HBO? And what about non-US broadcasters like the BBC?

Comment by Juno_Watt on Reality is weirdly normal · 2013-08-28T01:06:12.146Z · LW · GW

I think this problem goes both ways. So even if we could get some kind of AI to translate the knowledge into verbal statements for us, it would be impossible, or very difficult, for anything resembling a normal human to gain "experiential knowledge" just by reading the verbal statements.

Mary isn't a normal human. The point of the story is to explore the limits of explanation. That being the case, Mary is granted unlimited intelligence, so that whatever limits she encounters are limits of explanation, and not her own limits.

I think the most likely reason that qualia seem irreducible is because of some kind of software problem in the brain that makes it extremely difficult, if not impossible, for us to translate the sort of "experiential knowledge" found in the unconscious "black box" parts of the brain into the sort of verbal, propositional knowledge that we can communicate to other people by language. The high complexity of our minds probably compounds the difficulty even further.

Whatever is stopping Mary from understanding qualia, if you grant that she does not, is not their difficulty in relation to her abilities, as explained above. We might not be able to understand our qualia because we are too stupid, but Mary does not have that problem.

Comment by Juno_Watt on Reality is weirdly normal · 2013-08-28T00:51:22.207Z · LW · GW

Why is she generating a memory? How is she generating a memory?

Comment by Juno_Watt on Reality is weirdly normal · 2013-08-28T00:44:28.284Z · LW · GW

So she's bound and gagged, with no ability to use her knowledge?

If by "using her knowledge" you mean performing neurosurgery on herself, I have to repeat that that is a cheat. Otherwise, I have to point out that knowledge of, e.g., photosynthesis doesn't cause photosynthesis.

Comment by Juno_Watt on Reality is weirdly normal · 2013-08-27T08:59:32.254Z · LW · GW

She could then generate such memories in her own brain,

Mary is a super-scientist in terms of intelligence and memory, but doesn't have special abilities to rewire her own cortex. Internally generating Red is a cheat, like pricking her thumb to observe the blood.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-27T08:38:20.540Z · LW · GW

If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work.

I am arguing about cases of WBE and neural replacement, which are stipulated as not being 1:1 atom-for-atom replacements.

Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence

Not coincidence: a further stipulation that functional equivalence is preserved in WBEs.

Are you proposing that it's impossible to build a straight neuron substitute that talks of qualia, without engineering purposeful qualia-talk-emulation machinery?

I am noting that equivalent talk must be included in functional equivalence.

Why not just build a regular qualia engine, by copying the meat-brain processes 1:1?

You mean atom-by-atom? But it has been put to me that you only need synapse-by-synapse copies. That is what I am responding to.

Comment by Juno_Watt on Reality is weirdly normal · 2013-08-27T01:11:03.122Z · LW · GW

Feelings are made out of firing neurons, which are in turn made out of atoms.

A claim that some X is made of some Y is not the same as showing how X's are made of Y's. Can you explain why red is produced, and not something else?

I don't get the appeal of dualism.

I wasn't selling dualism; I was noting that ESR's account is not particularly physicalist, as well as being not particularly explanatory.

P-zombies and inverted spectrums deserve similar ridicule.

I find the Mary argument more convincing.

Comment by Juno_Watt on Reality is weirdly normal · 2013-08-26T22:31:17.864Z · LW · GW

That isn't a reductive explanation, because no attempt is made to show how Mary's red quale breaks down into smaller component parts. In fact, it doesn't do much more than say subjectivity exists, and occurs in sync with brain states. As such, it is compatible with dualism.

Reading Wikipedia's entry on qualia, it seems to me that most of the arguments that qualia can't be explained by reductionism are powered by the same intuition that makes us think that you can give someone superpowers without changing them in any other way.

You mean p-zombie arguments?

But because qualia are a property of our brain's interaction with external stimuli, rather than a property of our bodies, the idea that you could change someone's qualia without changing their brain or the external world fails to pass our nonsense detector.

Whatever. That doesn't actually provide an explanation of qualia.

Comment by Juno_Watt on Reality is weirdly normal · 2013-08-26T21:44:46.733Z · LW · GW

It has always seemed to me that qualia exist, and that they can fully be explained by reductionism and physicalism

Can you point me to such an explanation??

Comment by Juno_Watt on Reality is weirdly normal · 2013-08-26T21:13:10.764Z · LW · GW

Reductionism says there is some existing thing X which is composed of, understandable in terms of, and ultimately identical to some other existing thing Y. Eliminativism says X doesn't exist. Heat has been reduced; phlogiston has been eliminated.

Comment by Juno_Watt on Reality is weirdly normal · 2013-08-26T21:09:58.522Z · LW · GW

I agree with most of this, although I am not sure that the way strawberries taste to me is a posit.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-26T15:32:38.909Z · LW · GW

If a change to the way your functionality is implemented alters how your consciousness seems to you, your consciousness will seem different to you. If your functionality is preserved, you won't be able to report it. You will report that tomatoes are red even if they look grue or bleen to you. (You may also not be able to cognitively access -- remember or think about -- the change, if that is part of the preserved functionality. But if your experience changes, you can't fail to experience it.)

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-26T15:13:11.936Z · LW · GW

Implying that qualia can be removed from a brain while maintaining all internal processes that sum up to cause talk of qualia, without deliberately replacing them with a substitute. In other words, your "qualia" are causally impotent and I'd go so far as to say, meaningless.

Doesn't follow. Qualia aren't causing Charles's qualia-talk, but that doesn't mean they aren't causing mine. Kidney dialysis machines don't need nephrons, but that doesn't mean nephrons are causally idle in kidneys.

The epiphenomenality argument works for atom-by-atom duplicates, but not in WBE and neural replacement scenarios. If identity theory is true, qualia have the causal powers of whatever physical properties they are identical to. If identity theory is true, changing the physical substrate could remove or change the qualia.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-26T14:39:32.838Z · LW · GW

Accounting for qualia and starting from qualia are two entirely different things. Saying "X must have qualia" is unhelpful if we cannot determine whether or not a given thing has qualia.

We can tell that we have qualia, and our own consciousness is the natural starting point.

"Qualia" can be defined by giving examples: the way anchovies taste, the way tomatoes look, etc.

You are making heavy weather of the indefinability of some aspects of consciousness, but the flipside of that is that we all experience our own consciousness. It is not a mystery to us. So we can substitute "inner ostension" for abstract definition.

There doesn't appear to be anything inherently biological about what we are talking about when we are talking about consciousness.

OTOH, we don't have examples of non-biological consc.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-26T13:07:58.692Z · LW · GW

I don't see anything very new here.

Charles: "Uh-uh! Your operation certainly did disturb the true cause of my talking about consciousness. It substituted a different cause in its place, the robots. Now, just because that new cause also happens to be conscious—talks about consciousness for the same generalized reason—doesn't mean it's the same cause that was originally there."

Albert: "But I wouldn't even have to tell you about the robot operation. You wouldn't notice. If you think, going on introspective evidence, that you are in an important sense "the same person" that you were five minutes ago, and I do something to you that doesn't change the introspective evidence available to you, then your conclusion that you are the same person that you were five minutes ago should be equally justified. Doesn't the Generalized Anti-Zombie Principle say that if I do something to you that alters your consciousness, let alone makes you a completely different person, then you ought to notice somehow?"

How does Albert know that Charles's consciousness hasn't changed? It could have changed because of the replacement of protoplasm by silicon. And Charles won't report the change, because the replacement is functionally equivalent.

Charles: "Introspection isn't perfect. Lots of stuff goes on inside my brain that I don't notice."

If Charles's qualia have changed, that will be noticeable to Charles -- introspection is hardly necessary, since the external world will look different! But Charles won't report the change. "Introspection" is being used ambiguously here, between what is noticed and what is reported.

Albert: "Yeah, and I can detect the switch flipping! You're detecting something that doesn't make a noticeable difference to the true cause of your talk about consciousness and personal identity. And the proof is, you'll talk just the same way afterward."

Albert's comment is a non sequitur. That the same effect occurs does not prove that the same cause occurs; there can be multiple causes of reports like "I see red". Because the neural substitution preserves functional equivalence, Charles will report the same qualia whether or not he still has them.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-26T12:14:35.187Z · LW · GW

If we want to understand how consciousness works in humans, we have to account for qualia as part of it. Having an understanding of human consc. is the best practical basis for deciding whether other entities have consc. OTOH, starting by trying to decide which entities have consc. is unlikely to lead anywhere.

The biological claim can be ruled out if it is incoherent, but not for being unproven, since the functional/computational alternative is also unproven.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-26T01:24:29.225Z · LW · GW

Why? That doesn't argue any point relevant to this discussion.

Comment by Juno_Watt on The Generalized Anti-Zombie Principle · 2013-08-26T01:11:04.729Z · LW · GW

"qualia" labels part of the explanandum, not the explanation.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-26T01:02:46.470Z · LW · GW

The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren't conscious.

A functional duplicate will talk the same way as whomever it is a duplicate of.

A faithful synaptic-level silicone WBE, if it independently starts talking about it at all, must be talking about it for the same reason as us (ie. consciousness),

A WBE of a specific person will respond to the same stimuli in the same way as that person. Logically, that will be for the reason that it is a duplicate. Physically, the "reason", or ultimate cause, could be quite different, since the WBE is physically different.

since it hasn't been deliberately programmed to fake consciousness-talk.

It has been programmed to be a functional duplicate of a specific individual.

Or, something extremely unlikely has happened.

Something unlikely to happen naturally has happened. A WBE is an artificial construct which is exactly the same as a person in some ways, and radically different in others.

Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn't actually play any role in our consciousness-talk,

Actually it isn't, for reasons that are widely misunderstood: kidney dialysis machines don't need nephrons, but that doesn't mean nephrons are causally idle in kidneys.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-25T23:23:22.466Z · LW · GW

The argument against p-zombies is that there is no physical difference that could explain the difference in consciousness. That does not extend to silicon WBEs or AIs.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-25T22:49:30.328Z · LW · GW

Because while it's conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to those why we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause) it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause

For some value of "cause". If you are interested in which synaptic signals cause which reports, then you have guaranteed that the cause will be the same. However, I think what we are interested in is whether reports of experience and self-awareness are caused by experience and self-awareness.

We also speak of surface correspondence, in addition to synaptic correspondence, to verify that some

However it leaves the realm of things that happen in the real world, and enters the realm of elaborate fears that don't actually happen in real life, to suppose that some tiny overlooked property of the synapses both destroys the original cause of talk about consciousness, and substitutes an entirely new distinct and non-isomorphic cause which reproduces the behavior of talking about consciousness and thinking you're conscious to the limits of inspection yet does not produce actual consciousness, etc.

Maybe. But your stipulation of causal isomorphism at the synaptic level only guarantees that there will only be minor differences at that level. Since you don't care how the Em's synapses are implemented, there could be major differences at the subsynaptic level -- indeed, if your Em is silicon-based, there will be. And if those differences lead to differences in consciousness (which they could, irrespective of the point made above, since they are major differences), those differences won't be reported, because the immediate cause of a report is a synaptic firing, which will be guaranteed to be the same!

You have, in short, set up the perfect conditions for zombiehood: a silicon-based Em is different enough from a wetware brain to reasonably have a different form of consciousness, but it can't report such differences, because it is a functional equivalent: it will say that tomatoes are red, whatever it sees!

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-25T20:32:53.170Z · LW · GW

I don't see the relevance. I was trying to argue that the biological claim could be read as more specific than the functional one.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-25T20:20:17.896Z · LW · GW

Instead, if structural correspondence allowed for significant additional confidence that the em's professions of being conscious were true, wouldn't such a model just not stop, demanding "turtles all the way down"?

IOW, why assign "top" probability to the synaptic level, when there are further levels?

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-25T20:19:45.995Z · LW · GW

This comment:

EY to Kawoomba:

This by itself retains the possibility that something vital was missed, but then it should show up in the surface correspondences of behavior, and in particular, if it eliminates consciousness, I'd expect what was left of the person to notice that.

Appears to contradict this comment:

EY to Juno_Watt:

Since whole brains are not repeatable, verifying behavioral isomorphism with a target would require a small enough target that its internal interactions were repeatable. (Then, having verified the isomorphism, you tile it across the whole brain.)

Comment by Juno_Watt on Biases of Intuitive and Logical Thinkers · 2013-08-25T20:03:20.567Z · LW · GW

If chatting with cute women has utility for you, your decision was rational. Rationality doesn't mean you have to restrict yourself to "official" payoffs.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-25T19:46:16.413Z · LW · GW

Then why require causal isomorphism of the synaptic structure in addition to surface correspondence of behaviour?

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-25T19:24:11.312Z · LW · GW

What does "like" mean, there? The actual biochemistry, so that pieces of Em could be implanted in a real brain, or just accurate virtualisation, like a really good flight simulator?

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-25T15:08:39.598Z · LW · GW

Yep. They weren't an exhaustive definition of consc., and weren't said to be. No one needs to infer the subject matter from 1) and 2), since it was already given.

Tell me, are you like this all the time? You might make a good roommate for Dr Sheldon Cooper.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-25T14:47:38.140Z · LW · GW

I'm just asking you what the word means to you, because it demonstrably means different things to different people, even though they are all English users.

I have already stated those aspects of the meaning of "consciousness" necessary for my argument to go through. Why should I explain more?

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-25T14:22:16.939Z · LW · GW

I am saying it is not conceptually possible to have something that precisely mimics a biological entity without being biological.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-25T14:19:45.515Z · LW · GW

The claim then rules out computationalism.

Comment by Juno_Watt on Why Are Individual IQ Differences OK? · 2013-08-25T14:17:43.488Z · LW · GW

I mean data about individuals, like resumes and qualifications. That racial-group info correlates with important things is unimportant, unless it correlates significantly more than individual data. However, the reverse is the case.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-25T14:11:53.753Z · LW · GW

If you use the word "consciousness", you ought to know what you mean by it.

The same applies to you. Any English speaker can attach a meaning to "consciousness". That doesn't imply the possession of deep metaphysical insight. I don't know what dark matter "is" either. I don't need to fully explain what consc. "is", since...

"I don't think the argument requires consc. to be anything more than:

1) something that is there or not (not a matter of interpretation or convention).

2) something that is not entirely inferable from behaviour."

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-24T19:06:22.664Z · LW · GW

What makes you think I know?

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-24T18:49:51.379Z · LW · GW

This cybernetic replacement fully emulates all interactions that it can have with any neighboring cells including any changes in those interactions based on inputs received and time passed, but is not biological.

Why would that be possible? Neurons have to process biochemicals. A full replacement would have to as well. How could it do that without being at least partly biological?

It might be the case that an adequate replacement -- not a full replacement -- could be non-biological. But it might not.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-24T18:44:36.376Z · LW · GW

Therefore a WBE will have different consciousness (i.e. qualitatively different experiences), although very similar to the corresponding human consciousness.

That would depend on the granularity of the WBE, which has not been specified, and the nature of the supervenience of experience on brain states, which is unknown.

Comment by Juno_Watt on How sure are you that brain emulations would be conscious? · 2013-08-24T18:11:03.933Z · LW · GW

I wasn't arguing that differences in implementation are not important. For some purposes they are very important.

I am not arguing they are important. I am arguing that there are no facts about what is an implementation unless a human has decided what is being implemented.

We should not discuss the question of what can be conscious, however, without first tabooing "consciousness" as I requested.

I don't think the argument requires consc. to be anything more than:

1) something that is there or not (not a matter of interpretation or convention).

2) something that is not entirely inferable from behaviour.