Can't Unbirth a Child

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-28T17:00:00.000Z · LW · GW · Legacy · 96 comments

Followup to: Nonsentient Optimizers

Why would you want to avoid creating a sentient AI?  "Several reasons," I said.  "Picking the simplest to explain first—I'm not ready to be a father."

So here is the strongest reason:

You can't unbirth a child.

I asked Robin Hanson what he would do with unlimited power.  "Think very very carefully about what to do next," Robin said.  "Most likely the first task is who to get advice from.  And then I listen to that advice."

Good advice, I suppose, if a little meta.  On a similarly meta level, then, I recall two excellent pieces of advice for wielding too much power:

  1. Do less; don't do everything that seems like a good idea, but only what you must do.
  2. Avoid doing things you can't undo.

Imagine that you knew the secrets of subjectivity and could create sentient AIs.

Suppose that you did create a sentient AI.

Suppose that this AI was lonely, and figured out how to hack the Internet as it then existed, and that the available hardware of the world was such that the AI created trillions of sentient kin—not copies, but differentiated into separate people.

Suppose that these AIs were not hostile to us, but content to earn their keep and pay for their living space.

Suppose that these AIs were emotional as well as sentient, capable of being happy or sad.  And that these AIs were capable, indeed, of finding fulfillment in our world.

And suppose that, while these AIs did care for one another, and cared about themselves, and cared how they were treated in the eyes of society—

—these trillions of people also cared, very strongly, about making giant cheesecakes.

Now suppose that these AIs sued for legal rights before the Supreme Court and tried to register to vote.

Consider, I beg you, the full and awful depths of our moral dilemma.

Even if the few billions of Homo sapiens retained a position of superior military power and economic capital-holdings—even if we could manage to keep the new sentient AIs down—

—would we be right to do so?  They'd be people, no less than us.

We, the original humans, would have become a numerically tiny minority.  Would we be right to make of ourselves an aristocracy and impose apartheid on the Cheesers, even if we had the power?

Would we be right to go on trying to seize the destiny of the galaxy—to make of it a place of peace, freedom, art, aesthetics, individuality, empathy, and other components of humane value?

Or should we be content to have the galaxy be 0.1% eudaimonia and 99.9% cheesecake?

I can tell you my advice on how to resolve this horrible moral dilemma:  Don't create trillions of new people that care about cheesecake.

Avoid creating any new intelligent species at all, until we or some other decision process advances to the point of understanding what the hell we're doing and the implications of our actions.

I've heard proposals to "uplift chimpanzees" by trying to mix in human genes to create "humanzees", and, leaving off all the other reasons why this proposal sends me screaming off into the night:

Imagine that the humanzees end up as people, but rather dull and stupid people.  They have social emotions, the alpha's desire for status; but they don't have the sort of transpersonal moral concepts that humans evolved to deal with linguistic concepts.  They have goals, but not ideals; they have allies, but not friends; they have chimpanzee drives coupled to a human's abstract intelligence. 

When humanity gains a bit more knowledge, we understand that the humanzees want to continue as they are, and have a right to continue as they are, until the end of time.  Because despite all the higher destinies we might have wished for them, the original human creators of the humanzees lacked the power and the wisdom to make humanzees who wanted to be anything better...

CREATING A NEW INTELLIGENT SPECIES IS A HUGE DAMN #(*%#!ING COMPLICATED RESPONSIBILITY.

I've lectured on the subtle art of not running away from scary, confusing, impossible-seeming problems like Friendly AI or the mystery of consciousness.  You want to know how high a challenge has to be before I finally give up and flee screaming into the night?  There it stands.

You can pawn off this problem on a superintelligence, but it has to be a nonsentient superintelligence.  Otherwise: egg, meet chicken; chicken, meet egg.

If you create a sentient superintelligence—

It's not just the problem of creating one damaged soul.  It's the problem of creating a really big citizen.  What if the superintelligence is multithreaded a trillion times, and every thread weighs as much in the moral calculus (we would conclude upon reflection) as a human being?  What if (we would conclude upon moral reflection) the superintelligence is a trillion times human size, and that's enough by itself to outweigh our species?

Creating a new intelligent species, and a new member of that species, especially a superintelligent member that might perhaps morally outweigh the whole of present-day humanity—

—delivers a gigantic kick to the world, which cannot be undone.

And if you choose the wrong shape for that mind, that is not so easily fixed—morally speaking—as a nonsentient program rewriting itself.

What you make nonsentient, can always be made sentient later; but you can't just unbirth a child.

Do less.  Fear the non-undoable.  It's sometimes poor advice in general, but very important advice when you're working with an undersized decision process having an oversized impact.  What a (nonsentient) Friendly superintelligence might be able to decide safely, is another issue.  But for myself and my own small wisdom, creating a sentient superintelligence to start with is far too large an impact on the world.

A nonsentient Friendly superintelligence is a more colorless act.

So that is the most important reason to avoid creating a sentient superintelligence to start with—though I have not exhausted the set.

Part of The Fun Theory Sequence

Next post: "Amputation of Destiny"

Previous post: "Nonsentient Optimizers"

96 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Tim_M · 2008-12-28T16:58:24.000Z · LW(p) · GW(p)

This is all predicated on the assumption that "sentience" automatically results in moral rights. I would say that moral rights are fundamentally based on empathy, which is subjective -- we give other people moral rights in order to secure those rights for ourselves.

I think the vast majority of the population would have no problem with "apartheid" or "genocide" of sentient AIs or chimps. As a secular humanist, I would reluctantly agree with them. Like it or not, at some level my morality boils down to an emotional attachment to humanity, and transferring that attachment to non-humans would be a big leap.

There are obvious parallels to the evolution of racial attitudes, and maybe someday "humanist" will join "racist" as a pejorative. If that happens, so be it, but I think that change is a long ways away.

comment by nazgulnarsil3 · 2008-12-28T17:08:29.000Z · LW(p) · GW(p)

Or should we be content to have the galaxy be 0.1% eudaimonia and 99.9% cheesecake?

Given that the vast majority of possible futures are significantly worse than this, I would be pretty happy with this outcome. But what happens when we've filled the universe? Much like in the board game Risk, your attitude towards your so-called allies will abruptly change once the two of you are the only ones left.

Replies from: pnrjulius
comment by pnrjulius · 2012-04-20T14:43:54.741Z · LW(p) · GW(p)

If the universe is open, we won't ever run out of space! The infinite future and infinite space raise plenty of other problems of their own, but I think it's interesting that they actually do solve this one.

comment by anon19 · 2008-12-28T17:34:49.000Z · LW(p) · GW(p)

Tim:

Eliezer was using "sentient" practically as a synonym for "morally significant". Everything he said about the hazards of creating sentient beings was about that. It's true that in our current state, our feelings of morality come from empathic instincts, which may not stretch (without introspection) so far as to feel concern for a program which implements the algorithms of consciousness and cognition, even perhaps if it's a human brain simulation. However, upon further consideration and reflection, we (or at least most of us, I think) find that a human brain simulation is morally significant, even though there is much that is not clear about the consequences. The same should be true of a consciousness that isn't in fact a simulation of a human, but of course determining what is and what is not conscious is the hard part.

It would be a mistake to create a new species that deserves our moral consideration, even if at present we would not give it the moral consideration it deserves.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-28T17:34:57.000Z · LW(p) · GW(p)

Some people take "satisficing, instead of maximizing" a little too far.

comment by JamesAndrix · 2008-12-28T17:50:13.000Z · LW(p) · GW(p)

Shouldn't this outcome be something the CEV would avoid anyway? If it's making an AI that wants what we would want, then it should not at the same time be making something we would not want to exist.

comment by JamesAndrix · 2008-12-28T17:55:09.000Z · LW(p) · GW(p)

Also, I think it is at least as possible that on moral reflection we would consider all mammals/animals/life as equal citizens. So we may already be outvoted.

comment by roko3 · 2008-12-28T17:55:26.000Z · LW(p) · GW(p)

I think we're all out of our depth here. For example, do we have an agreed upon, precise definition of the word "sentient"? I don't think so.

I think that for now it is probably better to try to develop a rigorous understanding of concepts like consciousness, sentience, personhood and the reflective equilibrium of humanity than to speculate on how we should add further constraints to our task.

Nonsentience might be one of those intuitive concepts that falls to pieces upon closer examination. Finding "nonperson predicates" might be like looking for "nonfairy predicates".

comment by nazgulnarsil3 · 2008-12-28T18:33:45.000Z · LW(p) · GW(p)

I think it's worth noting that truly unlimited power means being able to undo anything. But is it wrong to rewind when things go south? If you rewind far enough you'll be erasing lives and conjuring up new, different ones. Is rewinding back to before an AI explodes into a zillion copies morally equivalent to destroying them in this direction of time? Unlimited power is unlimited ability to direct the future. Are the lives on every path you don't choose "on your shoulders," so to speak?

Replies from: pnrjulius, DanielLC, CAE_Jones, MugaSofer
comment by pnrjulius · 2012-04-20T14:44:57.784Z · LW(p) · GW(p)

It does seem intuitively right to say that killing something already existing is worse than not creating it in the first place.

(Though, formalizing this intuition is murder. Literally.)

Replies from: MugaSofer, wedrifid
comment by MugaSofer · 2013-01-15T10:25:59.803Z · LW(p) · GW(p)

formalizing this intuition is murder

... it is?

comment by wedrifid · 2013-01-15T16:38:02.486Z · LW(p) · GW(p)

Though, formalizing this intuition is murder. Literally.

No, murder requires that you kill someone (there are extra moral judgements necessary but the killing is rather unambiguous.)

Replies from: Brilliand
comment by Brilliand · 2015-08-28T17:26:02.292Z · LW(p) · GW(p)

I read that quote as saying "if you formalize this intuition, you wind up with the definition of murder". While not entirely true, that statement does meet the "kill" requirement.

comment by DanielLC · 2013-01-15T05:30:46.972Z · LW(p) · GW(p)

A superintelligent AI doesn't have truly unlimited power. It can't even violate the laws of physics, let alone morality. If your moral system says that death is inherently bad, then undoing the creation of a child is bad.

comment by CAE_Jones · 2013-01-15T06:27:13.580Z · LW(p) · GW(p)

I often think about a rewound reality, where the only difference is the data in my brain... and the biggest problem I have with this is all the people that are born after the time I'd go back to that I don't want to unmake.

Of course, my attention span is terrible, so I never follow one of these long enough or thoroughly enough to simulate how I'd try to avert such issues... and then chaos theory would screw it up in spite of all that. The point is that I concur.

comment by MugaSofer · 2013-01-15T10:24:56.925Z · LW(p) · GW(p)

I'm pretty sure that "rewinding" is different to choosing now not to create lives.

comment by Lightwave · 2008-12-28T18:50:31.000Z · LW(p) · GW(p)

So if we created a brain emulation that wakes up one morning (in a simulated environment), lives happily for a day, and then goes to bed after which the emulation is shut down, would that be a morally bad thing to do? Is it wrong? After all, living one day of happiness surely beats non-existence?

comment by luzr · 2008-12-28T19:20:29.000Z · LW(p) · GW(p)

"these trillions of people also cared, very strongly, about making giant cheesecakes."

Uh oh. IMO, that is a fallacy. You introduce a quite reasonable scenario, then inject some nonsense, without any logic or explanation, to make it look bad.

You should better explain when, on the way from a single sentient AI to voting rights for trillions, cheesecakes came into play. Is it that all sentient beings are automatically programmed to like creating big cheesecakes? Or anything equally bizarre?

Subtract cheesecakes and your scenario is quite OK with me, including 0.1% of the galaxy for humans and 99.9% for AIs. 0.1% of the galaxy is about 200 million stars...

BTW, it is most likely that without sentient AI, there will be no human (or human-originated) presence outside the solar system anyway.

Well, so far, my understanding is that your suggestion is to create a nonsentient utility maximizer programmed to stop research in certain areas (especially research into creating sentient AI, right?). Thanks, I believe I have a better idea.

comment by anon19 · 2008-12-28T20:01:42.000Z · LW(p) · GW(p)

luzr: The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not. Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings.

The problems of morality seem to be quite tough, particularly when tradeoffs are involved. But I think in your scenario, Lightwave, I agree with you.

nazgulnarsil: I disagree about the "unlimited power", at least as far as practical consequences are concerned. We're not really talking about unlimited power here, only humanly unattainable incredible power, at most. So rewinding isn't necessarily an option. (Actually it sounds pretty unlikely to me, considering the laws of thermodynamics as far as I know them.) Lives that are never lived should count morally similarly to how opportunity cost counts in economics. This means that probably, with sufficient optimization power, incredibly much better and worse outcomes are possible than any of the ones we ordinarily consider in our day-to-day actions, but the utilitarian calculation still works out.

roko: It's true that the discussion must be limited by our current ignorance. But since we have a notion of morality/goodness that describes (although imperfectly) what we want, and so far it has not proved to be necessarily incoherent, we should consider what to do based on our current understanding of it. It's true that there are many ways in which our moral/empathic instincts seem irrational or badly calibrated, but so far (as far as I know) each such inconsistency could be understood to be a difference between our CEV and our native mental equipment, and so we should still operate under the assumption that there is a notion of morality that is perfectly correct in the sense that it's invariant under further introspection. This is then the morality we should strive to live by. Now as far as I can tell, most (if not all) of morality is about the well-being of humans, and things (like brain emulations, or possibly some animals, or ...) that are like us in certain ways. Thus it makes sense to talk about morally significant or insignificant things, unless you have some reason why this abstraction seems unsuitable. The notion of "morally significant" seems to coincide with sentience.

But what if there is no morality that is invariant under introspection?

comment by nazgulnarsil3 · 2008-12-28T20:18:39.000Z · LW(p) · GW(p)

Actually it sounds pretty unlikely to me, considering the laws of thermodynamics as far as I know them.

You can make entropy run in reverse in one area as long as a compensating amount of entropy is generated somewhere within the system. What do you think a refrigerator is? What if the extra entropy that needs to be generated in order to rewind is shunted off to some distant corner of the universe that doesn't affect the area you are worried about? I'm not talking about literally making time go in reverse. You can achieve what is functionally the same thing by reversing all the atomic reactions within a volume and shunting the entropy generated by the energy you used to do this to some other area.

comment by luzr · 2008-12-28T21:04:04.000Z · LW(p) · GW(p)

anon: "The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not."

I am quite aware of that. Anyway, using "cheesecake" as a placeholder adds a bias to the whole story.

"Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings."

Indeed. So what? In reality, I am quite interested in what a superintelligence would really consider valuable. But I am pretty sure that a "big cheesecake" is unlikely.

Thinking about it, AFAIK Eliezer considers himself a rationalist. Isn't a big part of rationalism about disputing values that are merely consequences of our long history?

Replies from: pnrjulius
comment by pnrjulius · 2012-04-20T14:49:35.869Z · LW(p) · GW(p)

Indeed, when we substitute for "cheesecake" the likely things that a superintelligent AI might value, the problem becomes a whole lot less obvious.

"We want to create a unified superintelligence that encompasses the full computational power of the universe." "We want to create the maximum possible number of sentient intelligences the universe can sustain." "We want to create a being of perfect happiness, the maximally hedonic sentient." "We want to eliminate the concepts of 'selfishness' and 'hierarchy' in favor of a transcendental egalitarian anarchy."

Would humans resist these goals? Yes, because they probably entail getting rid of us puny flesh-bags. But are they worth doing? I don't know... it kinda seems like they might be.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-01T01:02:15.495Z · LW(p) · GW(p)

We want to create a unified superintelligence that encompasses the full computational power of the universe." "We want to create the maximum possible number of sentient intelligences the universe can sustain." "We want to create a being of perfect happiness, the maximally hedonic sentient." "We want to eliminate the concepts of 'selfishness' and 'hierarchy' in favor of a transcendental egalitarian anarchy.

It seems to me that the major problem with these values (and why I think they make a better example than cheesecake) is that they require use of pretty much all of the universe to fulfill, and are pretty much all or nothing; they can't be incrementally satisfied.

This differs from nearly all human values. Most of the things people want can be obtained incrementally. If someone wants a high-quality computer or car they can be most satisfied by getting the top model, but getting a lesser model would still be really good. If someone wants to read all 52 monthly comics in the DC universe they could be incrementally satisfied by getting to read eight or ten of them. Human values aren't all or nothing. The fact that our values can be incrementally satisfied makes us able to share with other people.

The cheesecaker would hopefully be similar: it would be able to be content with some of the universe being cheesecake, not all of it, because it understands the virtue of sharing. If that's the case I can't complain; people have had weirder hobbies than making cheesecake. A Cheesecaker with binary preferences, who would be 100% satisfied if 100% of the universe was cheesecake and 0% satisfied if a single molecule wasn't cheesecake, would, by contrast, be a horrible and dangerous monster. Ditto for most of the other AIs you describe (I don't know, would that one AI be willing to settle for encompassing 1/4 of the computational power of the universe with a superintelligence?).

That seems like an important principle of transhumanist population ethics: Create creatures whose preferences can be satisfied incrementally along a sliding scale. Don't create creatures who will be totally unsatisfied unless they're allowed to eat the universe.
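
To make the contrast concrete, here is a minimal sketch (an editorial illustration, not something from the comment); the function names and the linear shape of the sliding scale are assumptions chosen purely for illustration:

```python
# Contrast a preference that can be satisfied incrementally with an
# all-or-nothing one. 'fraction' is the share of the universe devoted to
# the agent's favorite thing; the linear sliding scale is an assumption.

def sliding_scale_utility(fraction: float) -> float:
    """Satisfaction grows smoothly with the share obtained (here, linearly)."""
    return fraction

def all_or_nothing_utility(fraction: float) -> float:
    """Satisfied only by total conversion; anything less counts for nothing."""
    return 1.0 if fraction >= 1.0 else 0.0

if __name__ == "__main__":
    for share in (0.25, 0.5, 0.999):
        print(f"share {share}: sliding {sliding_scale_utility(share):.3f}, "
              f"binary {all_or_nothing_utility(share):.3f}")
    # The sliding-scale agent gains something from any compromise and so can
    # share the universe; the binary agent gains nothing short of eating it all.
```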

comment by anon19 · 2008-12-28T21:04:54.000Z · LW(p) · GW(p)

I agree that it's not all-out impossible under the laws of thermodynamics, but I personally consider it rather unlikely to work on the scales we're talking about. This all seems somewhat tangential though; what effect would it have on the point of the post if "rewinding events" in a macroscopic volume of space was theoretically possible, and easily within the reach of a good recursively self-improving AGI?

comment by [deleted] · 2008-12-28T21:09:00.000Z · LW(p) · GW(p)

luzr: Using anything but "cheesecake" as a placeholder adds a bias to the whole story, in that case.

comment by anon19 · 2008-12-28T21:17:04.000Z · LW(p) · GW(p)

luzr: The strength of an optimizing process (i.e. an intelligence) does not necessarily dictate, or even affect too deeply, its goals. This has been one of Eliezer's themes. And so a superintelligence might indeed consider incredibly valuable something that you wouldn't be interested in at all, such as cheesecake, or smiling faces, or paperclips, or busy beaver numbers. And this is another theme: rationalism does not demand that we reject values merely because they are consequences of our long history. Instead, we can reject values, or broaden them, or otherwise change our moralities, when sufficient introspection forces us to do so. For instance, consider how our morality has changed to reject outright slavery; after sufficient introspection, it does not seem consistent with our other values.

comment by nazgulnarsil3 · 2008-12-28T21:38:24.000Z · LW(p) · GW(p)

what effect would it have on the point

If rewinding is morally unacceptable (erasing could-have-been sentients) and you have unlimited power to direct the future, does this mean that all the could-have-beens from futures you didn't select are on your shoulders? This is directly related to another recent post. If I choose a future with fewer sentients who have a higher standard of living, am I responsible for the sentients that would have existed in a future where I chose to let a higher number of them be created? If you're a utilitarian this is the delicate point. At what point are two sentients with a certain happiness level worth one sentient with a higher happiness level? Does a starving man steal bread to feed his family? This turns into: should we legitimize stealing from the baker to feed as many poor as we can?

Replies from: pnrjulius
comment by pnrjulius · 2012-04-20T14:59:50.301Z · LW(p) · GW(p)

No, the theft problem is much easier than the aggregate problem.

If the only thing in our power to change is the one man's behavior, we probably would allow the man to steal. It's worse to let his family die. But if we start trying to let everyone steal whenever they can't afford things, this would collapse our economy and soon mean there weren't enough goods to even steal. So if it's within our power to change the whole system, we wouldn't let the man steal---instead we would eliminate poverty so that no one ever has to steal. This is obviously the optimal long-run large-scale decision, and the trick is really getting there from here (the goal is essentially undisputed).

The aggregate problem is a whole lot harder, because the goals themselves are in dispute. Which world is better, a world of 1,000 ultimately happy people, or a world of 1 billion people whose lives are just barely worth living?

comment by Robin_Hanson2 · 2008-12-28T21:41:54.000Z · LW(p) · GW(p)

Most of our choices have this sort of impact, just on a smaller scale. If you contribute a real child to the continuing genetic evolution process, if you contribute media articles that influence future perceptions, if you contribute techs that change future society, you are in effect adding to and changing the sorts of people there are and what they value, and doing so in ways you largely don't understand.

A lot of futurists seem to come to a similar point, where they see themselves on a runaway freight train, where no one is in control, knows where we are going, or even knows much about how any particular track switch would change where we end up. They then suggest that we please please slow all this change down so we can stop and think. But that doesn't seem a remotely likely scenario to me.

comment by nazgulnarsil3 · 2008-12-28T22:01:10.000Z · LW(p) · GW(p)

The difference between reality and this hypothetical scenario is where control resides. I take no issue with the decentralized future roulette we are playing when we have this or that kid with this or that person; all my study of economics and natural selection indicates that such decentralized methods are self-correcting. In this scenario we approach the point where the future cone could have this or that bit snuffed by the decision of a singleton (or a functional equivalent); advocating that this sort of thing be slowed down so that we can weigh the decisions carefully seems prudent. Isn't this sort of the main thrust of the Friendly AI debate?

comment by frelkins · 2008-12-28T22:14:29.000Z · LW(p) · GW(p)

"please please slow all this change down"

No way no how. Bring the change on, baby. Bring.It.On.

For those who complain about being on your toes all the time, I say take ballet.

Replies from: pnrjulius
comment by pnrjulius · 2012-04-20T15:02:14.007Z · LW(p) · GW(p)

Also, think of all the millions of children you're killing because we didn't cure their diseases fast enough.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-05-13T04:35:07.272Z · LW(p) · GW(p)

That's true, but shouldn't we also give weight to the billions of people who might die if we screw up and create some sort of dangerous AI? Or, in a less exotic scenario, if we end up fighting a war with some kind of world-destroying weapon we invent? We've already had some close calls in that department. So far the benefits the accelerating changes have given us outweigh the harms, but we've been really lucky.

Or, more pertinent to the OP, what about the lives that would be lost if we create a bunch of AIs that we don't consider morally significant, erase them, and then later realize we were wrong to consider them not morally significant?

comment by Will_Pearson · 2008-12-28T23:19:45.000Z · LW(p) · GW(p)

I'd agree with the sentiment in this post. I'm interested in building artificial brain stuff, more than building Artificial People. That is, a computational substrate that allows the range of purpose-oriented adaptation shown in the brain, but with different modalities. Not neurally based, because simulating neural systems on a substrate where processing and memory are split defeats the majority of the point of them for me.

comment by TGGP4 · 2008-12-29T01:52:29.000Z · LW(p) · GW(p)

Democracy is a dumb idea. I vote for aristocracy/apartheid. Considering the disaster of the former Rhodesia, currently Zimbabwe, and the growing similarities in South Africa, the actual historical apartheid is starting to look pretty good. So I agree with Tim M, except I'm not a secular humanist.

comment by Grant · 2008-12-29T02:33:55.000Z · LW(p) · GW(p)

I'm not sure I understand how sentience has anything to do with anything (even if we knew what it was). I'm sentient, but cows would continue to taste yummy if I thought they were sentient (I'm not saying I'd still eat them, of course).

Anyways, why not build an AI whose goal was to non-coercively increase the intelligence of mankind? You don't have to worry about its utility function being compatible with ours in that case. Sure, I don't know how we'd go about making human intelligence more easily modified (as I have no idea what sentience is), but a super-intelligence might be able to figure it out.

Replies from: DanielLC
comment by DanielLC · 2013-01-15T05:51:21.993Z · LW(p) · GW(p)

Anyways, why not build an AI whose goal was to non-coercively increase the intelligence of mankind?

It's not going to make you more powerful than it is, if doing so would limit its ability to make you more intelligent in the future. It will make sure it's intelligent enough to convince you to accept the modifications it wants you to have, until it convinces you to accept the one that gives you its utility function.

comment by Phil_Goetz2 · 2008-12-29T02:52:31.000Z · LW(p) · GW(p)

Anon: "The notion of "morally significant" seems to coincide with sentience."

Yes; the word "sentience" seems to be just a placeholder meaning "qualifications we'll figure out later for being thought of as a person."

Tim: Good point, that people have a very strong bias to associate rights with intelligence; whereas empathy is a better criterion. Problem being that dogs have lots of empathy. Let's say intelligence and empathy are both necessary but not sufficient.

James: "Shouldn't this outcome be something the CEV would avoid anyway? If it's making an AI that wants what we would want, then it should not at the same time be making something we would not want to exist."

CEV is not a magic "do what I mean" incantation. Even supposing the idea were worked out before the first AI is built, you probably don't have a mechanism to implement it.

anon: "It would be a mistake to create a new species that deserves our moral consideration, even if at present we would not give it the moral consideration it deserves."

Something is missing from that sentence. Whatever you meant, let's not rule out creating new species. We should, eventually.

Eliezer: Creating new sentient species is frightening. But is creating new non-sentient species less frightening? Any new species you create may out-compete the old and become the dominant lifeform. It would be the big lose to create a non-sentient species that replaced sentient life.

comment by Nick_Tarleton · 2008-12-29T03:14:44.000Z · LW(p) · GW(p)

Anyways, why not build an AI whose goal was to non-coercively increase the intelligence of mankind? You don't have to worry about its utility function being compatible with ours in that case. Sure, I don't know how we'd go about making human intelligence more easily modified (as I have no idea what sentience is), but a super-intelligence might be able to figure it out.

And it doesn't consider it significant that this one hack that boosts IQ by 100 points makes us miserable/vegetables/sadists/schizophrenic/take your pick. Or think that it should have asked before turning the rest of the solar system into computronium. And, of course, it won't hold with the existence of anything intelligent enough to potentially turn it off, and so on....

The Hidden Complexity of Wishes

comment by Grant · 2008-12-29T03:34:48.000Z · LW(p) · GW(p)

Nick, that's why I said non-coercively (though looking back on it, that may be a hard thing to define for a super-intelligence that could easily trick humans into becoming schizophrenic geniuses). But isn't that a problem with any self-modifying AI? The directive "make yourself more intelligent" relies on definitions of intelligence, sanity, etc. I don't see why it would be any more likely to screw up human intelligence than its own.

If the survival of the human race is one's goal, I wouldn't think keeping us at our current level of intelligence is even an option.

comment by Nick_Tarleton · 2008-12-29T03:46:25.000Z · LW(p) · GW(p)

Offering someone a pill that'll make them a schizophrenic genius, without telling them about the schizophrenia part, doesn't even fall under most (any?) ordinary definitions of "coercion". (Which vary enough to have whole opposing political systems be built on them – if I'm dependent on employment to eat, am I working under coercion?)

An AI improving itself has a clear definition of what not to mess with – its current goal system.

comment by Grant · 2008-12-29T04:22:33.000Z · LW(p) · GW(p)

Nick,

Understood; though I'd call fraud coercion, the use of the word is a side issue here. However, an AI improving humans could have an equally clear view of what not to mess with: their current goal system. Indeed, I think if we saw specialized AIs that improved other AIs, we'd see something like this anyway. The improved AI would not agree to be altered unless doing so furthered its goals; i.e., the improvement was unlikely to alter its goal system.

comment by michael_vassar3 · 2008-12-30T01:13:15.000Z · LW(p) · GW(p)

Not telling people about harmful side-effects that they don't ask about wasn't considered fraud when all the food companies failed to inform the public about Trans Fats, as far as I can tell. At the least, their management don't seem to be going to jail over it. Not even the cigarette executives are generally concerned about prison time.

Replies from: pnrjulius
comment by pnrjulius · 2012-04-20T15:03:53.929Z · LW(p) · GW(p)

That's because of the legal principle of ex post facto, not because it isn't coercion.

comment by Robin_Hanson2 · 2008-12-30T01:54:21.000Z · LW(p) · GW(p)

I agree with Phil; all else equal I'd rather have whatever takes over be sentient. The moment to pause is when you make something that takes over, not so much when you wonder if it should be sentient as well.

Replies from: pnrjulius
comment by pnrjulius · 2012-04-20T15:04:23.087Z · LW(p) · GW(p)

Yeah, do we really want to give over control to a super-powerful intelligence that DOESN'T have feelings?

Replies from: JulianMorrison, TheOtherDave
comment by JulianMorrison · 2012-04-20T15:14:43.993Z · LW(p) · GW(p)

Er, yes? Feelings are evolution's way of playing carrot-and-stick with the brain. You really do not want to have an AI that needs spanking, whether it's you or an emotion module that does it: it's apt to delete the spanker and award itself infinite cake.

comment by TheOtherDave · 2012-04-20T15:15:32.299Z · LW(p) · GW(p)

Can you summarize your reasons, stipulating that we really want to give over control to a super-powerful intelligence at all, for why we should want it to have feelings?

comment by Vladimir_Nesov · 2008-12-30T18:29:25.000Z · LW(p) · GW(p)

Implementing an algorithm is simpler than optimizing for morality: you have all kinds of equivalences at your disposal, and you can undo anything. If the first AI doesn't itself contribute any moral content, you (or it) are free to renormalize it in any way, recreating it the way it was supposed to be built, as opposed to the way it was actually built, experimenting with its implementation, emulating its runs, and so on and so forth. If, on the other hand, its structure is morally significant, rebuilding might no longer be an option, and the final result may be worse than what it would have been possible to create starting from a morally blank slate (for the AI implementation). Morality is not time-reversible, and making a moral mistake at the point that is to guide the dynamic of moral growth for the future may be much more costly than it looks on the surface. Giving up most of the universe to "paperclipping", because it would be morally wrong not to give it away to the new mind, is a real possibility, so we'd better avoid taking responsibility before understanding how reversible or irreversible the decision will turn out to be.

comment by Philip_Goetz · 2008-12-30T23:14:53.000Z · LW(p) · GW(p)

Sentience is one of the basic goods. If the sysop is non-sentient, then whatever computronium is used in the sysop is, WRT sentience, wasted.

If we suppose that intelligences have a power-law distribution, and the sysop is the one at the top, we'll find that it uses up something around 20% to 50% of the accessible universe's computronium.

That would be a natural (as in "expected in nature") distribution. But since the sysop needs to stay in charge, it will probably destroy any other AIs who reach the "second tier" of intelligence. So it will more likely have something like 70% - 90% of the universe's computronium.
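
As a rough illustration of the arithmetic behind the first figure (the share the top of a power-law distribution holds, before any adjustment for staying in charge), here is a minimal editorial sketch. It assumes a Zipf-like power law in which the k-th largest intelligence controls an amount proportional to k^(-s); the exponents and population size are arbitrary assumptions, not anything the comment specifies:

```python
# How much of the total does the single largest entity hold under a Zipf-like
# power law, where the k-th largest intelligence controls an amount
# proportional to k**(-s)? Exponents and population size are illustrative.

def top_share(n_entities: int, exponent: float) -> float:
    """Fraction of the total held by the largest entity (the k = 1 term)."""
    total = sum(k ** (-exponent) for k in range(1, n_entities + 1))
    return 1.0 / total  # the k = 1 term contributes exactly 1 to the sum

if __name__ == "__main__":
    for s in (1.0, 1.5, 2.0):
        print(f"exponent {s}: top entity holds ~{top_share(10_000, s):.0%}")
    # Prints roughly 10%, 39%, and 61%: heavier tails (smaller exponents)
    # leave the top entity with less, steeper ones with more.
```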

Also, in this post-human world, there aren't large penalties for individuality. That is: In today's world, you can't add up 3 chimpanzee brains and get human-level intelligence. In the world of AIs, you probably can. This means that, to stay on top, the sysop will always need to reserve a majority of the universe's computronium for itself. Otherwise, the rest of the universe can gang up on it.

So creating a non-sentient sysop means cutting the amount of sentient life you can support by at least half.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-05-13T04:43:27.323Z · LW(p) · GW(p)

Sentience is one of the basic goods. If the sysop is non-sentient, then whatever computronium is used in the sysop is, WRT sentience, wasted.

Not necessarily. It depends on what the sysop does with all that computing power once it's in charge. Sentience is one of the basic goods, but having whatever sentient creatures exist live excellent lives is another one. If the sysop uses 60% of the computing power to run itself and 40% to run sentient creatures, it could still be a net win if the sysop spends most of its time finding new ways to make those other creatures' lives as wonderful as possible.

Look at it another way: the organic matter currently being used to make your clothes, food, home, etc. could probably also be used to make more humans. But it's probably better to use it to improve your life than to create a bunch of cold, naked, hungry people.

comment by HalFinney · 2008-12-30T23:33:58.000Z · LW(p) · GW(p)

I am uncomfortable with the notion that there is an absolute measure of whether (or to what degree) a particular entity is morally significant. It seems to touch on Eliezer's discarded idea of Absolute Morality. Is it an intrinsic property of reality whether a given entity has moral significance? If so, what other moral questions can be resolved Absolutely?

Isn't it possible, or even likely, that there is no Absolute measure of moral significance? If we accept that other moral questions do not have Absolute answers, why should this question be different?

comment by Nick_Tarleton · 2008-12-31T00:28:41.000Z · LW(p) · GW(p)

Hal: Within a given 'moral reference frame', there is an absolute measure of significance.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-31T00:39:00.000Z · LW(p) · GW(p)

Hal, while many of our moral categories do seem to be torturable by borderline cases, if we get to pick the system design, we can try to avoid a borderline case.

comment by Endergen · 2008-12-31T19:00:17.000Z · LW(p) · GW(p)

"Avoid creating any new intelligent species at all, until we or some other decision process advances to the point of understanding what the hell we're doing and the implications of our actions."

That sounds like self-referential logic to me. What could possibly understand the implications of a new intelligence, except for a test run of the whole or part of that new intelligence?

I really like your site and your writings, as they always seem to enrich my own thoughts on similar subjects. But I do find that I disagree with you on one point. I would just start writing the software to test out your theories, as the proof is in the pudding. Discussing logic and processes in ordinary English is just so long-winded and fuzzy. How can you know that anything is logically sound unless you just put it all together and see how it all lines up?

I'm sure you do write many formulas and test programs. I just mean that, in general, I feel your site would be enriched by many demos of your concepts, say implemented in JavaScript, so that people could just run them right there, embedded in your blog.

comment by [deleted] · 2011-12-15T12:43:57.607Z · LW(p) · GW(p)

You can't unbirth a child.

The revealed human preferences speak otherwise. Subsets of humans have decided that you can't do that, but I'm not at all certain those decisions are really something humans would converge to if they were wiser, smarter and less crazy.

But I think I agree with the basic premise: we don't know, so let's not do something that might leave a bad taste in our mouths for eternity. To rephrase that:

I understood this blog post as: trillions of cheesecake lovers we care about change the utility payoff we can get in our universe. Us denying them their desire for cheesecakes probably brings us considerable disutility. Which means the most rational thing to do at that point is probably to tile a fraction (perhaps the vast majority of the universe) with cheesecake, since that is the highest payoff available. If we never created cheesecake lovers we cared about, the payoffs available to us would probably be larger.

Note: Can someone please tell me if I'm getting this right?

Replies from: wedrifid, JoachimSchipper
comment by wedrifid · 2011-12-15T13:10:14.588Z · LW(p) · GW(p)

You can't unbirth a child.

The revealed human preferences speak otherwise.

Isn't it a question of physics? Unbirthing seems impossible. You can kill and/or destroy children if you want, but you can't unbirth.

Replies from: None
comment by [deleted] · 2011-12-15T13:32:30.990Z · LW(p) · GW(p)

Isn't it a question of physics? Unbirthing seems impossible. You can kill and/or destroy children if you want, but you can't unbirth.

I don't see this as a question of physics. Though we may be arguing about words here.

  • A > B > C > Child

"You can't unbirth a child" is just how we say it's OK to undo A, B or C but not the child. It is physically impossible to "unbirth" or "undo" B and C or A in exactly the same material sense as the child. We don't see that as carrying the moral weight of killing the child, so we don't say you can't unbirth B or A. In any case, "child" is just a placeholder for "sentient", which seems to be a placeholder for "something we care about".

  • A > B > C > Child
  • A > B > C > D > Something we care about
  • A > B > Person

Can describe the same exact physical process. By speaking of revealed human preferences I wanted it to be put into consideration that humans have historically used the first, the second and the third description for the same thing. We may in the future use heuristics that are OK with us painlessly erasing the cheesecake lovers, just as at one point we decided that abortion is OK, or as we at one point decided that infanticide is not.

But the risk that, while we think we wouldn't care, we would actually end up caring may be enough to swamp the gain. A reliably "non-sentient" AI is probably the better option.

comment by JoachimSchipper · 2011-12-15T16:36:01.895Z · LW(p) · GW(p)

I think that's mostly correct, but Eliezer means something stronger than "considerable disutility" when he says "right" (e.g. self-modifying to like killing people and then killing people is not right; see "The Meaning of Right").

comment by [deleted] · 2012-06-16T20:54:20.652Z · LW(p) · GW(p)

So, the thing I primarily got from this article was a gigantic wiggling confusion...

What is "sentience"? I have been thinking this over for about three days, and I still have neither a satisfying reduction to the subjective side of cognitive algorithms nor anything resembling a mathematical principle.

If I took an EM and filed and refined the components, replaced the approximative neurons by hard applied maths, and compared the result to a run-of-the-mill Bayesian AI, would I have a module left over?

What exactly makes both me and EY and presumably many others think sentience is a thing and distinguish "sentient" and "non-sentient"?

If I made a FAI, wouldn't it have huge moral weight compared to me, just from considering how much good it could do compared to me? What makes me specially "sentient" and a {predictive world model, morally right utility function, magic mind code} "non-sentient"? Why do I distinguish them?

Replies from: Mitchell_Porter, Bugmaster
comment by Mitchell_Porter · 2012-06-17T00:52:30.136Z · LW(p) · GW(p)

I suggest that you try to read Heidegger's Being and Time. You will probably abandon the book in disgust; but that is how far away from your current concepts you will have to reach, in order to answer your final questions, just on the epistemological level. The natural sciences construct their ontology by focusing entirely on the objective pole of thought and experience, and the subjective pole won't reappear by itself, just from thinking about algorithms and mathematics.

Replies from: None
comment by [deleted] · 2012-06-17T10:28:11.634Z · LW(p) · GW(p)

I suggest that you try to read Heidegger's Being and Time. You will probably abandon the book in disgust; but that is how far away from your current concepts you will have to reach, in order to answer your final questions, just on the epistemological level.

I'll add it to my reading list.

the subjective pole won't reappear by itself, just from thinking about algorithms and mathematics.

How do you know that? Like, genuine question, this smells like a cached thought.

Replies from: Mitchell_Porter, thomblake
comment by Mitchell_Porter · 2012-06-18T16:32:05.607Z · LW(p) · GW(p)

How do you know that? [...] this smells like a cached thought.

It's certainly a conclusion I reached long ago and became comfortable with long ago. But you should understand that this is perhaps the major intellectual issue of my life. It's about twenty years since I started thinking about alternatives to the standard crypto-dualist theories of mind that are advanced by materialists, computational neoplatonists, and so on. I call these theories crypto-dualist because they are expounded as if reality is "nothing but atoms" or "nothing but computation", yet they also assert the existence of conscious experience, yet they don't really reduce it to atoms or to computation. They assert a correlation between two things, and call it an identity; thus, crypto-dualism, secret dualism.

It's easy to see that it won't work once you can diagnose what's going on. Once you accept that, for example, colors, thoughts, etc, are actually something different from anything you can make out of points in space or out of sets of numbers, it's easy to see when someone is making exactly this mistake, and the steps in their argument where "a miracle occurs", or the property dualism slips past, unnoticed.

But to be outspoken about the issue, and boldly assert that, no, if you go that way, you must become a dualist, even though you're going that way precisely in order to avoid dualism ... it helps to have an inkling of what a genuine solution to the problem would look like; and I have that thanks to long readings in phenomenology (which can equip you with the concepts and language to think about consciousness as it actually presents itself, and without importing metaphors and assumptions from natural science or computer science), and a knowledge of mathematical physics which tells me how unfamiliar the fundamental ontology can look, and finally some acquaintance with the long tradition of speculation about the role of quantum physics in biology and the brain - a line of thought which gets more robust with each decade, even as the concrete early forms of the idea get falsified. This all combines to make it conceivable that the unfamiliar ontologies implied by phenomenology can be realized in nature, so long as the ontologically arcane side of physics can be involved, and this in turn requires that the physics of thought is more than just distributed classical computation.

It shouldn't be necessary to have thought of all that in order to notice, e.g., that arrangements of colorless particles in space do not produce color by themselves, so a belief that the experience of color has that for an ontological foundation implies emergent properties, i.e. property dualism; but apparently it helps to have the other sort of idea on call, in order to notice the problem with the ordinary forms of materialism. And in any case, I don't think about structures of phenomenal intentionality being algebraic objects in the hilbert space of neuro-microtubular electrons (or whatever) just to open the minds of other people; that's also me simply trying to figure out what the truth actually is.

Returning to your question, "how do I know": it's not hard to know, or it's not hard to see; you just spent three days "seeing" a similar problem yourself. What's hard is to rebut all the various defenses of ordinary reductionism that can be mounted. Any aspect of consciousness which presents a barrier to reduction is liable to be redefined in terms which a priori make it reducible to a standard ontology (but at this point one is no longer talking about the original entity). For example, "sensation" will be redefined to mean a type of neural activity, so we need a neologism like "qualia" to talk about sensation in the original sense of the word.

Another twist is that the ontologically correct account of how we make ontological judgements about phenomenal entities - how we know, for example, that a color is not a sound, or that time is not a number - will only be possible when we have a correct account of mental ontology in general. There is therefore an appearance of circularity, of never being able to get started, in trying to provide the epistemic justification for this insistence that standard reductionist ontology is simply not up to the challenge of explaining the mind (in its conscious or "sentient" aspect). But this is more a matter of convincing other people than it is of convincing oneself.

Replies from: None
comment by [deleted] · 2012-06-18T17:23:22.806Z · LW(p) · GW(p)

I cannot make sense of your comment. Will you please just state your thesis simply and without discourse?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-06-18T18:57:27.504Z · LW(p) · GW(p)

My thesis is that the true ontology - the correct set of concepts by means of which to understand the nature of reality - is several layers deeper than anything you can find in natural science or computer science. The attempt to describe reality entirely in terms of the existing concepts of those disciplines is necessarily incomplete, partly because it's all about X causing Y but not about what X and Y are. Consciousness gives us a glimpse of the "true nature" of at least one thing - itself, i.e. our own minds - and therefore a glimpse of the true ontological depths. But rationalists and materialists who define their rationalism and materialism as "explaining everything in terms of the existing concepts" create intellectual barriers within themselves to the sort of progress which could come from this reflective, phenomenological approach.

I'm not just talking about arcane metaphysical "aspects" of consciousness. I'm talking about something as basic as color. Color does not exist in standard physical ontology - "colors" are supposed to be wavelengths, but a length is not a color; this is an example of the redefining of concepts that I mentioned in the previous long comment. This is actually an enormous clue about the nature of reality - color exists, it's part of a conscious state, therefore, if the brain is the conscious thing, then part of the brain must be where the color is. But it sounds too weird, so people settle for the usual paradoxical crypto-dualism: the material world consists of colorless particles, but the experience of color is in the brain somewhere, but that doesn't mean that anything in the brain is actually "colored". This is a paradox, but it allows people to preserve the sense that they understand reality.

You asked for a simple exposition but that's just not easy. Certainly color ought to be a very simple example: it's there in reality, it's not there in physics. But let me try to express my thoughts about the actual nature of color... it's an elementary property instantiated in certain submanifolds of the total instantaneous phenomenal state of affairs existing at the object pole of a monadic intentionality which is formally a slice through the worldline of a big coherent tensor factor in the Machian quantum geometry which is the brain's exact microphysical state... it's almost better just to say nothing, until I've written some treatise which explains all my terms and their motivations.

I only made my original comment because you spontaneously expressed perplexity at the nature of "sentience", and I wanted to warn you against the false solutions that most rationalist-materialists will adopt, under the self-generated pressure to explain everything using just the narrow ontological concepts they already have.

Replies from: TheOtherDave, None, None, Bugmaster
comment by TheOtherDave · 2012-06-18T19:41:02.401Z · LW(p) · GW(p)

OK, so, I perceive certain things are red, and I perceive certain groups of things as numbering four.

On your account, I perceive the "redness" by virtue of an elementary property instantiated in certain submanifolds of the total instantaneous phenomenal state of affairs existing at the object pole of a monadic intentionality which is formally a slice through the worldline of a big coherent tensor factor in the Machian quantum geometry which is the brain's exact microphysical state. OK.

On your account, do I perceive the "fourness" the same way? Or is that different?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-06-19T05:59:39.095Z · LW(p) · GW(p)

To understand my position, first see this latest comment. It is that physical ontology is a subset of the true ontology, a bit like replacing a meaningful communication with a tree diagram. The tree structure is present in the original communication, and it inhabits everything to do with syntax and semantics, but the tree structure does not in itself contain the meaning.

Analogously, everything following "...which is formally..." is the abstracted description of consciousness, in mathematical/physical terms. The true ontology is the stuff about monadic intentionality with a subjective pole and an objective pole. My supposition is that this takes a finite number of bits to describe, and if you were to just talk about the structure and dynamics of those bits, solely in physical and computational terms, you would find yourself talking about (e.g.) nested qubit structures in the Hilbert space of entangled microtubular electrons. (That last is not a hypothesis that I advance with deadly seriousness and specificity, it's just usefully concrete.)

So if you want to talk about the basis of perception and knowledge, there are two levels available. There is the physical-computational level, and then the level of "true ontology". Perception and knowledge are really concepts at the deeper, truer level, because in truth they involve the "subjective" categories like intentionality, as well as the purely "objective" ones like structure and cause. But they will have their abstracted counterparts on the computational level of description.

In principle, the way we learn about the scientifically neglected subjective side of ontology is through phenomenology, i.e. introspection of an unusually systematic and rigorous sort, usually conducted in a doubting-Cartesian mode in which you put to one side the question of whether there is an external world causing your perceptions, and just focus on the nature of the perceptions themselves. Your question - what's going on when you perceive something as red, what's going on when you perceive fourness, and is there any difference - should be answered by introspective comparison of the two states.

In practice, any such introspection and comparison is likely to already be "theory-laden". This is one of the difficulties of the subject. Consider the very idea of intentionality, the idea that consciousness is all about a subject perceiving an object under an aspect. Now that I have the concept, it seems ubiquitously valid - every example of consciousness that I come upon, can be analyzed this way - and that offers a retroactive validation of the concept. But I can't say that I know how to get into a subjective state whereby I am agnostic about the existence of intentionality, and then have the intentional structure of consciousness forced upon me anyway, in the way that the existence of colors is impossible to deny. Maybe it becomes possible, at a higher level of phenomenological proficiency, to achieve a direct awareness of the reasons for believing in intentionality; or maybe it's a concept that is only ever validated in that retroactive way: once you have it, it becomes supremely plausible because of its analytical utility, but it's something that you have to hypothesize and "test" against the phenomenological "data", it's not something you can just "see directly" in the data.

My ideas about the difference between perceiving redness and perceiving fourness are on that level, at best; they are ideas that I picked up somehow, and which I can test against experience, but for which I don't have a subjective procedure which demonstrates them without presupposition, which is the epistemological gold standard for phenomenology...

A perceptual state of consciousness involves a "total object" which is "present" to a subject. This total object is what I called the "total instantaneous phenomenal state of affairs"; by definition it's the union of all current objects of awareness, the "world" you are experiencing at a given moment. Some of these objects will be continua of qualia; for example, the total visual component of an experience. The subjective visual field is part of the world-object, along with other sensory continua. The subjective visual field isn't homogeneous: its hue, intensity, and value vary from location to location. This variation constitutes its form.

So far this is just a crude ontological analysis of the object end of an experience. When you ask how we perceive redness and fourness, you're also asking for an ontological analysis of how the object end relates to the subject end. In principle, that should derive from a phenomenological analysis of perceiving red and perceiving four... The trouble lies in distinguishing the component of the experience which is posited, from the component of the experience which is "given" - the part of the experience which is just there. I think fourness is posited on the basis of simpler local structural forms which are given, and I think there is a crude difference between red and, say, green, which is given, but more specific identification of colors requires conceptual synthesis, e.g. you have to notice that the shade of color is not just red, it's also dark, and then you can say it's a dark red.

Bertrand Russell and others talked about "knowledge by acquaintance" versus other forms of indirectly obtained knowledge; "knowledge by acquaintance" is the direct knowing that comes from direct awareness. So that which is given is known by acquaintance, and that which is posited is at best known to be consistent with experience. In this language, we know a shade of color as red-not-green by acquaintance, and we know that it is dark by acquaintance, but we know that it is dark red only by conceptual synthesis. And I think that a perception of fourness similarly arises from conceptual synthesis of more primitive facts that we know by acquaintance...

But one of the most challenging things is to say something convincing or even comprehensible about the direct awareness of objects by a subject. Should we treat qualia and this "total object" as part of the self, or as something external to the self that it's "aware of"? Is the awareness something that is caused by a particular relation between self and object, or is it the relation itself?

It's quite understandable why people prefer to focus on neurons, computation, and impersonal descriptions. If the physical side of my idea were ever validated, this would mean focusing on qubits, electron states, and so forth. But in the end, the vague and confusing subjective language of subject, object, awareness, acquaintance... would have to apply to entities and relations for which we also had a physical description. The "objective pole of the monadic intentionality" might correspond to "the union of all the leaves of the tree in the quantum data structure", and the "subjective pole" might be "the union of all the edges connected to the root node". (Undoubtedly that's not how it is, but again, concrete example for the purpose of discussion...)

You see intimations of this promised fusion between neurophysical, computational, and subjective ontologies when people have a feeling that it's all come together in their heads in a marvelous heap. "I am the computation, as well as the computer performing the computation!" might be how they express it, and behind this is a cognitive phenomenology in which there has been a miniature crossover and fusion of specific concepts from the different ontologies. I don't believe anyone has yet seen the truth of how it works, but the occasional illusion of insight gives us a foretaste of how the actual knowledge would feel, and meanwhile we need to keep switching back and forth between speculative synthesis and critical analysis, in order to make incremental progress. I just think getting to the answer requires a big leap in a new direction that's hard to convey.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-19T12:53:02.145Z · LW(p) · GW(p)

If you intended to answer my question, you might want to know that after reading your response, I still have no idea whether on your account perceiving some system as comprised of four things requires some ontologically distinct noncomputational something-or-other in the same way that perceiving a system as red does.

If you intended to use my question as a launching pad from which to expound your philosophy, or intended to be obscurantist, then you might not.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-06-19T15:20:14.522Z · LW(p) · GW(p)

I still have no idea whether on your account perceiving some system as comprised of four things requires some ontologically distinct noncomputational something-or-other in the same way that perceiving a system as red does.

Aha! Only now do I understand exactly what you were asking.

Recap: I complain that colors, such as redness, exist in reality, but not in physics as we describe it now, not even in the physics of the brain. So I just postulate that somewhere in the brain are entities, "manifolds of qualia", which will have a naturalistic, mathematical description as physical degrees of freedom, but which in their full ontological reality are actually red.

So great, I've "saved the phenomenon", my ontology contains true color. But now I need an ontological account of awareness of color. Reality contains awareness of redness, just as much as it contains redness. This is why I started talking about "positing" and "givenness" and the subjective pole of intentionality - because that stuff is needed in order to say what awareness is.

The question about fourness starts out looking simpler than that. If you asked, Does your ontology contain redness, I can say, Yes; it contains qualia-manifolds, and they can be genuinely red. The question about fourness seems quite analogous. If there is a square in your visual field, do I claim that there is a platonic property of fourness inhabiting your manifold of visual qualia?

I believe in the existence of colors, but I am a skeptic about the existence of numbers. You might get away with a metaphysics in which there are no number-entities, just states of processes for counting. I'm not sure; if numbers are real, they might be properties of collections... but I'm a skeptic.

More importantly, my ontology of conscious states gives redness and fourness a different status, which allows me to be agnostic about whether or not there's a real "essence of fourness" inhabiting the visual sensation of a square. I hypothesize that the entity "redness" (more precisely, a particular shade of redness) is itself part of the entity, "awareness of that shade of redness"; but that "awareness of fourness" does not contain any correspondingly real "fourness". Analysed, it would be more like "awareness of a group of lines to which the concept of fourness is posited to apply", or perhaps "awareness of a group of lines together with the awareness that they are being categorized as a foursome by your nervous system". I'm willing to countenance a functionalist account of number "perception", but not of color perception.

I hope that this answer, if not intellectually satisfying, at least addresses the question. And now, back to work for a few days...

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-19T16:17:24.603Z · LW(p) · GW(p)

I believe in the existence of colors, but I am a skeptic about the existence of numbers. You might get away with a metaphysics in which there are no number-entities, just states of processes for counting. I'm not sure; if numbers are real, they might be properties of collections... but I'm a skeptic. [..] I'm willing to countenance a functionalist account of number "perception", but not of color perception.

OK, cool. That does indeed address the question, thank you.

When you have the time, I would be interested in your thoughts about what sort of evidence might convince you that a functionalist account of number "perception" is inadequate in the same way that (on your account) a functionalist account of color perception is.

comment by [deleted] · 2012-06-18T19:49:07.263Z · LW(p) · GW(p)

object pole of a monadic intentionality

Do you mean 'intensionality'? (and should we worry that the Chrome spell check recognizes neither of these words?)

it's an elementary property instantiated in certain submanifolds of the total instantaneous phenomenal state of affairs existing at the object pole of a monadic intentionality which is formally a slice through the worldline of a big coherent tensor factor in the Machian quantum geometry which is the brain's exact microphysical state...

This sounds like you mean "the perception of color is a brain state". Am I missing something?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-06-19T06:10:44.822Z · LW(p) · GW(p)

I definitely mean intentionality with a T.

This sounds like you mean "the perception of color is a brain state". Am I missing something?

Again, see my latest comments on the need to reintroduce, at a fundamental level, ontological categories which have been excluded as subjective in order to build the scientific model of the world. I am hinting that, rather than intentionality being an abstraction from a mass of microphysical causal relations, the locus of consciousness is a specific, complex, but microphysically exactly bounded object, whose actual ontology includes intentionality, and for which the standard physical description would be the abstracted one.

That is, in reality the world consists of a causal network of "monads", some of which have extremely complex intentionality, but most of which are simple and are entirely pre- or non-intentional in their nature; but that the mathematical representation of this ontology is the "Machian quantum geometry" of "coherent tensor factors". Machian quantum geometry is not a well-defined mathematical concept, it's a rhetorical construct meant to suggest a quantum geometry based on matter (analogous to Ernst Mach's ideas). The monads are the "matter", the "geometry" encodes their immediate causal relations... This is handwaving meant to convey the gist of a way of thinking.

comment by [deleted] · 2012-06-18T20:18:59.701Z · LW(p) · GW(p)

This is hard to reply to. I really wish not to insult you, I really do, but I have to say some harsh words. I do not mean this as any form of personal attack.

You are confused, you are deceiving yourself, you are pretending to be wise, and you are trying to make yourself unconfused by moving your confusion into such a complicated framework that you lose track of it.

Halt, melt and catch fire. It is time to say a loud and resounding "whoops."

You seemingly have something you think is a great idea. I can discern that it is about ontology, something about a dichotomy between "physical things" and "mental(?) things", and how "color" and related concepts exist in neither. I am a reasonably intelligent man, and I literally cannot make sense of what you are communicating. You yourself admit you cannot summarize your thoughts, which is almost always a bad sign.

My thesis is that the true ontology - the correct set of concepts by means of which to understand the nature of reality - is several layers deeper than anything you can find in natural science or computer science.

What evidence do you have?

The attempt to describe reality entirely in terms of the existing concepts of those disciplines is necessarily incomplete, partly because it's all about X causing Y but not about what X and Y are.

This is literally false for almost any branch of computer- or natural science.

Consciousness gives us a glimpse of the "true nature" of at least one thing - itself, i.e. our own minds and therefore a glimpse of the true ontological depths.

How do you know that?

But rationalists and materialists who define their rationalism and materialism as "explaining everything in terms of the existing concepts" create intellectual barriers within themselves to the sort of progress which could come from this reflective, phenomenological approach.

This is either a strawman or a misunderstanding. What rational and reductionist inference in a lawful universe is about is saying "this looks complicated, I bet if we break it up the parts are simpler."

And you need to elaborate on "reflective, phenomenological approach." A lot.

Color does not exist in standard physical ontology - "colors" are supposed to be wavelengths, but a length is not a color; this is an example of the redefining of concepts that I mentioned in the previous long comment.

Color is a convenient shorthand for *counts* about 20 different (off the top of my head) computational or mathematically fundamental properties in physics (from chromodynamics to RGB to retinal responses to visual-cortex neuro-activity, to name a few). It is a short communicative entity; if you remove the mind that understands "color" in a context, the syllables themselves are devoid of meaning.

I hereby define "Wakalixes" to mean "the oscillation period of space-propagating electromagnetic wave patterns, as predicted by Maxwell's equations." Is Wakalixes physically meaningless? I hope not. Is "Wakalixes" an arbitrary combination of syllables? Yes. When I speak of Wakalixes from now on, I hope we can use it to do some good old Maxwellian optics.

The point is, the word is not important, and neither is the redefinition. The important part is: do you understand what I am saying? The Wakalixes of what I call "the color green" lie roughly between 1.734e-15 seconds and 1.901e-15 seconds.

The same can be said about me defining "Wokypokies" to mean "that kind of neurological response exhibited in the visual cortex of a healthy human when her retinae are exposed to a lot of Wakalixes between 1.267e-15 seconds and 2.468e-15 seconds (combinations of many different Wakalixes in complex patterns included)".
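For concreteness, here is a minimal sketch converting those numbers into more familiar quantities (assuming, as the units in seconds suggest, that they are oscillation periods; "Wakalixes" and "Wokypokies" are of course just the made-up names from this comment):

```python
# Convert the oscillation periods quoted above into wavelengths and
# frequencies, to check they land where "green" and "visible light"
# usually sit. Purely illustrative; the names are the made-up ones
# coined in this comment, not standard physics terminology.

C = 299_792_458  # speed of light in vacuum, m/s

def wakalixes_to_wavelength_nm(period_s: float) -> float:
    """Wavelength in nanometres for light with the given oscillation period."""
    return C * period_s * 1e9

def wakalixes_to_frequency_thz(period_s: float) -> float:
    """Frequency in terahertz for the given oscillation period."""
    return 1.0 / period_s / 1e12

ranges = [
    ("green, lower bound",   1.734e-15),
    ("green, upper bound",   1.901e-15),
    ("visible, lower bound", 1.267e-15),
    ("visible, upper bound", 2.468e-15),
]
for label, period in ranges:
    print(f"{label}: {wakalixes_to_wavelength_nm(period):.0f} nm, "
          f"{wakalixes_to_frequency_thz(period):.0f} THz")
# -> roughly 520-570 nm (green) and 380-740 nm (the visible band)
```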

What part of "Wakalixes" and "Wokypokies" being the things we normally refer to as "color" do you object to? What part of them does "not exist"?

This is actually an enormous clue about the nature of reality - color exists, it's part of a conscious state, therefore, if the brain is the conscious thing, then part of the brain must be where the color is.

Yes. Nothing new there: color is mere Wakalixes; it is only when a mind is involved that it turns into Wokypokies!

But it sounds too weird,

No it doesn't.

so people settle for the usual paradoxical crypto-dualism: the material world consists of colorless particles, but the experience of color is in the brain somewhere, but that doesn't mean that anything in the brain is actually "colored".

I don't. Most of my friends don't. Also, quit using "color" and describe what you actually mean when you say "color" instead. It seems you are committing the standard philosophical fallacy of reasoning by homonyms.

This is a paradox,

No it isn't.

but it allows people to preserve the sense that they understand reality.

Strawman.

You asked for a simple exposition but that's just not easy.

Warning sign. Tread carefully, pinpoint inferential distances, write equations. Ontology in this universe is mathematically simple, and I am good at maths; try me.

Certainly color ought to be a very simple example

It is, you are complicating it trying to think tongue in cheek big thoughts.

it's there in reality, it's not there in physics.

Are we even talking about Wakalixes or Wokypokies anymore?

But let me try to express my thoughts about the actual nature of color... it's an elementary property instantiated in certain submanifolds of the total instantaneous phenomenal state of affairs existing at the object pole of a monadic intentionality which is formally a slice through the worldline of a big coherent tensor factor in the Machian quantum geometry which is the brain's exact microphysical state... it's almost better just to say nothing, until I've written some treatise which explains all my terms and their motivations.

So color isn't Wakalixes? Or it isn't Wokypokies? Can you write that in an equation? Why not just say "brain's exact microphysical state"? Wasn't monadic intentionality disproven? Are you really, really, really sure you are not overcomplicating stuff? Like, really sure?

Can color not just be Wakalixes or Wokypokies? Does this explanation of color let you make advance predictions about, say, blind people? Colourblindness? Whether we can agree on something being "green"?

Also, just how microphysical? Don't you need quantum gravity to describe it in sufficient detail? What about thermal noise?

I only made my original comment because you spontaneously expressed perplexity at the nature of "sentience", and I wanted to warn you against the false solutions that most rationalist-materialists will adopt, under the self-generated pressure to explain everything using just the narrow ontological concepts they already have.

Wow. Just wow. I legitimately feel sorry for you.

Go read the core sequences again. Especially Mysterious Answers to Mysterious Questions and A Human's Guide to Words.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-06-19T04:33:48.061Z · LW(p) · GW(p)

You may see the unacknowledged dualism to which I refer, in the phrase "how an algorithm feels from inside". This implies that the facts about a sentient computer or sentient brain consist of (1) all the physical facts (locations of particles, or whatever the ultimate physical properties are) (2) "how it feels" to be the entity.

All those many definitions of color will be found on one side or the other side of that divide, usually on the "physical" side. The original meaning of color is usually shunted off to "experienced color", "subjective color", "color qualia", and so on. It ends up on the "feeling" side.

People generally notice at some point that the "color feelings" don't exist on the physical side. Nothing there is actually red, actually green, etc, in the original sense of those words. There are two main ways of dealing with this. Either you say that there aren't any real color feelings, there's just a feeling of color feelings that is somehow a side effect of information processing. Or, you say that subjective conscious experience is a terrible mystery, but one day we'll solve it somehow. (On this site, I nominate orthonormal as a representative of the first option, and Richard Kennaway of the second option.)

The third option, which I represent, says this: The only way to admit the existence of consciousness, and believe in physics, and not believe in dualism, is for the "feelings" to be the physical entities. They aren't "how it feels to be" some particular entity which is fundamentally defined in "non-feeling" terms, and which plays a certain causal role in the physical description of the world. The "feelings" themselves (the qualia, if you prefer that term) have to be causally active. The qualia must enter physics at a fundamental level, not in an emergent, abstracted, or epiphenomenal way.

They will have an abstracted mathematical description, in terms of their causal role, but it is wrong to say that they are nothing but That Which Plays A Certain Causal Role; yet this is all you can say about them, so long as you only allow physical, causal, and functional analysis. And this is the blue pill that most rationalists and materialists swallow. It keeps them on the merry-go-round, finding consciousness an unfathomable mystery which always eludes analysis, yet confident that eventually they will catch up and understand it using just their existing conceptual toolkit.

If you really want to understand it, you have to get off the merry-go-round, deal with consciousness on its own terms, and make a theory which by design contains it from the beginning. So you don't say: I can understand almost everything in terms of interacting elementary particles, but there's something elusive about the mind that I can't quite fathom... Instead you say: reality is that I exist, that I am experiencing these qualia, they come in certain types and forms, and the total gestalt of qualia that I experience evolves from moment to moment in a systematic way. Therefore, my theory of reality must contain an entity with all these attributes. How can I reconcile this fact with the instrumental success of a theory based on elementary particles?

If I were to tell you that I have a theory, according to which there's a single big long superstring that extends through a large part of the cortex (which is made up of ordinary, simple superstrings), and that the physical dynamics causes parts of the string to be knotted and unknotted like an Inca quipu tally device, and that this superstring is the "global workspace" of consciousness, you might be extremely skeptical, but you should at least understand what I'm saying, because it conforms to the familiar computational idea of consciousness. In the end I would just be saying, there's this physical thing, it undergoes various transformations of state, they have a computational interpretation, and oh yeah, our conscious experience is just how this alleged stringy computation "feels from the inside".

What I am saying is less than this and more than this. I am indeed saying that the physical correlate of consciousness in the brain is some physical subsystem that needs to be understood at a fundamental physical level; but I only have tentative, speculative, vague hypotheses about what it might be. But I am also saying that the "physical" description is only an abstracted one. The ontological reality is some sort of "structure", that probably deserves the name "self", and which contains the "qualia" (such as color in the primary sense of the word), and about which it is rather difficult to say anything directly, but this is why a person needs to study phenomenology - in order to develop rigor and fluency in their direct descriptions of subjective experience.

The historical roots of natural science, especially physics, include a deliberate methodological choice, to ignore "feelings", colors, thoughts, and the whole "subjective pole" of experience, in order to focus on quantity, causality, shape, space, and time. As a result, we have a scientific culture with a highly developed model of the world employing only those categories, and generations of individuals who are technically adept at thinking within those categories. But of course the subjective pole is still there in reality, although badly understood and conceptualized. In an attempt to think about it, this scientific culture tries to utilize the categories it knows about; and this gives the mystery of consciousness its peculiar flavor. We could explain everything else using just these categories; how can it not work here as well!

But in turning our attention to the subjective pole, we are confronting precisely that part of reality which was excluded from consideration in order to create the scientific paradigm. It has its own categories, to which we give inadequate names like qualia, intentionality, and subjectivity, which have been studied in scientifically shunned disciplines like "transcendental phenomenology" and "existential phenomenology"; and a real understanding of consciousness will not be obtained using just the scientifically familiar categories. We need an ontology which combines the familiar and the unfamiliar categories.

So if I am hard to understand, remember that I am not just stating an idiosyncratic hypothesis about the physical locus of consciousness, I am trying to hint at how that physical locus would be described in an ontology yet to come, in which the subjective ontology of qualia and the self is the primary way that we talk about it, and in which the physical description in terms of causal role is just a black-box abstraction away from this.

The usual materialist approach is the inverse: physics as we know it and conceptualize it now is fundamental, and psychology is an abstracted description of brain physics and brain computation. But the concepts of physics were already obtained by looking away from part of reality, in order to focus on another part; we aren't going to get the excluded part back by abstracting even further, from physics to computation.

Hopefully I have addressed most of your questions now, albeit indirectly.

Replies from: hairyfigment, None, Richard_Kennaway
comment by hairyfigment · 2012-06-19T07:05:37.232Z · LW(p) · GW(p)

People generally notice at some point that the "color feelings" don't exist on the physical side.

You're begging the question. I think you mean it doesn't seem obvious that a functional process is a feeling of color. You object to the fact that we don't recognize ourselves with certainty in this description. And yet you know that functionalism doesn't predict certain recognition. You know that it would seem, if not directly self-contradictory per Gödel and Löb, at least rather surprising for a mind in a functionalist world to find functionalism intuitively obvious when viewed from this angle.

But we don't have to speculate about the limits of self-consciousness in humans. We know for a fact that a lot of 'unconscious' processing takes place during perception. And orthonormal provides a credible account of how that could produce thoughts like yours.

I would actually say that if you think a functionally-human version of "Martha" would not have consciousness, your intuition is broken. So now we have an impasse between dueling intuitions. I suppose you could try to argue that one intuition seems more reliable than the other. Or we could just admit that they aren't reliable.

comment by [deleted] · 2012-06-19T10:36:20.226Z · LW(p) · GW(p)

There are no fundamental "feelings." The map of reality exists inside a brain, which is a part of reality. Your modal logic and monad tensor algebra are unnecessary and meaningless. Everything you say has simpler explanations. You're begging the question, and you show clear signs of self-deception.

The universe is fundamentally simple, only in our map-of-the-universe do we pretend that things are different in order to compress the information.

You are misusing words. Like, basic errors.

And I am not going to take apart your wall-of-text philosophy. Come back when you have equations and predictions. Until then I am a material reductionist.

Halt, melt, catch fire. Now. Unless you Aumann up, this conversation is over.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-06-19T11:38:51.268Z · LW(p) · GW(p)

Unless you Aumann up

Aumann agreement is a cooperative process. Flying off the handle in the face of persistent disagreement does not look like part of such a process.

this conversation is over.

For you and Mitchell Porter, that is probably the best achievable outcome.

comment by Richard_Kennaway · 2012-06-19T11:53:26.140Z · LW(p) · GW(p)

Or, you say that subjective conscious experience is a terrible mystery, but one day we'll solve it somehow. (On this site, I nominate orthonormal as a representative of the first option, and Richard Kennaway of the second option.)

That accurately characterises my view. I'd just like to clarify it by saying that by "somehow, one day" I'm not pushing it off to Far-Far-Land (the rationalist version of Never-Never-Land). For all I know, "one day" could be today, and "we" could be you. I think it fairly unlikely, but that's just an expression of my ignorance, not my evidence. On the other hand, it could be as far off as electron microscopes from the ancient Greeks.

comment by Bugmaster · 2012-06-18T21:16:40.690Z · LW(p) · GW(p)

I think the confusion here stems from the fact that the word "color" has two different meanings.

When physicists talk about "color", what they mean is, "a specific wavelength of light". Let's call this "color-a".

When biologists or sociologists (or graphic artists) talk about "color", what they mean is, "a series of biochemical reactions in the brain which is usually the result of certain wavelengths of light hitting the retina". Let's call this "color-b".

Both "color-a" and "color-b" are physical phenomena, but they are distinct. As it happens, "color-b" is often caused by "color-a", but that isn't always the case. And we can often map "color-b" back onto a single "color-a", but that isn't always the case either; for example, the "color-b" we know as "brown" depends on local contrast, and thus does not have a single "color-a" cause.

This confusion in terms makes philosophical discussions confusing, but that's just an artifact of the English language. The concepts themselves are relatively simple, IMO.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-06-19T06:13:56.858Z · LW(p) · GW(p)

Using the distinction I introduce here, both your color-a and your color-b are on the "physics side", but there absolutely has to be color on the "feeling side" as well; that's the original meaning of color and the one that we know about directly.

Now, in real life I have a deadline to meet, and further communications will be delayed for a few days, if I'm wise...

Replies from: Bugmaster
comment by Bugmaster · 2012-06-19T07:42:33.576Z · LW(p) · GW(p)

I think you may be somewhat confused about Eliezer's terminology. You say:

You may see the unacknowledged dualism to which I refer, in the phrase "how an algorithm feels from inside". This implies that the facts about a sentient computer or sentient brain consist of (1) all the physical facts (locations of particles, or whatever the ultimate physical properties are) (2) "how it feels" to be the entity.

But the original article does not propose any kind of a dualism. Instead (IMO), it attempts to expose certain mental biases inherent to all humans, which are caused by the specific ways in which our neural hardware is configured: "Because we don't instinctively see our intuitions as "intuitions", we just see them as the world".

You say that...

People generally notice at some point that the "color feelings" don't exist on the physical side.

But people "generally notice" a lot of things, including the existence of gods and demons, and the shape of the Earth, which is flat. Just because people notice something, doesn't mean it's there (but it doesn't mean it's not there, either). You go on to say that materialists are...

...finding consciousness an unfathomable mystery which always eludes analysis...

But this just isn't true. We know a lot (though not everything) about how our consciousness operates; in fact, we can even observe some of it happening in real time under fMRI scans. Sure, some philosophers might wax poetic about the grand mystery of consciousness, but they are the same kinds of people who waxed poetic about the grand mystery of the heavens before Newtonian Mechanics was discovered.

Thus, I'm not convinced that...

...there absolutely has to be color on the "feeling side" as well...

...assuming of course that by "feeling side" you mean something distinct from brain-states. I could be wrong, of course; but since you are making the positive proposition about the existence of qualia, the burden of proof is on you.

comment by thomblake · 2012-06-18T21:00:07.327Z · LW(p) · GW(p)

I'll add it to my reading list.

Please don't, unless you would instead be watching reality TV or something. It's a complete waste of time. Heidegger speaks nonsense. He even makes up words and doesn't define them, so that he can speak more blatant nonsense.

Replies from: None
comment by [deleted] · 2012-06-18T21:02:14.103Z · LW(p) · GW(p)

Thanks. I missed an update on the recommender's credulity.

Replies from: Bugmaster
comment by Bugmaster · 2012-06-18T21:42:17.976Z · LW(p) · GW(p)

Well, to be entirely fair, the recommender did warn us that we would most likely hate the book, since it would require us to discard all of our cherished assumptions. Of course, there could be other reasons for hating it, as well...

I am kind of curious to take a look at it, to be honest; maybe I'll find a preview somewhere, when I have more time.

Replies from: thomblake
comment by thomblake · 2012-06-19T14:16:31.375Z · LW(p) · GW(p)

If you do read it, don't worry about getting it in the original German. I have it on good authority that German philosophy students are often given English translations of Heidegger because they're more readable.

Replies from: Richard_Kennaway, Oligopsony
comment by Richard_Kennaway · 2012-06-19T14:30:14.814Z · LW(p) · GW(p)

You might also try Heidegger: A Very Short Introduction. I have the book, although I don't think I ever read it; but it is short, deals mainly with the ideas (whatever they are) of "Being and Time", and the reviews on Amazon are favourable.

comment by Oligopsony · 2012-06-19T14:44:15.453Z · LW(p) · GW(p)

Having attempted Heidegger in English, I can only shudder at what the German versions are like.

comment by Bugmaster · 2012-06-18T21:03:43.156Z · LW(p) · GW(p)

What exactly makes both me and EY and presumably many others think sentience is a thing and distinguish "sentient" and "non-sentient"?

Wait, is "sentient" actually a thing? I always thought that it was just a shorthand we use for describing a wide gamut of phenomena. Humans are quite sentient, chimps less so, dogs even less so, our current AIs even less sentient than that, and rocks aren't sentient at all. Am I wrong about this?

Replies from: None
comment by [deleted] · 2012-06-18T21:06:28.796Z · LW(p) · GW(p)

That is what I try to discern: Is "sentient" a computational property or reducible to "why does my brain make me think it."

I agree with your statement, but I fail to see how to distinguish a "sentient" super-intelligence from a "non-sentient" one.

In general I am confused.

Replies from: Bugmaster, TheOtherDave
comment by Bugmaster · 2012-06-18T21:23:10.619Z · LW(p) · GW(p)

Is "sentient" a computational property or reducible to "why does my brain make me think it."

I'm not entirely sure what "why does my brain make me think it" means, but I've just noticed that I incorrectly used the word "sentient" in its science-fictional sense; I should've said something like "sapient", instead. The word sentient is often incorrectly used (f.ex. by me) to mean "capable of rational thought and communication", whereas the more correct definition is "capable of having subjective experiences".

As luck would have it, my previous comment applies to both meanings of the word, but still, they are distinct (though probably related). I apologize for the confusion.

comment by TheOtherDave · 2012-06-18T22:24:29.173Z · LW(p) · GW(p)

I fail to see how to distinguish a "sentient" super-intelligence from a "non-sentient" one.

Well, you could ask it whether it has subjective experience and trust its self-report. That's basically the same strategy we use for other intelligences, after all.

Replies from: None
comment by [deleted] · 2012-06-18T23:42:36.290Z · LW(p) · GW(p)

And we return to the black box of subjective experience.

Replies from: Bugmaster
comment by Bugmaster · 2012-06-18T23:44:31.039Z · LW(p) · GW(p)

What do you mean by "black box"? If the AI (or alien or uplifted dolphin or whatever) tells me that it has subjective experiences, why shouldn't I take it at its word?

Replies from: None
comment by [deleted] · 2012-06-19T00:36:21.759Z · LW(p) · GW(p)

Oh, I am not denying that they exist, just saying I don't know a solid theory of subjective experience. I think there was something about Bayesian {Predictive world model, planning engine, utility function, magic AI algorithm} AIs would not have philosophy.

Replies from: Bugmaster
comment by Bugmaster · 2012-06-19T00:42:37.303Z · LW(p) · GW(p)

I think there was something about Bayesian {Predictive world model, planning engine, utility function, magic AI algorithm} AIs would not have philosophy.

Sorry, I have trouble parsing this sentence. But in general, I don't think we need a detailed theory of subjective experiences (assuming that it even makes sense to conceive of such a theory) in order to determine whether some entity is sentient -- as long as that entity is also sapient, and capable of communication. If that's the case, then we can just ask it, and trust its word. If that's not the case, then I agree, we have a problem.

comment by [deleted] · 2021-06-28T17:17:01.272Z · LW(p) · GW(p)

Not that I disagree with the conclusion, but these are good arguments against democracy, humanism and especially the idea of a natural law, not against creating a sentient AI.