Google could build a conscious AI in three months

post by derek shiller (derek-shiller) · 2022-10-01T13:24:10.752Z · LW · GW · 18 comments

Comments sorted by top scores.

comment by Dagon · 2022-10-01T16:25:08.942Z · LW(p) · GW(p)

Does this summarize as "nobody has given a credible operational definition of consciousness, so whatever we do is unlikely to be accepted as conscious for quite some time"?

We're not ready, legally or socially, for potentially sentient artificial creatures to be created and destroyed at will for commercial purposes.

Can you specify what "ready" means?  We've been struggling with natural consciousnesses, both human and animal, for a long long time, and it's not obvious to me that artificial consciousness can avoid any of that pain.

Replies from: derek-shiller
comment by derek shiller (derek-shiller) · 2022-10-01T17:45:41.158Z · LW(p) · GW(p)

We've been struggling with natural consciousnesses, both human and animal, for a long long time, and it's not obvious to me that artificial consciousness can avoid any of that pain.

You're right, but there are a couple of important differences:

  • There is widespread agreement on the status of many animals. People believe most tetrapods are conscious. The terrible stuff we do to them is done in spite of this.
  • We have a special opportunity at the start of our interactions with AI systems to decide how we're going to relate to them. It is better to get things right off the bat than to try to catch up (and shift public opinion) decades later.
  • We have a lot more potential control over artificial systems than we do over natural creatures. It is possible that very simple, low-cost changes could make a huge difference to their welfare (or whether they have any).
Replies from: Dagon
comment by Dagon · 2022-10-01T23:09:50.348Z · LW(p) · GW(p)

There is widespread agreement on the status of many animals.

Only to the extent that “conscious” doesn’t carry any weight or expectation of good treatment. There is very little agreement on what an animal’s level of consciousness means in terms of life or happiness priority compared to any human.

We have a special opportunity at the start of our interactions with AI systems to decide how we're going to relate to them

I don’t follow. How is it easier (or more special as an opportunity) to decide how to relate to an AI system than to a chicken or a distant human?

We have a lot more potential control over artificial systems than we do over natural creatures

Really? Given the amount of change we’ve caused in natural creatures, the amount of effort we spend in controlling/guiding fellow humans, and the difficulty in defining and measuring this aspect of ANY creature, I can’t agree (though I can’t strongly disagree either, because I don’t really understand what this means).

Replies from: derek-shiller
comment by derek shiller (derek-shiller) · 2022-10-02T01:31:44.032Z · LW(p) · GW(p)

I don’t follow. How is it easier (or more special as an opportunity) to decide how to relate to an AI system than to a chicken or a distant human?

I think that our treatment of animals is a historical problem. If there were no animals, if everyone was accustomed to eating vegetarian meals, and then you introduced chickens into the world, I believe people wouldn't be inclined to stuff them into factory farms and eat their flesh. People do care about animals where they are not complicit in harming them (whaling, dog fighting), but it is hard for most people to leave the moral herd and it is hard to break with tradition. The advantage of thinking about digital minds is that traditions haven't been established yet and the moral herd doesn't know what to think. There is no precedent or complicity in ill treatment. That is why it is easier for us to decide how to relate to them.

Really? Given the amount of change we’ve caused in natural creatures, the amount of effort we spend in controlling/guiding fellow humans, and the difficulty in defining and measuring this aspect of ANY creature, I can’t agree.

In order to make a natural creature happy and healthy, you need to work with its basic evolution-produced physiology and psychology. You've got to feed it, educate it, socialize it, accommodate its arbitrary needs and neurotic tendencies. We would likely be able to design the psychology and physiology of artificial systems to our specifications. That is what I mean by having a lot more potential control.

Replies from: Dagon
comment by Dagon · 2022-10-02T15:04:34.689Z · LW(p) · GW(p)

Ah, I think we have a fundamental disagreement about how the majority of humans think about animals and each other.  If the world were vegetarian, and someone created chickens, I think it would NOT lead to many chickens leading happy chicken lives.  It would either be an amusing one-time lab experiment (followed by death and extinction) or the discovery that they're darned tasty and a very concentrated, portable source of nutrition, which leads to creating them for the primary purpose of food.

I'm not sure wireheading an AI (so it's happy no matter what) is any more (or less) acceptable than doing so to chickens (by breeding them for smaller brains and larger breasts).

comment by Garrett Baker (D0TheMath) · 2022-10-01T20:38:27.736Z · LW(p) · GW(p)

Of note: we have already written computer programs which current theories say are conscious.

https://www.aaai.org/Papers/Symposia/Fall/2007/FS-07-01/FS07-01-008.pdf

Replies from: bvbvbvbvbvbvbvbvbvbvbv
comment by bvbvbvbvbvbvbvbvbvbvbv · 2022-10-05T09:33:38.556Z · LW(p) · GW(p)

Thanks a lot! Would you mind pointing me toward a good way to stay up to date on this niche? Also, do you have any contextual information to share about the paper you linked? Was it controversial, for example?

Thanks!

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2022-10-05T17:20:26.573Z · LW(p) · GW(p)

It wasn’t controversial. People have theories about consciousness and neural architectures, and they use simulations to verify that their theories produce predictions consistent with the real world.

For more, you can probably follow the authors of this paper on twitter or something, or look through their backlogs, and do the same for Stanislas Dehaene.

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2022-10-05T17:20:52.560Z · LW(p) · GW(p)

It’s also not the first time someone has made such a program.

comment by Shiroe · 2022-10-01T16:51:56.507Z · LW(p) · GW(p)

Unless the conscious algorithm in question would experience states that are not valence-neutral, I see no issue with creating or destroying instances of it. The same applies to any other type of consciousness. It seems implausible to me that any of our known AI architectures could instantiate such non-neutral valences, even if they do seem plausibly able to instantiate other kinds of experiences (e.g. geometric impressions).

Replies from: derek-shiller
comment by derek shiller (derek-shiller) · 2022-10-01T17:50:20.968Z · LW(p) · GW(p)

I'm not particularly worried that we may harm AIs that do not have valenced states, at least in the near term. The issue is more over precedent and expectations going forward. I would worry about a future in which we create and destroy conscious systems willy-nilly because of how it might affect our understanding of our relationship to them, and ultimately to how we act toward AIs that do have morally relevant states. These worries are nebulous, and I very well might be wrong to be so concerned, but it feels risky to rush into things.

comment by green_leaf · 2022-10-01T17:49:43.769Z · LW(p) · GW(p)

One philosophical insight suggesting that the inside of the system doesn't matter to conscious states is this: we can describe our conscious states to an outside observer, so what-we-call-consciousness has no parts unconnected to the output of the entire system.

Believing that an artificial consciousness has to conform to the computational architecture of the human brain (on a specific level of abstraction) would be unjustified anthropocentrism, no different from believing that a system can't be conscious without neural tissue.

Replies from: Shiroe
comment by Shiroe · 2022-10-01T20:56:07.484Z · LW(p) · GW(p)

What about a large look-up table that mapped the conversation so far to what to say next, and was able to pass the Turing test? This program would have all the external signs of consciousness, but would you really describe it as a conscious being in the same way that you are?
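
For concreteness, here's a minimal sketch of the kind of program I have in mind (the entries and names are made up for illustration; a real table would need an entry for every possible conversation prefix):

```python
# Hypothetical illustration of the thought experiment: a "chatbot" that is nothing
# but a lookup from the conversation so far to the next reply. All apparent
# intelligence lives in the pre-filled table; the code does no reasoning at all.

from typing import Dict, Tuple

# Keys: the full conversation history so far (as a tuple of utterances).
# Values: the canned reply to produce next.
LOOKUP_TABLE: Dict[Tuple[str, ...], str] = {
    ("Hello!",): "Hi! How are you today?",
    ("Hello!", "Hi! How are you today?", "Fine. Do you ever get headaches?"):
        "Hard riddles certainly feel like headaches from the inside.",
}

def next_reply(history: Tuple[str, ...]) -> str:
    """Return the next utterance by pure table lookup, with no computation beyond matching."""
    return LOOKUP_TABLE.get(history, "I'm not sure what to say to that.")

if __name__ == "__main__":
    print(next_reply(("Hello!",)))  # -> "Hi! How are you today?"
```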

Replies from: green_leaf
comment by green_leaf · 2022-10-03T06:14:32.978Z · LW(p) · GW(p)

That wouldn't fit into our universe (by about 2 metaorders of magnitude). But yes, that simple software would indeed have an equivalent consciousness, with the complexity almost completely moved from the algorithm to the data. There is no other option.
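
To make the scale concrete (rough illustrative numbers only): suppose each turn is one of roughly $10^4$ possible sentences and a conversation runs to roughly $10^3$ turns. The table then needs on the order of

$$(10^{4})^{10^{3}} = 10^{4000}$$

entries, while the observable universe contains only about $10^{80}$ atoms. The exponents themselves (4000 vs. 80) differ by nearly two orders of magnitude, which is the sense in which the shortfall is measured in "metaorders" of magnitude.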

Replies from: Shiroe
comment by Shiroe · 2022-10-03T06:50:41.692Z · LW(p) · GW(p)

What would it be conscious of, though? Could it feel a headache when you gave it a difficult riddle? I don't think a look-up table can be conscious of anything except for matching bytes to bytes. Perhaps that corresponds to our experience of recognizing that two geometric forms are identical.

Replies from: green_leaf
comment by green_leaf · 2022-10-05T17:56:44.182Z · LW(p) · GW(p)

We're not conscious of internal computational processes at that level of abstraction (like matching bits). We're conscious of outside inputs, and of the transformations of the state-machine-which-is-us from one state to the next.

Recognizing that two geometric forms are identical would correspond to giving whatever output we'd give in reaction to that.

comment by GunZoR (michael-ellingsworth) · 2022-10-01T21:18:43.685Z · LW(p) · GW(p)

If it's impossible in principle to know whether any AI really has qualia, then what's wrong with simply using the Turing test as an ultimate ethical safeguard? We don't know how consciousness works, and possibly we won't ever (e.g., mysterianism might obtain). But certainly we will soon create an AI that passes the Turing test. So seemingly we have good ethical reasons just to assume that any agent that passes the Turing test is sentient — this blanket assumption, even if often unwarranted from the aspect of eternity, will check our egos and thereby help prevent ethical catastrophe. And I don't see that any more sophisticated ethical reasoning around AI sentience is or ever will be needed. Then the resolution of what's really happening inside the AI will simply continually increase over time; and, without worry, we'll be able to look back and perhaps see where we were right and wrong. Meanwhile, we can focus less on ethics and more on alignment.

Replies from: derek-shiller
comment by derek shiller (derek-shiller) · 2022-10-02T01:16:13.781Z · LW(p) · GW(p)

So seemingly we have good ethical reasons just to assume that any agent that passes the Turing test is sentient

I'm not sure why we should think that the Turing test provides any evidence regarding consciousness. Dogs can't pass the test, but that is little reason to think that they're not conscious. Large language models might be able to pass the test before long, but it looks like they're doing something very different inside, and so the fact that they are able to hold conversations is little reason to think they're anything like us. There is a danger with being too conservative. Sure, assuming sentience may avoid causing unnecessary harms, but if we mistakenly believe some systems are sentient when they are not, we may waste time or resources for the sake of their (non-existent) welfare.

Your suggestion may simply be that we have nothing better to go on, and we've got to draw the line somewhere. If there is no right place to draw the line, then we might as well pick something. But I think there are better and worse places to draw the line. And I don't think our epistemic situation is quite so bad. We may not ever be completely sure which precise theory is right, but we can get a sense of which theories are contenders by continuing to explore the human brain and develop existing theories, and we can adopt policies that respect the diversity of opinion.

Meanwhile, we can focus less on ethics and more on alignment.

This strikes me as somewhat odd, as alignment and ethics are clearly related. On the one hand, there is the technical question of how to align an AI to specific values. But there is also the important question of which values to align it to. How we think about digital consciousness may come to be extremely important to that.