Wireless-heading, value drift, and so on

post by h-H · 2011-04-16T06:45:54.847Z · LW · GW · Legacy · 7 comments

The typical image of a wirehead is a guy with his brain connected by a wire to a computer, living in a continuous state of pleasure, sort of like being drugged up for life.

What I mean by 'wireless heading' (not such an elegant term, but anyway) is the idea of little to no value drift. Clippy is usually brought up as a most dangerous AI, one we should avoid creating at all costs. Yet what is the point of creating copies of us and tiling the universe with them? How is that different from what Clippy does?

By 'us' I mean beings who share our intuitive understanding, or who can agree with us on things like morality, joy, not being bored, etc.

Shouldn't we focus on engineered/controlled value drift rather than on preventing it entirely? Is it possible to program that into an AI? Somehow I don't think so. It seems to me that the whole premise of a single benevolent AI depends to a large extent on the similarity of basic human drives: supposedly we are so close to one another that preventing value drift is not a big deal.

But once we get really close to the singularity, all sorts of technologies will cause humanity to 'fracture' into so many different groups that inevitably some will have what we might call 'alien minds': minds so different from those of most baseline humans as they are now that there would be little hope of convincing them to 'rejoin the fold' and not create an AI of their own. For all we know, they might even have an easier time creating an AI that is friendly to them than baseline humans would. Considering this a black swan event, or at least one whose timing is impossible to predict, what should we do?

Discuss.

7 comments

comment by Normal_Anomaly · 2011-04-16T15:10:34.130Z · LW(p) · GW(p)

Clippy is usually brought up as a most dangerous AI, one we should avoid creating at all costs. Yet what is the point of creating copies of us and tiling the universe with them? How is that different from what Clippy does?

That's an easy one: I value humans; I don't value paperclips.

Shouldn't we focus on engineered/controlled value drift rather than on preventing it entirely?

According to EY's CEV document, CEV does this. It lets/makes our values drift in the way we would want them to drift.

Replies from: h-H
comment by h-H · 2011-04-16T18:45:18.386Z · LW(p) · GW(p)

Very smart people have issues with CEV; for example: http://lesswrong.com/lw/2b7/hacking_the_cev_for_fun_and_profit/

And as far as I remember, CEV was sort of abandoned by the community a while ago.

And yes, you value humans, but others in the not-so-distant future might not, given the possibility of body/brain modification. Anyway, the gist of my argument is that CEV doesn't seem to work if there isn't going to be much coherence among all of humanity's extrapolated volitions (a point that has already been made clear in previous threads by many people). What I'm trying to add is the overwhelming possibility of there being 'alien minds' among us before an FAI could be built.

I also raised the question: if body modification is widely available, is it OK to prevent people from acquiring an 'alien' set of morals, one that might later hinder CEV-like proposals? And how can we tell whether a set of morals is alien in the first place?

comment by prase · 2011-04-17T17:19:56.851Z · LW(p) · GW(p)

Downvoted for formatting. Capitalisation of the first letters in a sentence really increases readability.

comment by nazgulnarsil · 2011-04-17T03:11:16.055Z · LW(p) · GW(p)

I recommend Diaspora by Greg Egan, OP.

comment by fubarobfusco · 2011-04-16T07:11:58.110Z · LW(p) · GW(p)

Yet what is the point of creating copies of us and tiling the universe with them? How is that different from what Clippy does?

Who says that the point is to tile the universe with humans, or even with environments merely capable of supporting humans? Even evolution, the god of mindless cruelty and death, doesn't just print out the same thing generation after generation. Why would you expect a Friendly system to do worse than evolution?

Replies from: randallsquared
comment by randallsquared · 2011-04-16T16:33:07.177Z · LW(p) · GW(p)

Why would you expect a Friendly system to do worse than evolution?

I believe that h-H was under the impression that human-centric Friendliness would call that "better" than evolution.

comment by mwengler · 2011-04-19T13:44:56.709Z · LW(p) · GW(p)

I think these are good questions. I don't trust "morality." The morality that we do have, we have because it "works." Make your ideas pay rent; make your morality pay rent. We cooperate by the millions and billions because of language and morality and the fact that we have very largely domesticated ourselves.

I imagine the Kurzweilian singularity is closer to what will actually happen than free-standing AIs engineered from the ground up with CEV or anything else. It's a lot easier to "own" an intelligence when it is physically part of you and dependent on you. It is a lot easier to design enhancements to an already complex and adaptable intelligence. Perhaps more important than calling it easier: it is a lot less expensive.

Humanity has already fractured into groups, and the win for the species has been that, to a large extent, we still function as a single intelligence. Enhanced memory and communication should reinforce that win, at least in the medium term. Perhaps there will come a day when the Galactic Center band of man-machines has been separated from the Spiral Arm band for long enough that the bonds between them have atrophied; perhaps not. We have to leave SOME problems for our enhanced offspring to solve, though, and we have plenty of reason to believe they will be better equipped to solve them than we are.