The UFAI among us

post by PhilGoetz · 2011-02-08T23:29:38.088Z · LW · GW · Legacy · 86 comments

Completely artificial intelligence is hard.  But we've already got humans, and they're pretty smart - at least smart enough to serve some useful functions.  So I was thinking about designs that would use humans as components - like Amazon's Mechanical Turk, but less homogeneous.  Architectures that would distribute parts of tasks among different people.

Would you be less afraid of an AI like that?  Would it be any less likely to develop its own values, and goals that diverged widely from the goals of its constituent people?

Because you probably already are part of such an AI.  We call them corporations.

Corporations today are not very good AI architectures - they're good at passing information down a hierarchy, but poor at passing it up, and even worse at adding up small correlations in the evaluations of their agents.  In that way they resemble AI from the 1970s.  But they may provide insight into the behavior of AIs.  The values of their human components can't be changed arbitrarily, or even aligned with the values of the company, which gives them a large set of problems that AIs may not have.  But despite being very different from humans in this important way, they end up acting similarly to us.
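To make the "adding up small correlations" claim concrete, here is a minimal sketch - assuming independent, noisy evaluations and entirely invented numbers, not anything from the post - of how many weak signals, pooled in log-odds space, become a confident estimate; this is roughly the aggregation step that corporate hierarchies are being accused of failing to perform:

```python
import math

# Toy sketch (invented numbers): many weak, roughly independent signals
# pooled in log-odds space add up to a confident estimate - the kind of
# aggregation corporate hierarchies are said to be bad at.

def pool_log_odds(prior, likelihood_ratios):
    """Combine a prior probability with independent likelihood ratios."""
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

# Fifty employees each notice a weak hint (likelihood ratio 1.1) that a
# project is in trouble; no single report is convincing on its own.
posterior = pool_log_odds(prior=0.05, likelihood_ratios=[1.1] * 50)
print(round(posterior, 2))  # ~0.86 - collectively, a strong signal
```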

Corporations develop values similar to human values.  They value loyalty, alliances, status, resources, independence, and power.  They compete with other corporations, and face the same problems people do in establishing trust, making and breaking alliances, weighing the present against the future, and game-theoretic strategies.  They even went through stages of social development similar to those of people, starting out as cutthroat competitors, and developing different social structures for cooperation (oligarchy/guild, feudalism/keiretsu, voters/stockholders, criminal law/contract law).  This despite having different physicality and different needs.

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident.  They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

As corporations are larger than us, with more intellectual capacity than a person, and more complex laws governing their behavior, it should follow that the ethics developed to govern corporations are more complex than the ethics that govern human interactions, and a good guide for the initial trajectory of values that (other) AIs will have.  But it should also follow that these ethics are too complex for us to perceive.

86 comments

Comments sorted by top scores.

comment by TheOtherDave · 2011-02-09T00:29:03.977Z · LW(p) · GW(p)

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

Another possibility is that individual humans occasionally influence corporations' behavior in ways that cause that behavior to occasionally reflect human values.

Replies from: PhilGoetz, Lightwave
comment by PhilGoetz · 2011-02-09T00:57:26.585Z · LW(p) · GW(p)

If that were the case, we would see specific humans influence corporations' behavior in ways that would cause the corporations to implement those humans' goals and values, without preservation of deictic references. For instance, Joe works for Apple Computer. Joe thinks that giving money to Amnesty International is more ethical than giving money to Apple Computer. And Joe values giving money to Joe. We should therefore see corporations give lots of their money to charity, and to their employees. That would be Joe making Apple implement Joe's values directly. Joe's values say "I want me to have more money". Transferring that value extensionally to Apple would replace "me" with "Joe".

Instead, we see corporations act as if they had acquired values from their employees, but with preservation of deictic references. That means, every place in Joe's value where it says "me", Apple's acquired value says "me". So instead of "make money for Joe", it says "make money for Apple". That means the process is not consciously directed by Joe; Joe would preserve the extensional reference to "Joe", so as to satisfy his values and goals.
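A minimal sketch of the deictic/extensional distinction, using hypothetical names and a string-template representation that is not from the original comment:

```python
# Hypothetical illustration: a "value" is a goal template containing the
# indexical slot "me", and the two transfer modes differ in when that
# slot gets resolved.

def adopt_deictically(value_template, new_agent):
    # The indexical is preserved: every "me" now points at the adopter.
    return value_template.replace("me", new_agent)

def adopt_extensionally(value_template, original_agent):
    # The indexical is resolved first, so the goal still serves its originator.
    return value_template.replace("me", original_agent)

joes_value = "make money for me"

# What we actually observe (per the comment above):
print(adopt_deictically(joes_value, "Apple"))    # make money for Apple

# What we would expect if Joe consciously steered the transfer:
print(adopt_extensionally(joes_value, "Joe"))    # make money for Joe
```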

Replies from: CronoDAS, roystgnr, TheOtherDave
comment by CronoDAS · 2011-02-09T03:37:48.776Z · LW(p) · GW(p)

We should therefore see corporations give lots of their money to charity, and to their employees. That would be Joe making Apple implement Joe's values directly. Joe's values say "I want me to have more money".

Some people point to executive compensation at U.S. firms as evidence that many corporations have been "subverted" in exactly that way.

comment by roystgnr · 2011-02-09T19:37:26.340Z · LW(p) · GW(p)

It says "make money for Apple", which is a roundabout way of saying "make money for Apple's shareholders", who are the humans that most directly make up "Apple". Apple's employees are like Apple's customers - they have market power that can strongly influence Apple's behavior, but they don't directly affect Apple's goals. If Joe wants a corporation to give more money to charity, but the corporation incorporated with the primary goal of making a profit, that's not the decision of an employee (or even of a director; see "duty of loyalty"); that's the decision of the owners.

There's definitely a massive inertia in such decisions, but for good reason. If you bought a chunk of Apple to help pay for your retirement, you've got an ethically solid interest in not wanting Apple management to change its mind after the fact about where its profits should go.

If you want to look for places where corporate goals (or group goals in government or other contexts) really do differ from the goals of the humans who created and/or nominally control them, I'd suggest starting with the "Iron Law of Bureaucracy".

comment by TheOtherDave · 2011-02-09T01:10:05.352Z · LW(p) · GW(p)

Agreed that if Apple is making a lot of money, and none of the humans who nominally influence Apple's decisions are making that money, that is evidence that Apple has somehow adopted the "make money" value independent of those humans' values.

Agreed that if Apple is not donating money to charity, and the humans who nominally influence Apple's decisions value donating money to charity, that is evidence that Apple has failed to adopt the "donate to charity" value from those humans.

comment by Lightwave · 2011-02-09T10:26:23.995Z · LW(p) · GW(p)

Also, corporations are restricted by governments, which implement other human-based values (different from pure profit), and they internalize these values (e.g. social/environmental responsibility) for (at the least) signaling purposes.

comment by benelliott · 2011-02-09T08:40:29.881Z · LW(p) · GW(p)

How similar are their values actually?

One obvious difference seems to be their position on the exploration/exploitation scale: most corporations do not get bored (the rare cases where they do seem to get bored can probably be explained by an individual executive getting bored, or by customers getting bored and the corporation managing to adapt).

Corporations also do not seem to have very much compassion for other corporations; while they do sometimes co-operate, I have yet to see an example of one corporation giving money to another without anticipating some sort of gain from this action (any altruism they display towards humans is more likely caused by the individuals running things or done for signalling purposes; if they were really altruistic you would expect it to be towards each other).

Do they really value independence and individuality? If so, then why do they sometimes merge? I suppose you could say that the difference between them and humans is that they can merge while we can't, but I'm not convinced we would do so even if we could.

There may be superficial similarities between their values and ours, but it seems to me like we're quite different where it matters most. A hypothetical future which lacks creativity, altruism or individuality can be safely considered to have lost almost all of its potential value.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-14T04:14:53.164Z · LW(p) · GW(p)

Altruism and merging: Two very good points!

Altruism can be produced via evolution by kin selection or group selection. I don't think kin selection can work for corporations, for several reasons, including massive lateral transfer of ideas between corporations (so that helping a kin does not give a great boost to your genes), deliberate acquisition of memes predominating over inheritance, and the fact that corporations can grow instead of reproducing, and so are unlikely to be in a position where they have no growth potential themselves but can help a kin instead.

Can group selection apply to corporations?

What are the right units of selection / inheritance?

Replies from: timtyler
comment by timtyler · 2012-05-15T10:47:52.259Z · LW(p) · GW(p)

Altruism can be produced via evolution by kin selection or group selection. I don't think kin selection can work for corporations, for several reasons, including massive lateral transfer of ideas between corporations (so that helping a kin does not give a great boost to your genes), deliberate acquisition of memes predominating over inheritance, and the fact that corporations can grow instead of reproducing, and so are unlikely to be in a position where they have no growth potential themselves but can help a kin instead.

You don't think there's corporate parental care?!? IMO, corporate parental care is completely obvious. It is a simple instance of cultural kin selection. When a new corporation is spun off from an old one, there are often economic and resource lifelines - akin to the runners strawberry plants use to feed their offspring.

Lateral gene transfer doesn't much affect this. Growth competes with reproduction in many plants - and the line between the two can get blurred. It doesn't preclude parental care - as the strawberry runners show.

comment by NancyLebovitz · 2011-02-08T23:37:00.735Z · LW(p) · GW(p)

Should other large human organizations like governments and some religions also count as UFAIs?

Replies from: Eliezer_Yudkowsky, CronoDAS, Alexandros
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-02-09T18:43:46.915Z · LW(p) · GW(p)

Yes, I find it quite amusing that some people of a certain political bent refer to "corporations" as superintelligences, UFAIs, etcetera, and thus insist on diverting marginal efforts that could have been directed against a vastly underaddressed global catastrophic risk to yet more tugging on the same old rope that millions of other people are pulling on, based on their attempt to reinterpret the category-word; and yet oddly enough they don't think to extend the same anthropomorphism of demonic agency to large organizations that they're less interested in devalorizing, like governments and religions.

Replies from: None, NancyLebovitz
comment by [deleted] · 2011-02-11T09:01:18.668Z · LW(p) · GW(p)

Maybe those people are prioritising the things that seem to affect their lives? I can certainly see exactly the same argument about government or religion as about corporations, but currently the biggest companies (the Microsofts and Sonys and their like) seem to have more power than even some of the biggest governments.

Replies from: anonym
comment by anonym · 2011-02-13T20:18:21.539Z · LW(p) · GW(p)

There is also the issue of legal personality, which applies to corporations and not to governments or religions.

The corporation actually seems to me a great example of a non-biological, non-software optimization process, and I'm surprised at Eliezer's implicit assertion that there is no significant difference between corporations, governments, and religions with respect to their ability to be unfriendly optimization processes, other than that some people of a certain political bent have a bias to think about corporations differently than other institutions like governments and religions.

comment by NancyLebovitz · 2011-02-10T00:03:46.174Z · LW(p) · GW(p)

I think such folks are likely to trust governments too much. They're more apt to oppose specific religious agendas than to oppose religion as such, and I actually think that's about right most of the time.

comment by CronoDAS · 2011-02-08T23:44:05.694Z · LW(p) · GW(p)

Probably.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-08T23:50:37.895Z · LW(p) · GW(p)

Though I used the term UFAI more for emotional impact than out of belief in its accuracy. We shouldn't assume that every AI not designed using rigorous and deliberate FAI methodology is a UFAI. That's a rhetorical flourish, not a documented fact.

Replies from: Nick_Tarleton, wedrifid, Dorikka, PhilGoetz
comment by Nick_Tarleton · 2011-02-12T08:30:58.675Z · LW(p) · GW(p)

We shouldn't assume that every AI not designed using rigorous and deliberate FAI methodology is a UFAI. That's a rhetorical flourish, not a documented fact.

Neither; it's the conclusion of a logical argument (which is, yes, weaker than a documented fact).

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-14T03:58:53.069Z · LW(p) · GW(p)

Nick, I disagree. You are saying there is a logical argument that concludes such AIs will be unfriendly with 100% probability. That just isn't true, or even close to true.

Furthermore, even if there were an argument using these concepts that concluded something with 100% probability, the concepts of UFAI and FAI are not well-defined enough to draw the conclusion above.

I think you're using the word "assume" here to mean something more like, "We should not build AIs without FAI methodology." That's a very very different statement! That's a conclusion based on using expectation-maximization over all possible outcomes. What I am saying is that we should not assume that, in all possible outcomes, the AI comes out unfriendly.

Replies from: wedrifid
comment by wedrifid · 2011-02-14T04:18:26.923Z · LW(p) · GW(p)

You are saying there is a logical argument that concludes such AIs will be unfriendly with 100% probability.

No, Nick is not saying that.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-20T19:51:15.014Z · LW(p) · GW(p)

Yes, he is. He said there is a logical argument that concludes that we should assume that every AI not designed using rigorous and deliberate FAI methodology is a UFAI. "Assume" means "assign 100% probability". What other meaning did you have in mind?

comment by wedrifid · 2011-02-12T08:34:44.938Z · LW(p) · GW(p)

That's a rhetorical flourish, not a documented fact.

Nothing indicates a rhetorical flourish like the phrase 'rhetorical flourish'.

comment by Dorikka · 2011-02-09T00:30:53.963Z · LW(p) · GW(p)

We shouldn't assume that every AI not designed using rigorous and deliberate FAI methodology is a UFAI.

Why not? It's an assumption which may be slightly overcautious, but I would far rather be slightly overcautious than increase the risk that an AI is going to smiley-tile the universe. Until we have a more precise idea of which AI-not-designed-using-rigorous-and-deliberate-FAI-methodology is not a UFAI, I see no reason to abandon the current hypothesis.

Replies from: David_Gerard, PhilGoetz
comment by David_Gerard · 2011-02-09T09:01:14.802Z · LW(p) · GW(p)

Because it fails to quite match reality. E.g. charitable corporations can behave pathologically (falling prey to the Iron Law of Institutions), but are generally qualitatively less unFriendly than the standard profit-making corporation.

comment by PhilGoetz · 2011-02-09T00:43:43.142Z · LW(p) · GW(p)

If you believe it is overcautious, then you believe it is wrong. If you are worried about smiley-tiling, then you get the right answer by assigning the right value to that outcome. Not by intentionally biasing your decision process.

Replies from: Dorikka
comment by Dorikka · 2011-02-09T00:50:37.825Z · LW(p) · GW(p)

I say 'may be slightly overcautious' contingent on it being wrong -- I'm saying that if it is wrong, it's a sort of wrong which will result in less loss of utility than being wrong in the other direction would.

If you're an agent with infinite computing power, you can investigate all hypotheses further to make sure that you're right. Humans, however, are forced to devote time and effort to researching those things which are likely to yield utility, and I think that the current hypothesis sounds reasonable unless you have evidence that it is wrong.

Replies from: sark
comment by sark · 2011-02-09T18:06:13.469Z · LW(p) · GW(p)

The erring on the side of caution only enters when you have to make a decision. Your pre-action estimate should be clean of this.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-14T04:03:01.377Z · LW(p) · GW(p)

You should not err on the side of caution if you are a Bayesian expectation-maximizer!

But I think what you're getting at, which is the important thing, is that people say "Assume X" when they really mean "My computation of value times probability, summed over all possible outcomes, indicates X is likely, and I'm too lazy to remember the details, or I think you're too stupid to do the computation right; so I'm just going to cache 'assume X' and repeat that from now on". They ruin their analysis because they're lazy, and don't want to do more analysis than they would need to do in order to decide what action to take if they had to make the choice today. Then the lazy analysis done with poor information becomes dogma. As in the example above.
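A toy numeric version of this point - all probabilities and utilities invented for illustration - showing that an expectation-maximizer gets its caution from the utilities, not from rounding a probability up to an "assume":

```python
# Toy illustration (invented numbers): expected-utility maximization
# already recommends caution here without ever assigning probability 1
# to the unfriendly outcome.

p_unfriendly = 0.1                 # well short of "assume it's a UFAI"
u_success = 1_000                  # utility if the un-vetted AI turns out fine
u_catastrophe = -1_000_000         # utility if it does not
u_do_nothing = 0                   # baseline: don't build it yet

ev_build = p_unfriendly * u_catastrophe + (1 - p_unfriendly) * u_success
ev_wait = u_do_nothing

print(ev_build)  # -99100.0
print(ev_wait)   # 0
# The "cautious" choice wins on expected value; no extra bias is needed.
```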

Replies from: wedrifid
comment by wedrifid · 2011-02-14T04:22:36.351Z · LW(p) · GW(p)

Then the lazy analysis done with poor information becomes dogma. As in the example above.

I downvoted this sentence.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-20T19:48:09.848Z · LW(p) · GW(p)

Instead of downvoting a comment for referring to another comment that you disagree with, I think you should downvote the original comment.

Better yet, explain why you downvoted. Explaining what you downvoted is going halfway, so I half-appreciate it.

I can't express strongly enough my dismay that here, on a forum where people are allegedly devoted to rationality, they still strongly believe in making some assumptions without justification.

Replies from: wedrifid
comment by wedrifid · 2011-02-20T23:16:27.884Z · LW(p) · GW(p)

explain why you downvoted

Weasel words used to convey unnecessary insult.

comment by PhilGoetz · 2011-02-14T03:59:18.311Z · LW(p) · GW(p)

Proof that conformist mindless dogma is alive and well at LW...

comment by Alexandros · 2011-02-09T09:09:01.617Z · LW(p) · GW(p)

Funny you should mention that. Just yesterday I added to my list of articles-to-write one titled "Religions as UFAI". In fact, I think the comparison goes much deeper than it does for corporations.

Replies from: timtyler
comment by timtyler · 2012-05-15T11:02:46.280Z · LW(p) · GW(p)

Some corporations may become machine intelligences. Religions - probably not so much.

comment by Unnamed · 2011-02-09T03:23:00.617Z · LW(p) · GW(p)

Unlike programmed AIs, corporations cannot FOOM. This leaves them with limited intelligence and power, heavily constrained by other corporations, government, and consumers.

The corporations that have come the closest to FOOMing are known as monopolies, and they tend to be among the least friendly.

Replies from: RolfAndreassen, PhilGoetz
comment by RolfAndreassen · 2011-02-09T04:18:31.107Z · LW(p) · GW(p)

corporations cannot FOOM.

Is this obvious? True, the timescale is not seconds, hours, or even days. But corporations do change their inner workings, and they have also been known to change the way they change their inner workings. I suggest that if a corporation of today were dropped into the 1950s, and operated on 1950s technology but with modern technique, it would rapidly outmaneuver its downtime competitors; and that the same would be true for any gap of fifty years, back to the invention of the corporation in the Middle Ages.

Replies from: wedrifid
comment by wedrifid · 2011-02-09T07:33:42.253Z · LW(p) · GW(p)

Is this obvious?

I suggest it is - for anything but the most crippled definition of "FOOM".

Replies from: Will_Newsome, NancyLebovitz
comment by Will_Newsome · 2011-02-09T11:52:24.098Z · LW(p) · GW(p)

Right, FOOM by its onomatopoeic nature suggests a fast recursion, not a million-year-long one.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2011-02-09T18:49:13.470Z · LW(p) · GW(p)

I am suggesting that a ten-year recursion time is fast. I don't know where you got your million years; what corporations have been around for a million years?

comment by NancyLebovitz · 2011-02-12T10:19:00.837Z · LW(p) · GW(p)

I'm inclined to agree-- there are pressures in a corporation to slow improvement rather than to accelerate it.

Any organization which could beat that would be extremely impressive but rather hard to imagine.

comment by PhilGoetz · 2011-02-14T04:09:25.337Z · LW(p) · GW(p)

This is true, but not relevant to whether we can use what we know about corporations and their values to infer things about AIs and their values.

Replies from: wedrifid
comment by wedrifid · 2011-02-14T04:15:05.904Z · LW(p) · GW(p)

This is true, but not relevant to whether we can use what we know about corporations and their values to infer things about AIs and their values.

It is relevant. It means you can infer a whole lot less about what capabilities an AI will have, and also about how much effort an AI will likely spend on self-improvement early on. The payoffs and optimal investment strategy for resources are entirely different.

comment by Costanza · 2011-02-09T01:48:35.349Z · LW(p) · GW(p)

The SIAI is a "501(c)(3) nonprofit organization." Such organizations are sometimes called nonprofit corporations. Is SIAI also an unfriendly AI? If not, why not?

P.S. I think corporations exist mostly for the purpose of streamlining governmental functions that could otherwise be structured in law, although with less efficiency. Like taxation, and financial liability, and who should be able to sue and be sued. Corporations, even big hierarchical organizations like multinationals, are simply not structured with the complexity of Searle's Chinese Room.

comment by Dorikka · 2011-02-09T00:38:50.037Z · LW(p) · GW(p)

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

I don't understand why you think that the rest of your post suggests this. It appears to me that you're proposing that human (terminal?) values are universal to all intelligences at our level of intelligence, on the basis that humans and corporations share values; but this doesn't hold up, because corporations are composed of humans, so I think the natural state would be for them to value human values.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-09T00:52:01.265Z · LW(p) · GW(p)

I figured someone would say that, and it is a hypothesis worth considering, but I think it needs justification. Corporations are composed of humans, but they don't look like humans, or eat the things humans eat, or espouse human religions. Corporations are especially human-like in their values, and that needs explaining. The goals of a corporation don't overlap with the values of its employees. Values and goals are highly intertwined. I would not expect a corporation to acquire values from its employees without also acquiring their goals; and they don't acquire their employees' goals. They acquire goals that are analogous to human goals; but e.g. IBM does not have the goal "help Frieda find a husband" or "give Joe more money".

Replies from: timtyler
comment by timtyler · 2011-02-09T01:58:12.105Z · LW(p) · GW(p)

Both humans and corporations want more money. Their goals at least overlap.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-14T04:27:05.506Z · LW(p) · GW(p)

The corporation wants the corporation to have more money, and Joe wants Joe to have more money. Those are the same goals internally, but because the corporation's goal says "ACME Corporation" where Joe's says "Joe", it means the corporation didn't acquire Joe's goals via lateral transfer.

Replies from: timtyler
comment by timtyler · 2011-02-14T08:40:52.623Z · LW(p) · GW(p)

Normally, the corporation wants more money - because it was built by humans who themselves want more money. They build the corporation to want to make money itself - and then to pay them a wage - or dividends.

If the humans involved originally wanted cheese, the corporation would want cheese too. I think by considering this sort of thought experiment, it is possible to see that the human goals do get transferred across.

comment by knb · 2011-02-10T22:59:01.981Z · LW(p) · GW(p)

I don't think it is useful to call Ancient Egypt a UFAI, even though they ended up tiling the desert in giant useless mausoleums at an extraordinary cost in wealth and human lives. Similarly, the Aztecs fought costly wars to capture human slaves, most of whom were then wasted as blood sacrifices to the gods.

If any human group can be UFAI, then does the term UFAI have any meaning?

Replies from: Nornagest
comment by Nornagest · 2011-02-10T23:23:40.929Z · LW(p) · GW(p)

My understanding is that the human cost of the Ancient Egyptian mausoleum industry is now thought to be relatively modest. The current theory, supported by the recent discovery of workers' cemeteries, is that the famous monuments were generally built by salaried workers in good health, most likely during the agricultural off-season.

Definitely expensive, granted, but as a status indicator and ceremonial institution they've got plenty of company in human behavior.

There's some controversy over (ETA: the scale of) the Aztec sacrificial complex as well, but since that's entangled with colonial/anticolonial ideology I'd assume anything you hear about it is biased until proven otherwise.

Replies from: knb
comment by knb · 2011-02-11T01:52:07.911Z · LW(p) · GW(p)

There is no debate over whether the Aztecs engaged in mass human sacrifice. The main disagreement amongst academics is over the scale. The Aztecs themselves claimed sacrifices of over 40,000 people, but they obviously had good reason to lie (to scare enemies). Spanish and pre-Columbian Aztec sources agree that human sacrifice was widespread amongst the Aztecs.

Replies from: Nornagest
comment by Nornagest · 2011-02-11T01:59:49.042Z · LW(p) · GW(p)

You're quite right; I should have been more explicit. Edited.

comment by XiXiDu · 2011-02-09T11:46:33.021Z · LW(p) · GW(p)

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

By human values we mean how we treat things that are not part of the competitive environment.

The greatness of a nation and its moral progress can be judged by the way its animals are treated.

-- Mahatma Gandhi

Obviously a paperclip maximizer wouldn't punch you in the face if you could destroy it. But if it is stronger than all other agents and doesn't expect to ever have to prove its benevolence towards lesser agents, then there'll be no reason to care about them? The only reason I could imagine for a psychopathic agent to care about agents that are less powerful is if there is some benefit in being friendly towards them. For example if there are a lot of superhuman agents out there and general friendliness enables cooperation and makes you less likely to be perceived as a threat and subsequently allows you to use less resources to fight.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-14T04:11:03.198Z · LW(p) · GW(p)

By human values we mean how we treat things that are not part of the competitive environment.

I don't think I mean that. I also don't know where you're going with this observation.

Replies from: wedrifid
comment by wedrifid · 2011-02-14T04:28:48.996Z · LW(p) · GW(p)

I also don't know where you're going with this observation.

Roughly, that you can specify human values by supplying a diff from optimal selfish competition.

comment by blogospheroid · 2011-02-09T05:33:00.311Z · LW(p) · GW(p)

Another point to consider would be my Imperfect levers article and this one. I believe that the organizations that show the first ability to foom would foom effectively and spread their values around. This is not in any way new. I, of Indian origin, am writing in English and share more values with some Californian transhumanists than with my neighbours. If not for the previous fooms of the British Empire, the computer revolution and the internet, this would not have been possible.

The question is how close any of these organizations are to sociopathic rationality. Almost all of them exhibit Omohundro's basic drives. I would disagree with the premise that alliances, status, power, and resources are basic human values. They are instrumental values, subsets of the basic drives.

In organizations where a lot of decisions are being made on a mechanical basis, it is possible that some mechanism just takes over as long as it continues satisfying the incentive/hitting the button.

Replies from: Morendil
comment by Morendil · 2011-02-09T16:45:00.200Z · LW(p) · GW(p)

Almost all of them exhibit Omohundro's basic drives.

This remark deserves an article of its own, mapping each of Omohundro's claims to the observed behaviour of corporations.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-14T04:16:43.822Z · LW(p) · GW(p)

I can't even find what Omohundro you're talking about using Google.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-02-14T04:56:57.084Z · LW(p) · GW(p)

http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/

I don't know why Google didn't work for you -- I used "omohundro's basic drives" and a bunch of links came up.

comment by timtyler · 2011-02-09T02:03:45.613Z · LW(p) · GW(p)

Common instrumental values are in the air today.

The more values are found to be instrumental, the more the complexity-of-value thesis is eroded.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-14T04:07:02.896Z · LW(p) · GW(p)

What particular instrumental values are you thinking of?

comment by Risto_Saarelma · 2011-02-09T09:13:11.826Z · LW(p) · GW(p)

Charlie Stross seems to share this line of thought:

We are now living in a global state that has been structured for the benefit of non-human entities with non-human goals. They have enormous media reach, which they use to distract attention from threats to their own survival. They also have an enormous ability to support litigation against public participation, except in the very limited circumstances where such action is forbidden.

comment by sfb · 2011-02-09T05:51:23.394Z · LW(p) · GW(p)

I was expecting a post questioning who/what is really behind this project to make paperclips invisible.

Replies from: Blueberry
comment by Blueberry · 2011-02-09T10:43:42.955Z · LW(p) · GW(p)

Well, it's clear who benefits. Tiling the universe with invisible paperclips is less noticeable and less likely to start raising concerns.

comment by PhilGoetz · 2011-02-25T06:02:18.339Z · LW(p) · GW(p)

Michael Vassar raised some of the same points in his talk at H+, 2 weeks before I posted this.

comment by false_vacuum · 2011-02-09T02:22:22.983Z · LW(p) · GW(p)

Corporations (and governments) are not usually regarded as sharing human values by those who consider the question. This brief blog post is a good example. I would certainly argue that the 'U' is appropriate; but then I tend to regard 'UFAI' as meaning 'the complement of FAI in mind space'.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-14T04:18:30.818Z · LW(p) · GW(p)

Those people are considering a different question, which is, "Do corporations treat humans the way humans treat humans?" Completely different question.

If corporations develop values that resemble those of humans by convergent evolution (which is what I was suggesting), we would expect them to treat humans the way humans treat, say, cattle.

comment by Matt_Simpson · 2011-02-09T00:45:09.971Z · LW(p) · GW(p)

Corporations develop values similar to human values. They value loyalty, alliances, status, resources, independence, and power. They compete with other corporations, and face the same problems people do in establishing trust, making and breaking alliances, weighing the present against the future, and game-theoretic strategies. They even went through stages of social development similar to those of people, starting out as cutthroat competitors, and developing different social structures for cooperation (oligarchy/guild, feudalism/keiretsu, voters/stockholders, criminal law/contract law). This despite having different physicality and different needs.

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

It seems more likely that corporations act like humans because corporations are run by humans. I've yet to meet an alien CEO or board member!

edit: and then I realized Dorikka said it first.

edit: and TheOtherDave.

Replies from: Dorikka
comment by Dorikka · 2011-02-09T00:52:27.449Z · LW(p) · GW(p)

...I am laughing hard right now.

comment by Vladimir_Nesov · 2011-02-09T20:34:30.629Z · LW(p) · GW(p)

Is my grandma an Unfriendly AI?

Replies from: Alicorn
comment by Alicorn · 2011-02-09T20:50:30.946Z · LW(p) · GW(p)

Your grandma probably isn't artificial.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-02-09T20:55:18.992Z · LW(p) · GW(p)

She was designed by evolution, so could just as well be considered artificial. And did I mention the Unfriendly AI part?

Replies from: wedrifid
comment by wedrifid · 2011-02-10T11:03:09.141Z · LW(p) · GW(p)

She was designed by evolution, so could just as well be considered artificial.

Not when using the standard meanings of either of those words.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-02-10T11:27:41.314Z · LW(p) · GW(p)

But what do you mean by "meaning"? Not that naive notion, I hope?

Edit: This was a failed attempt at sarcasm, see the parenthetical in this comment.

Replies from: wedrifid
comment by wedrifid · 2011-02-10T11:43:04.471Z · LW(p) · GW(p)

But what do you mean by "meaning"? Not that naive notion, I hope?

Question: How many legs does a dog have if you call the tail a leg?

Answer: I don't care; your grandma isn't artificial just because you call the natural "artificial". Presenting a counter-intuitive conclusion based on basically redefining the language isn't "deep". Sometimes things are just simple.

Perhaps you have another point to make about the relative unimportance of the distinction between 'natural' and 'artificial' in the grand scheme of things? There is certainly a point to be made there, and one that could be made without just using the words incorrectly.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-02-10T14:02:51.242Z · LW(p) · GW(p)

There is certainly a point to be made there, and one that could be made without just using the words incorrectly.

But that would be no fun.

(For the perplexed: see No Evolutions for Corporations or Nanodevices. Attaching too many unrelated meanings to a word is a bad idea that leads to incorrect implicit inferences. Meaning is meaning, even if we don't quite know what it is, grandma and corporations are not Unfriendly AIs, and natural selection doesn't produce artificial things.)

Replies from: timtyler, PhilGoetz
comment by timtyler · 2012-05-15T10:55:44.136Z · LW(p) · GW(p)

natural selection doesn't produce artificial things.

It does, but indirectly.

comment by PhilGoetz · 2011-02-14T04:07:59.990Z · LW(p) · GW(p)

Corporations are artificial, and they are intelligent. Therefore, they are artificial intelligences.

(ADDED: Actually this is an unimportant semantic point. What's important is how much we can learn about something that we all agree we can call "AI", from corporations. Deciding this on the basis of whether you can apply the name "AI" to them is literally thinking in circles.)

comment by Emile · 2011-02-09T10:46:59.237Z · LW(p) · GW(p)

Corporations today are not very good AI architectures - they're good at passing information down a hierarchy, but poor at passing it up, and even worse at adding up small correlations in the evaluations of their agents.

I'd be cautious about the use of "good" here - the things you describe mostly seem "good" from the point of view of someone who cares about the humans being used by the corporations; it's not nearly as clear that they are "good" (bringing more benefits than downsides) for the final goals of the corporation.

If you were talking about, say, a computer system that balances water circulation in a network of pipes, and has a bunch of "local" subsystems with more-or-less reliable measures for flow, damage to the installation, leaks, and power-efficiency of pumps, you might care less about things like which way the information flows as long as the overall system works well. You wouldn't worry about whether a particular node had its feelings hurt by the central node ignoring its information (which may be because the central node has limited bandwidth, processing power, and has to deal with high uncertainty about which nodes provide accurate information).

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-14T04:13:04.593Z · LW(p) · GW(p)

I'd be cautious about the use of "good" here - the things you describe mostly seem "good" from the point of view of someone who cares about the humans being used by the corporations; it's not nearly as clear that they are "good" (bringing more benefits than downsides) for the final goals of the corporation.

Corporations are not good at using bottom-up information for their own benefit. Many companies have many employees who could optimize their work better, or know problems that need to be solved; yet nothing is done about it, and there is no mechanism to propagate this knowledge upward, and no reward given to the employees if they transmit their knowledge or if they deal with the problem themselves.

comment by timtyler · 2011-02-09T01:56:01.449Z · LW(p) · GW(p)

The differences between a 90% human / 10% machine company and a 10% human / 90% machine company may be instructive if viewed from this perspective.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-02-14T04:24:40.427Z · LW(p) · GW(p)

I don't understand what you're getting at.

My company has about 300 people, and 2500 computers. And the computers work all night. Are we 90% machine?

Replies from: timtyler
comment by timtyler · 2011-02-14T08:47:08.997Z · LW(p) · GW(p)

There are various ways of measuring. My proposal for a metric is here:

http://machine-takeover.blogspot.com/2009/07/measuring-machine-takeover.html

I propose weighing them:

There are a variety of ways of measuring how much of the resource pie is allocated to machines.

One way that would appeal to economists is to look at the cost of constructing machines - and compare that to the cost of constructing humans. That would give an estimate of how much society is willing to spend on these different elements of the biosphere.

Here I will advocate what I believe to be a simpler method of measuring the proportion of machines on the planet. I think we should weigh them.

...in particular, weighing their sensor, motor, and computing elements.

...so "no" - not yet.

comment by Will_Newsome · 2011-02-09T11:58:53.364Z · LW(p) · GW(p)

And yet it seems likely that your typical corporation, even your typical monopoly, does much more good for the world than a typical human would do with a similar amount of optimization power. Even billionaires who donate to charity probably don't generate as much utility for mankind as your typical aggressively run Chinese manufacturing company.

Replies from: PhilGoetz, benelliott
comment by PhilGoetz · 2011-02-14T04:05:40.550Z · LW(p) · GW(p)

Is this true if we compare corporations and people on a per-capitalization (or per-resource-ownership) basis? Maybe not, if we count the resources of the people making up the corporations.

comment by benelliott · 2011-02-10T13:33:40.601Z · LW(p) · GW(p)

This is quite non-intuitive, although I can see why it might be true. Do you have any evidence?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-02-10T14:38:04.499Z · LW(p) · GW(p)

I'm confused, and I would have to think very carefully about how to avoid double counting, and about coordinated decision problems, and blahhh. I'll read a good book on microeconomics in the next week and perhaps my intuitions will become better tuned.