Using existing Strong AIs as case studies
post by ialdabaoth · 2012-10-16T22:59:56.428Z · LW · GW · Legacy · 14 comments
I would like to put forth the argument that we already have multiple human-programmed "Strong AIs" operating among us, that they already exhibit clearly "intelligent", rational, self-modifying, goal-seeking behavior, and that we should systematically study these entities before engaging in any particularly detailed debates about "designing" AI with particular goals.
They're called "Bureaucracies".
Essentially, a modern bureaucracy (whether it is operating as the decision-making system for a capitalist corporation, a government, a non-profit charity, or a political party) is an artificial intelligence that uses human brains as its basic hardware and firmware, allowing it to "borrow" a lot of human computational algorithms to do its own processing.
The fact that bureaucratic decisions can be traced back to individual human decisions is irrelevant; even within a human or a computer AI, a decision can in principle be traced back to single neurons or subroutines. The point is that bureaucracies have evolved to guide and exploit human decision-making towards their own ends, often to the detriment of the individual humans who comprise the bureaucracy.
Note that when I say "I would like to put forth the argument", I am at least partially admitting that I'm speaking from a hunch rather than from a large collection of empirical data; part of the point of putting this forward is to acknowledge that I'm not yet very good at "avalanche of empirical evidence"-style argument. But I would *greatly* appreciate it if anyone who suspects they can produce evidence for or against this idea would present it, so I can solidify my reasoning.
As a "step 2": assuming the evidence weighs in towards my notion, what would it take to develop a systematic approach to studying bureaucracy from the perspective of AI, or even xenosapience, such that bureaucracies could be either "programmed" or communicated with directly by the human agents who comprise them (and ideally by the larger pool of human stakeholders who are forced to interact with them)?
14 comments
comment by fubarobfusco · 2012-10-17T01:04:29.911Z · LW(p) · GW(p)
What sort of evidence would you expect to see if a bureaucracy were a "Strong AI" that you would expect not to see if it were not a "Strong AI"?
I am not asking you to come up with an "avalanche of empirical evidence". I am asking you to distinguish your claim from a floating belief, belief-as-attire, or the like. I am asking what — in anticipated experiences — would it mean for this idea to be true?
↑ comment by ialdabaoth · 2012-10-17T01:07:43.887Z · LW(p) · GW(p)
I would expect a bureaucracy to be capable of self-reflection and a self-identity that exist independently of its constituent (human) decision-making modules. I would expect it to have a kind of "team spirit" or "internal integrity" that defines how it goes about solving problems, and which artificially constrains its decision tree away from "purely optimal" and towards "maintaining (my) personal identity".
In other words, I would expect the bureaucracy to have an identifiable "personality".
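A minimal toy sketch of what such an identity constraint might look like (all names, weights, and numbers here are hypothetical illustrations, not anything from the thread): an optimizer that trades raw utility against an "identity cost" will systematically pick identity-preserving actions over purely optimal ones.

```python
# Toy model: an agent whose choices are penalized for deviating from
# its self-image. All names and numbers are hypothetical.

def identity_constrained_choice(actions, utility, identity_cost, weight=0.5):
    """Pick the action maximizing utility minus a penalty for deviating
    from the agent's self-image. With weight=0 this is a pure optimizer;
    larger weights make it sacrifice utility to preserve identity."""
    return max(actions, key=lambda a: utility(a) - weight * identity_cost(a))

# Example: a bureaucracy choosing between an efficient reform and
# business-as-usual. The reform scores higher on raw utility but
# conflicts with "how we do things here".
actions = ["adopt_reform", "business_as_usual"]
utility = {"adopt_reform": 1.0, "business_as_usual": 0.6}
identity_cost = {"adopt_reform": 1.2, "business_as_usual": 0.0}

choice = identity_constrained_choice(
    actions, utility.__getitem__, identity_cost.__getitem__
)
print(choice)  # -> "business_as_usual": identity wins over raw utility
```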
↑ comment by Desrtopa · 2012-10-18T15:58:49.679Z · LW(p) · GW(p)
This sounds very little like something I would expect someone who knew what a strong AI was but had never observed a bureaucracy to come up with as a way to determine whether bureaucracies are strong AIs or non-strong AIs.
Not everything that is capable of self-reflection and self-identity is a strong AI; indeed I think it's reasonable to say that out of the sample of observed things capable of self-reflection and self-identity, none of them are strong AIs.
Bureaucracies don't even fulfill the basic strong AI criterion of being smarter than a human being. They may perform better than an individual in certain applications, but then, so can weak AI, and bureaucracies often engage in behavior which would be regarded as insane if engaged in by an individual with the same goal.
↑ comment by ialdabaoth · 2012-10-18T20:20:30.456Z · LW(p) · GW(p)
> This sounds very little like something I would expect someone who knew what a strong AI was but had never observed a bureaucracy to come up with as a way to determine whether bureaucracies are strong AIs or non-strong AIs.
That's very plausible; all of my AI research has been self-directed and self-taught, entirely outside of academia. It is highly probable that I have some very fundamental misconceptions about what it is I think I'm doing.
As I mentioned in the original post, I fully admit that I'm likely wrong; but presenting this in a "comment if you like" format to people far more likely than me to know seemed like the best way to challenge my assumption without inconveniencing anyone who might actually have something more important to do than schooling a noob.
↑ comment by fubarobfusco · 2012-10-17T01:20:55.178Z · LW(p) · GW(p)
I'm not sure how to tell what sorts of groups of humans have self-reflection. For animals, including human infants, we can use the mirror test. How about for bureaucracies?
I'm not sure whether "team spirit" might be a projection in the minds of members or observers; or in particular a sort of belief-as-cheering for the psychological benefit of members (and opponents). How would we tell?
Likewise, how would we inquire into a bureaucracy's decision tree? I don't know how to ask a corporation to play chess.
↑ comment by ialdabaoth · 2012-10-17T01:26:47.329Z · LW(p) · GW(p)
Bald assertion: the fact that "team spirit" might be a mere projection in the minds of members is as irrelevant to whether it causes self-reflection as the fact that "self-awareness" might be a mere consequence of synapse patterns.
Just because we're more intimately familiar with what "team spirit" feels like from the inside than with what having your axons wired up to someone else's dendrites feels like, doesn't mean that "team spirit" isn't part of an actual consciousness-generating process.
↑ comment by fubarobfusco · 2012-10-17T01:35:39.158Z · LW(p) · GW(p)
"You can't prove it's not!" arguments...?
Recommended reading: the Mysterious Answers to Mysterious Questions sequence.
↑ comment by ialdabaoth · 2012-10-17T01:42:08.246Z · LW(p) · GW(p)
No, I was presenting a potential counter to the idea that "I'm not sure whether 'team spirit' might be a projection in the minds of members or observers".
It might or might not be a projection in the minds of observers, but I don't think that question is relevant to the ones I'm asking, in the same sense that "are we conscious because we have a homunculus-soul inside of us, or because neurons give rise to consciousness?" isn't relevant to the question "are we conscious?"
We know we are conscious as a bald fact, and we accept that other humans are conscious whenever we reject solipsism; we happen to be finding out the manner in which we are conscious as a result of our scientific curiosity.
But accepting an entity as "conscious" / "self-aware" / "sapient" does not require that we understand the mechanisms that generate its behavior; only that we recognize that it has behavior that fits certain criteria.
comment by bogus · 2012-10-17T17:59:53.001Z · LW(p) · GW(p)
This overall topic is known as collective intelligence, where the word "collective" is intended as a contrast to both individual intelligence and AI. There are some folks studying rationality in organizations and management, most notably Peter Senge, who first formulated the idea of a learning organization as a rough equivalent of "rationality" as such.
↑ comment by wedrifid · 2012-10-18T00:20:47.110Z · LW(p) · GW(p)
> This overall topic is known as collective intelligence, where the word "collective" is intended as a contrast to both individual intelligence and AI.
It is not intended to contrast with artificial intelligence; it is intended to contrast with individual intelligence, whether that individual intelligence is artificial or not. I note this because my Masters research in (narrow) AI was, in fact, focused on collective intelligence algorithms.
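For readers unfamiliar with that narrow sense, a classic example of a collective intelligence algorithm is particle swarm optimization: many simple individuals, each sharing its best finding with the swarm, jointly optimize a function. A minimal sketch (an illustration of the general technique, not anything from wedrifid's research):

```python
import random

def particle_swarm_minimize(f, dim=2, n_particles=20, iters=100):
    """Minimize f over [-5, 5]^dim with a basic particle swarm:
    each particle is steered by its own best point and the swarm's."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]       # each particle's best-so-far position
    gbest = min(pbest, key=f)[:]      # the swarm's best-so-far position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # own memory
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # swarm memory
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# The swarm converges toward the minimum of a simple bowl at the origin.
print(particle_swarm_minimize(lambda x: sum(v * v for v in x)))
```

No individual particle is intelligent; the "intelligence" lives in the information-sharing between them, which is the contrast with individual intelligence being drawn above.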
comment by Richard_Kennaway · 2012-10-17T12:23:35.346Z · LW(p) · GW(p)
Cosma Shalizi believes that this is the case. The Singularity (he says) has already happened, not merely in the vague sense that fire/printing/the industrial revolution/computers/whatever have changed things beyond recognition, but in the stronger sense that we are now in the grasp of "vast, inhuman distributed systems of information-processing" with "an implacable drive ... to expand, to entrain more and more of the world within their own sphere", and so on. UFAI (he does not use this term) is here, and we are living in its torture chamber. More here. See footnote 1 of the latter for a brief indication of where he's coming from on this.
A similar trope is present in pop neuro-/ev-psych. The inhuman system of information-processing that imprisons us while duping us into the illusion of freedom is our physical substrate and the evolutionary processes that created it. "We" can do nothing of ourselves, "we" are mere epiphenomenal scum on the surface of forces vaster and more alien than we can comprehend. We are as rats are in the world of humans: we can never amount to anything of real consequence, never have, never will.
It is also present in some forms of Christianity. God and the Devil are warring UFAIs and we are merely pieces in their incomprehensible game. "As flies to wanton boys are we to th' gods, They kill us for their sport."
Which is simply to point out that the idea that we are epiphenomenal scum on the surface of forces vaster and more alien than we can comprehend pops up in multiple contexts. Strangely, though, the alien forces are completely different each time.
comment by [deleted] · 2012-10-17T11:00:39.243Z · LW(p) · GW(p)
> I would like to put forth the argument that we already have multiple human-programmed "Strong AIs" operating among us, that they already exhibit clearly "intelligent", rational, self-modifying, goal-seeking behavior, and that we should systematically study these entities before engaging in any particularly detailed debates about "designing" AI with particular goals.
What would be the benefit of studying them? Setting aside the question of whether bureaucracies are "Strong AI" (if someone doesn't agree then we can claim that they are "generalized strong AI" or something), what additional information about general laws of intelligence could be gleaned from studying two examples of intelligence rather than only the human one? There are already fields like management or organizational psychology that study the question of designing bureaucracies. Algorithms of corporations are very simple and consist mostly of calls to human black boxes.
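A toy sketch of that last claim, with every role, threshold, and prompt invented for illustration: the control flow of a corporate spending-approval "algorithm" is trivial routing, and every consequential step is a call out to a human black box.

```python
# Hypothetical sketch: a corporate approval procedure as an algorithm.
# The control flow is trivial; all real computation happens inside
# the human black boxes (stubbed out here as input prompts).

def ask_human(role, question):
    """Stand-in for a human decision-maker: in a real bureaucracy this
    is a meeting, a form, or an email thread, not a function call."""
    return input(f"[{role}] {question} (y/n): ").strip().lower() == "y"

def approve_purchase(amount):
    if amount < 1_000:
        return ask_human("manager", f"Approve ${amount}?")
    if amount < 100_000:
        return (ask_human("manager", f"Approve ${amount}?")
                and ask_human("finance", f"Budget available for ${amount}?"))
    # Large purchases escalate: the "algorithm" is just routing.
    return (ask_human("finance", f"Budget available for ${amount}?")
            and ask_human("board", f"Authorize ${amount}?"))

if __name__ == "__main__":
    print("Approved" if approve_purchase(50_000) else "Denied")
```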
comment by Risto_Saarelma · 2012-10-17T09:32:22.524Z · LW(p) · GW(p)
A creature's mind can be quite simple even though its limbs are very complex.